| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
244491558 | pes2o/s2orc | v3-fos-license | Rainwater-driven microbial fuel cells for power generation in remote areas
The possibility of using rainwater as a sustainable anolyte in an air-cathode microbial fuel cell (MFC) is investigated in this study. The results indicate that the proposed MFC can work within a wide temperature range (from 0 to 30°C) and under aerobic or anaerobic conditions. However, the rainwater season has a distinct impact. Under anaerobic conditions, the summer rainwater achieves a promising open circuit potential (OCP) of 553 ± 2 mV without addition of nutrients at ambient temperature, while addition of nutrients leads to an increase in the cell voltage to 763 ± 3 and 588 ± 2 mV at 30°C and ambient temperature, respectively. The maximum OCP for the winter rainwater (492 ± 1.5 mV) is obtained when the reactor is exposed to the air (aerobic conditions) at ambient temperature. Furthermore, the winter rainwater MFC generates a maximum power output of 7 ± 0.1 mWm−2 at a corresponding current density of 44 ± 0.7 mAm−2 at 30°C. At ambient temperature, by contrast, the maximum output power is obtained with the summer rainwater (7.2 ± 0.1 mWm−2 at 26 ± 0.5 mAm−2). Moreover, investigation of the bacterial diversity indicates that Lactobacillus spp. are the dominant electroactive bacteria in the summer rainwater, while Staphylococcus spp. are the main electroactive bacteria in the winter rainwater. The cyclic voltammetry analysis confirms that the electrons are delivered directly from the bacterial biofilm to the anode surface, without mediators. Overall, this study opens a new avenue for using a novel sustainable type of MFC driven by rainwater.
Do you have any ethical concerns with this paper? No
Have you any concerns about statistical analyses in this paper? No
Recommendation?
Accept with minor revision (please list in comments) Comments to the Author(s) I think that the idea presented in the paper is very original and worth exploring further in the future.
The only aspect I think should be considered in the paper is an estimation of the energy produced by the fuel cells. The authors focus on monitoring the voltage produced in the fuel cell, but as the intended use of such a device is to provide power, voltage is not sufficient. The authors measure an I-V curve every now and then, but this is not sufficient, as the power delivered by a system with no forced convection will not be constant. Also, the lazy metabolism of bacteria may prevent the system from producing the current reported in the I-V curve in a continuous manner. Therefore, I think that a discussion and some estimations of the energy generated in the cells over such a long period of time are mandatory. I agree that sensors can operate with only a few mW, but for how long? How big should an FC be to continuously provide power to such sensors?
Are there any pictures of the fuel cells used in the experiment? Can the authors also describe how an operative system in the field would operate? Would it collect only some rain water at the beginning of the season or would the system accept rain through the whole period?
Decision letter (RSOS-210996.R0) We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Dr Amen
The Editors assigned to your paper RSOS-210996 "Rainwater-driven Microbial Fuel Cells for Power Generation in the Remote Areas" have now received comments from reviewers and would like you to revise the paper in accordance with the reviewer comments and any comments from the Editors. Please note this decision does not guarantee eventual acceptance.
We invite you to respond to the comments supplied below and revise your manuscript. Below the referees' and Editors' comments (where applicable) we provide additional requirements. Final acceptance of your manuscript is dependent on these requirements being met. We provide guidance below to help you prepare your revision.
We do not generally allow multiple rounds of revision so we urge you to make every effort to fully address all of the comments at this stage. If deemed necessary by the Editors, your manuscript will be sent back to one or more of the original reviewers for assessment. If the original reviewers are not available, we may invite new reviewers.
Please submit your revised manuscript and required files (see below) no later than 21 days from today's (ie 07-Sep-2021) date. Note: the ScholarOne system will 'lock' if submission of the revision is attempted 21 or more days after the deadline. If you do not think you will be able to meet this deadline please contact the editorial office immediately.
Please note article processing charges apply to papers accepted for publication in Royal Society Open Science (https://royalsocietypublishing.org/rsos/charges). Charges will also apply to papers transferred to the journal from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (https://royalsocietypublishing.org/rsos/chemistry). Fee waivers are available but must be requested when you submit your revision (https://royalsocietypublishing.org/rsos/waivers). Associate Editor Comments to Author: Two reviewers have provided commentary on your work; please ensure that your revision carefully addresses their comments and, where additional clarifications or content are requested, you include them in the manuscript resubmission, too. Please note that the reviewers will be invited to assess your revision.
Reviewer comments to Author: Reviewer: 1 Comments to the Author(s) It is interesting that Amen et al. carried out an investigation of rainwater as the anolyte in an air-cathode microbial fuel cell (MFC) to generate electricity. They found that the season and oxygen might be important factors affecting the performance of the rainwater-driven MFC. This approach is very useful for powering sensors in mountain and land areas. However, I still have some confusion about the design and the explanations. Why did the authors not inoculate and start up the MFC? Is there no further microbial and chemical information on the rainwater? The authors should do more work on the microbial names and their formats. Overall, it cannot be accepted for publication before a major revision. Here are my specific comments.
Abstract: P2, line 5, 'aerobic or anaerobic conditions' is not clear; P2, lines 14-15, the authors should confirm all microbial names and their formats, as almost all names are incorrect, such as 'lactobacillus sp.' and 'Staphylococcus sp. is the main electroactive bacteria'; P2, lines 15-17, 'cyclic voltammetry analysis confirms that the electrons are delivered directly from the bacterial biofilm to the anode surface and without mediators.' Do your data support this conclusion?
Introduction: P4, line 8, 'It is known that rainwater contains several kinds of microorganisms collected from the atmosphere.' Who knows it? P4, line 18, 'some bacteria can catalyze ice formation at temperatures near 19 -2 °C'? I am confused about this description; P5, lines 4-5, wrong formats.
Materials and Methods: P5, lines 12-13, the authors state that rainwater 'was transferred to 1 L sterilised bottle for chemical and microbiological analyses.'; however, we did not find the chemical and microbiological results. Section '2.3. MFC Construction and operation': an MFC needs inoculation and a long time to start up; however, you did not address the methods, and there are no results. P6, lines 19-23, the nutrient level in the control is too high ('with mixture of rainwater and Nutrient broth media (1:1 ratio v/v)'); the aerobic and anaerobic conditions are not clear. P7, lines 18-22, the experimental design is not acceptable; the authors should analyze the biofilm directly, not culture the biofilm. P8, lines 17-19, what was the purpose of amplifying the 16S rRNA gene?
Results and Discussion: P9, line 10 and elsewhere, 'the ambient temperature' should be given as a temperature range. P10, section '3.2. Aerobic Conditions Effect', how were the anaerobic conditions maintained? What is the difference from the aerobic condition? P19, section '3.7.2. Bacterial Diversity of RMFC', confirm all formats of the microbial names, such as lactobacillus sp., staphylococcus sp., Staphylococcus sp.; more work is needed. Table 2, Table S2 and Table S3 should also be corrected in format.
Reviewer: 2 Comments to the Author(s) I think that the idea presented in the paper is very original and worth exploring further in the future.
The only aspect I think should be considered in the paper is an estimation of the energy produced by the fuel cells. The authors focus on monitoring the voltage produced in the fuel cell, but as the intended use of such a device is to provide power, voltage is not sufficient. The authors measure an I-V curve every now and then, but this is not sufficient, as the power delivered by a system with no forced convection will not be constant. Also, the lazy metabolism of bacteria may prevent the system from producing the current reported in the I-V curve in a continuous manner. Therefore, I think that a discussion and some estimations of the energy generated in the cells over such a long period of time are mandatory. I agree that sensors can operate with only a few mW, but for how long? How big should an FC be to continuously provide power to such sensors?
Are there any pictures of the fuel cells used in the experiment? Can the authors also describe how an operative system in the field would operate? Would it collect only some rain water at the beginning of the season or would the system accept rain through the whole period?
===PREPARING YOUR MANUSCRIPT===
Your revised paper should include the changes requested by the referees and Editors of your manuscript. You should provide two versions of this manuscript and both versions must be provided in an editable format: one version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); a 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. This version will be used for typesetting if your manuscript is accepted.
Please ensure that any equations included in the paper are editable text and not embedded images.
Please ensure that you include an acknowledgements section before your reference list/bibliography. This should acknowledge anyone who assisted with your work, but does not qualify as an author per the guidelines at https://royalsociety.org/journals/ethics-policies/openness/.
While not essential, it will speed up the preparation of your manuscript proof if accepted if you format your references/bibliography in Vancouver style (please see https://royalsociety.org/journals/authors/author-guidelines/#formatting). You should include DOIs for as many of the references as possible.
If you have been asked to revise the written English in your submission as a condition of publication, you must do so, and you are expected to provide evidence that you have received language editing support. The journal would prefer that you use a professional language editing service and provide a certificate of editing, but a signed letter from a colleague who is a native speaker of English is acceptable. Note the journal has arranged a number of discounts for authors using professional language editing services (https://royalsociety.org/journals/authors/benefits/language-editing/).
===PREPARING YOUR REVISION IN SCHOLARONE===
To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre; this may be accessed by clicking on "Author" in the dark toolbar at the top of the page (just below the journal name). You will find your manuscript listed under "Manuscripts with Decisions". Under "Actions", click on "Create a Revision".
Attach your point-by-point response to referees and Editors at Step 1 'View and respond to decision letter'. This document should be uploaded in an editable file type (.doc or .docx are preferred). This is essential.
Please ensure that you include a summary of your paper at Step 2 'Type, Title, & Abstract'. This should be no more than 100 words to explain to a non-scientific audience the key findings of your research. This will be included in a weekly highlights email circulated by the Royal Society press office to national UK, international, and scientific news outlets to promote your work.
At Step 3 'File upload' you should include the following files: --Your revised manuscript in editable file format (.doc, .docx, or .tex preferred). You should upload two versions: 1) One version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) A 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. --If you are requesting a discretionary waiver for the article processing charge, the waiver form must be included at this step.
--If you are providing image files for potential cover images, please upload these at this step, and inform the editorial office you have done so. You must hold the copyright to any image provided.
--A copy of your point-by-point response to referees and Editors. This will expedite the preparation of your proof.
At Step 6 'Details & comments', you should review and respond to the queries on the electronic submission form. In particular, we would ask that you do the following: --Ensure that your data access statement meets the requirements at https://royalsociety.org/journals/authors/author-guidelines/#data. You should ensure that you cite the dataset in your reference list. If you have deposited data etc in the Dryad repository, please include both the 'For publication' link and 'For review' link at this stage.
--If you are requesting an article processing charge waiver, you must select the relevant waiver option (if requesting a discretionary waiver, the form should have been uploaded at Step 3 'File upload' above).
--If you have uploaded ESM files, please ensure you follow the guidance at https://royalsociety.org/journals/authors/author-guidelines/#supplementary-material to include a suitable title and informative caption. An example of appropriate titling and captioning may be found at https://figshare.com/articles/Table_S2_from_Is_there_a_trade-off_between_peak_performance_and_performance_breadth_across_temperatures_for_aerobic_scope_in_teleost_fishes_/3843624.
At Step 7 'Review & submit', you must view the PDF proof of the manuscript before you will be able to submit the revision. Note: if any parts of the electronic submission form have not been completed, these will be noted by red message boxes.
Author's Response to Decision Letter for (RSOS-210996.R0) See Appendix A.
Are the interpretations and conclusions justified by the results? Yes
Is the language acceptable? Yes
Recommendation? Accept with minor revision (please list in comments)
Comments to the Author(s) I received the response of the authors to my concerns; almost all questions have been addressed and resolved. One more thing: you should know that the microbial community structure of the cultured biofilm is totally different from the real structure of the anodic biofilm. You should also pay more attention to the formats of the microbial names; for example, the strains in 'Table 4' should not be in italics, and the 's' should be capitalized in '16s' (i.e. 16S).
Review form: Reviewer 2
Is the manuscript scientifically sound in its present form? Yes
Do you have any ethical concerns with this paper? No
Have you any concerns about statistical analyses in this paper? No
Recommendation?
Accept as is
Comments to the Author(s)
The paper can now be published
Decision letter (RSOS-210996.R1)
We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Dr Amen
On behalf of the Editors, we are pleased to inform you that your Manuscript RSOS-210996.R1 "Rainwater-driven Microbial Fuel Cells for Power Generation in the Remote Areas" has been accepted for publication in Royal Society Open Science subject to minor revision in accordance with the referees' reports. Please find the referees' comments along with any feedback from the Editors below my signature.
We invite you to respond to the comments and revise your manuscript. Below the referees' and Editors' comments (where applicable) we provide additional requirements. Final acceptance of your manuscript is dependent on these requirements being met. We provide guidance below to help you prepare your revision.
Please submit your revised manuscript and required files (see below) no later than 7 days from today's (ie 22-Oct-2021) date. Note: the ScholarOne system will 'lock' if submission of the revision is attempted 7 or more days after the deadline. If you do not think you will be able to meet this deadline please contact the editorial office immediately.
Please note article processing charges apply to papers accepted for publication in Royal Society Open Science (https://royalsocietypublishing.org/rsos/charges). Charges will also apply to papers transferred to the journal from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (https://royalsocietypublishing.org/rsos/chemistry). Fee waivers are available but must be requested when you submit your revision (https://royalsocietypublishing.org/rsos/waivers).
Thank you for submitting your manuscript to Royal Society Open Science and we look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch.
Kind regards, Royal Society Open Science Editorial Office Royal Society Open Science openscience@royalsociety.org on behalf of Pete Smith (Subject Editor) openscience@royalsociety.org
Reviewer comments to Author: Reviewer: 1 Comments to the Author(s) I received the response of the authors to my concerns; almost all questions have been addressed and resolved. One more thing: you should know that the microbial community structure of the cultured biofilm is totally different from the real structure of the anodic biofilm. You should pay more attention to the formats of the microbial names; for example, the strains in 'Table 4' should not be in italics, and the 's' should be capitalized in '16s'.
===PREPARING YOUR MANUSCRIPT===
You should provide two versions of this manuscript and both versions must be provided in an editable format: one version should clearly identify all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); a 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. This version will be used for typesetting.
Please ensure that any equations included in the paper are editable text and not embedded images.
Please ensure that you include an acknowledgements section before your reference list/bibliography. This should acknowledge anyone who assisted with your work, but does not qualify as an author per the guidelines at https://royalsociety.org/journals/ethics-policies/openness/.
While not essential, it will speed up the preparation of your manuscript proof if you format your references/bibliography in Vancouver style (please see https://royalsociety.org/journals/authors/author-guidelines/#formatting). You should include DOIs for as many of the references as possible.
If you have been asked to revise the written English in your submission as a condition of publication, you must do so, and you are expected to provide evidence that you have received language editing support. The journal would prefer that you use a professional language editing service and provide a certificate of editing, but a signed letter from a colleague who is a proficient user of English is acceptable. Note the journal has arranged a number of discounts for authors using professional language editing services (https://royalsociety.org/journals/authors/benefits/language-editing/).
===PREPARING YOUR REVISION IN SCHOLARONE===
To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre; this may be accessed by clicking on "Author" in the dark toolbar at the top of the page (just below the journal name). You will find your manuscript listed under "Manuscripts with Decisions". Under "Actions", click on "Create a Revision".
Attach your point-by-point response to referees and Editors at the 'View and respond to decision letter' step. This document should be uploaded in an editable file type (.doc or .docx are preferred). This is essential, and your manuscript will be returned to you if you do not provide it.
Please ensure that you include a summary of your paper at the 'Type, Title, & Abstract' step. This should be no more than 100 words to explain to a non-scientific audience the key findings of your research. This will be included in a weekly highlights email circulated by the Royal Society press office to national UK, international, and scientific news outlets to promote your work. An effective summary can substantially increase the readership of your paper.
At the 'File upload' step you should include the following files: --Your revised manuscript in editable file format (.doc, .docx, or .tex preferred). You should upload two versions: 1) One version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) A 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. --If you are requesting a discretionary waiver for the article processing charge, the waiver form must be included at this step.
--If you are providing image files for potential cover images, please upload these at this step, and inform the editorial office you have done so. You must hold the copyright to any image provided.
--A copy of your point-by-point response to referees and Editors. This will expedite the preparation of your proof.
At the 'Details & comments' step, you should review and respond to the queries on the electronic submission form. In particular, we would ask that you do the following: --Ensure that your data access statement meets the requirements at https://royalsociety.org/journals/authors/author-guidelines/#data. You should ensure that you cite the dataset in your reference list. If you have deposited data etc in the Dryad repository, please only include the 'For publication' link at this stage. You should remove the 'For review' link.
--If you are requesting an article processing charge waiver, you must select the relevant waiver option (if requesting a discretionary waiver, the form should have been uploaded, see 'File upload' above).
--If you have uploaded any electronic supplementary material (ESM) files, please ensure you follow the guidance at https://royalsociety.org/journals/authors/author-guidelines/#supplementary-material to include a suitable title and informative caption. An example of appropriate titling and captioning may be found at https://figshare.com/articles/Table_S2_from_Is_there_a_trade-off_between_peak_performance_and_performance_breadth_across_temperatures_for_aerobic_scope_in_teleost_fishes_/3843624. At the 'Review & submit' step, you must view the PDF proof of the manuscript before you will be able to submit the revision. Note: if any parts of the electronic submission form have not been completed, these will be noted by red message boxes; you will need to resolve these errors before you can submit the revision.
Decision letter (RSOS-210996.R2)
We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Dr Amen, I am pleased to inform you that your manuscript entitled "Rainwater-driven Microbial Fuel Cells for Power Generation in the Remote Areas" is now accepted for publication in Royal Society Open Science.
Please remember to make any data sets or code libraries 'live' prior to publication, and update any links as needed when you receive a proof to check; for instance, from a private 'for review' URL to a publicly accessible 'for publication' URL. It is good practice to also add data sets, code and other digital materials to your reference list.
Our payments team will be in touch shortly if you are required to pay a fee for the publication of the paper (if you have any queries regarding fees, please see https://royalsocietypublishing.org/rsos/charges or contact authorfees@royalsociety.org).
The proof of your paper will be available for review using the Royal Society online proofing system and you will receive details of how to access this in the near future from our production office (openscience_proofs@royalsociety.org). We aim to maintain rapid times to publication after acceptance of your manuscript and we would ask you to please contact both the production office and editorial office if you are likely to be away from e-mail contact to minimise delays to publication. If you are going to be away, please nominate a co-author (if available) to manage the proofing process, and ensure they are copied into your email to the journal.
Please see the Royal Society Publishing guidance on how you may share your accepted author manuscript at https://royalsociety.org/journals/ethics-policies/media-embargo/. After publication, some additional ways to effectively promote your article can also be found here https://royalsociety.org/blog/2020/07/promoting-your-latest-paper-and-tracking-your-results/.
On behalf of the Editors of Royal Society Open Science, thank you for your support of the journal and we look forward to your continued contributions to Royal Society Open Science.
Subject Editor of Royal Society Open Science Journal
Thank you for your kind response about the manuscript (ID RSOS-210996) titled "Rainwater-driven Microbial Fuel Cells for Power Generation in the Remote Areas".
The referees' comments supported the novelty mentioned in the cover letter.
Accordingly, we can confidently claim that this manuscript will have a strong impact in the microbial fuel cells field, as it introduces a new class of microbial fuel cells based on rainwater.
In fact, the given comments were also helpful in strengthening the manuscript. We would like to inform you that we have modified the manuscript according to the given comments.
To make the response easier to follow, we have written the comments in bold type followed by the responses in normal type. Moreover, in the revised manuscript, the changes to the text are marked in blue.
We hope our responses cover all the comments. It will be our pleasure to respond to any further comments.
Here are my specific comments.
First, we strongly appreciate the very valuable comments given by the reviewer. We believe that these comments arose because of problems in the explanation of some parts of the original manuscript. Below, we provide a detailed response to every point. To avoid readers' confusion, section 3.5 in the revised manuscript has been updated.
P4, line8, 'It is known that rainwater contains several kinds of microorganisms collected from the atmosphere.' Who knows it?
Response: It is known that microorganisms can be found in almost any medium, including the atmosphere. Therefore, it is reasonable to claim that rainwater contains several kinds of microorganisms coming from the air. In the revised manuscript, this hypothesis has been paraphrased and supported by references 20 to 23.
Response:
The text has been updated.
Materials and Methods
P5, lines 12-13: here the authors state that rainwater 'was transferred to 1 L sterilised bottle for chemical and microbiological analyses.'; however, we did not find the chemical and microbiological results.
Response: In line with this valuable comment, section 3.7, which explains the microbiological analyses and their results (bacterial community analyses), has been updated.
Before use in the RMFCs, the as-collected rainwater samples were examined by microbiological analyses to investigate the microbial community in the rainwater used.
On the other hand, 'chemical analysis' was the wrong terminology; it should be 'bioelectrochemical', as this water was investigated for power generation in the proposed RMFC.
The text has been updated: 'chemical and microbiological analyses' has been changed to 'microbiological (to investigate the microbial community; section 3.7) and bioelectrochemical (usability as an anolyte in an MFC) analyses'.
Section '2.3. MFC Construction and operation': an MFC needs inoculation and a long time to start up; however, you did not address the methods, and there are no results.
Response:
As the reviewer realized in his general comment, the manuscript investigates the utilization of rainwater as an anolyte for an MFC to be exploited as a small power source in remote areas. In these remote areas, it is difficult to inoculate the MFC or use a fed-batch mode. Therefore, the experimental work in this study was planned around using the microbial community naturally present in the rainwater as the bio-catalyst in the proposed RMFC, without any external inoculation. Moreover, batch mode was selected to operate the proposed MFC. Overall, the manuscript opens an avenue for researchers to establish a new class of MFCs based on rainwater.
P6, lines 19-23: the nutrient level in the control is too high, 'with mixture of rainwater and Nutrient broth media (1:1 ratio v/v)'.
Response: This ratio was used to compare the maximum effect of nutrients (which should be high) with rainwater alone, as discussed in section 3.4, titled 'Reliability and evaluation'. It is mentioned in this section that 'the addition of nutrients to enrich the rainwater flora is possible in the lab experiments, but it is a difficult task in real applications. Therefore, the summer rainwater (alone, without additives) was invoked to evaluate the feasibility of using pristine rainwater as an anolyte in the proposed MFC.'
Response: We agree with the reviewer; this part was not clear in the original manuscript.
Typically, running the RMFC under anaerobic conditions was performed by first purging the anolyte with nitrogen gas bubbling for 5 min before use in the RMFC and then closing the anolyte feeding hole of the anolyte chamber in the assembled MFC. A photo in the supporting information (Fig. S1) shows the closed inlet opening. On the other hand, aerobic conditions were applied by utilizing the anolyte without purging and leaving the anolyte feeding opening open; this strategy was adopted from the literature (refs. 26 & 27). This explanation has been added in section '2.3. MFC Construction and operation' of the revised manuscript, page 7, line 5.
P10, section '3.2. Aerobic Conditions Effect': how were the anaerobic conditions maintained, and what is the difference from the aerobic condition?
Response: Running the RMFC under anaerobic conditions was performed by first purging the anolyte with nitrogen gas bubbling for 5 min before use in the RMFC and then closing the anolyte feeding hole of the anolyte chamber in the assembled MFC. A photo in the supporting information (Fig. S1) shows the closed inlet hole. On the other hand, aerobic conditions were applied by utilizing the anolyte without purging and leaving the anolyte feeding hole open; this strategy was adopted from the literature (refs. 26 & 27). This explanation has been added in the revised manuscript, page 7, line 5.
Table 2, Table S2 and Table S3 should also be corrected in format.
Response: All microbial names have been checked and corrected as advised.
Reviewer: 2
Comments to the Author(s) I think that the idea presented in the paper is very original and worth exploring further in the future.
The only aspect I think should be considered in the paper is an estimation of the energy produced by the fuel cells. The authors focus on monitoring the voltage produced in the fuel cell, but as the intended use of such a device is to provide power, voltage is not sufficient. The authors measure an I-V curve every now and then, but this is not sufficient, as the power delivered by a system with no forced convection will not be constant.
Response:
We appreciate the efforts of the reviewer in evaluating the manuscript. Indeed, the reviewer has successfully captured the main target of the manuscript.
The power output has been calculated for both summer and winter rainwaters after different days of operation, as shown in Fig. 3 of the manuscript. Moreover, two tables (1 & 2) summarizing the generated power were added to the revised manuscript.
Also, the lazy metabolism of bacteria may prevent the system from producing the current reported in the I-V curve in a continuous manner. Therefore, I think that a discussion and some estimations of the energy generated in the cells over such a long period of time are mandatory. I agree that sensors can operate with only a few mW, but for how long? How big should an FC be to continuously provide power to such sensors?
Response: This is a good comment from the reviewer. First, two tables summarizing the generated power were added to the revised manuscript. Furthermore, besides the proposed operation route described in section 3.2, this explanation has been added too: although it seems that the power generated from the proposed RMFCs is small, utilizing the rainwater without inoculation strongly recommends exploiting these cells as power sources in remote areas, as duplicating the power can be ...
Can the authors also describe how an operative system in the field would operate?
Would it collect only some rainwater at the beginning of the season, or would the system accept rain through the whole period?
Response: Indeed, this suggested point should be evaluated and examined in future work to investigate the possibility of constructing a prototype RMFC in the field with the operation mode suggested in section 3.2. Briefly, the proposed RMFC is suggested to work under a pseudo-continuous mode in remote areas. In other words, the future design of the RMFC will be based on an open cell configuration with the possibility of refilling the cell with fresh rainwater to remove dead microorganisms and activate the biofilm, as well as to provide nutrients. Moreover, the maintenance of anaerobic conditions in an MFC is not easy in real applications [33]. Consequently, the performance of the introduced cells was investigated under aerobic conditions. Overall, this study is a proof of concept for using rainwater as an anolyte for an MFC, although the operation mode is worth investigating in the future.
Subject Editor of Royal Society Open Science Journal
Thank you for accepting our manuscript (ID RSOS-210996) titled "Rainwater-driven Microbial Fuel Cells for Power Generation in the Remote Areas" for publication in your journal.
The given comments were helpful in strengthening the manuscript. We would like to inform you that we have modified the manuscript according to the comments given by Reviewer 1, and we thank both reviewers for their efforts.
We have written the comments in bold type followed by the responses in normal type. Moreover, in the revised manuscript, the changes to the text are marked in blue.
We hope our responses cover all the comments. | 2021-11-24T14:06:28.046Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "ba385df4094cc875ed0728c823a11eb8c93c7b89",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1098/rsos.210996",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c601a02315a587f46471f711038b1246110baca",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213771496 | pes2o/s2orc | v3-fos-license | Improving Mechanical Engineering Students' Achievement in Calculus through Problem-based Learning
This research aims to evaluate the effectiveness of the problem-based learning (PBL) approach in improving mechanical engineering students' learning achievement. The research sample consisted of 56 engineering students selected by cluster random sampling and grouped into an experimental class and a control class. The instrument was an essay (description) test used to measure students' learning achievement. A t-test and an N-Gain test were used to analyze the research data, while students' mastery learning was presented as a quantitative description. The results of this research conclude that PBL is effective in the learning process for mechanical engineering students, because the learning achievement of students who received PBL is better than that of students who received conventional learning. The majority of students achieved mastery learning, and there was an increase in student achievement in the high category.
Introduction
Calculus is one of the main courses in the curriculum in Indonesia, both for mathematics and for engineering. The facts show that many engineering students, mechanical engineering students being no exception, do not like calculus, because it is difficult and seems of little use in the field of engineering [1] [2]. Engineering students often have difficulty understanding concepts because of their lack of ability in deductive reasoning. This happens because students still face conventional approaches in the learning process [3] [4]. Conventional approaches are closely related to the use of lecture methods, where teachers are more active in providing guidance while students only receive material passively. Textbooks are used as the only source of information for classroom teaching, and they emphasize the computational side rather than understanding of concepts [3] [5] [6].
Although the conventional approach can be implemented, it is criticized for producing students who are passive, and it does not take into account the various needs and abilities of students [7]. Many students are unable to connect the concepts of calculus with their applications in real life. Thus, lecturers need to emphasize the importance of realizing interactions between students, between students and lecturers, and between students and learning resources in calculus learning in engineering [8]. Therefore, a learning approach is needed that provides more interaction and learning opportunities for students, so as to produce graduates who are relevant to the needs of industry. Problem-based learning (PBL) is an approach that uses real-life problems as a context for students to learn, as well as to acquire essential knowledge and concepts from lecture material [9][10][11]. PBL emphasizes that the learning process moves from the transfer of information to the process of constructing knowledge socially and individually. This is in line with constructivism, which holds that every student can understand teaching material through his or her own construction.
Furthermore, Barret [9] and Arends & Kilcher [12] explain the steps in implementing PBL, which include: 1) the lecturer poses problems (questions); 2) students hold discussions in small groups, clarifying the given problem cases, defining the problems, exchanging ideas, setting out what is needed to solve the problems, and determining what must be done to solve them; 3) students study the problem by finding sources in the library, databases, the internet, and observation; 4) students return to their groups to exchange ideas, learn from peers, and work together to solve the problems; 5) students present the solutions found; and 6) students, assisted by lecturers, evaluate all learning activities. PBL is able to improve students' learning skills, and this approach also helps students explore real problems that will be encountered after graduation [10] [11] [13]. Through PBL, students interact with and help each other in the learning process, so that differing levels of learning ability can be overcome. Students who do not understand the course material get help from students who do, and at the same time the students who understand the lecture material can further consolidate their mastery of knowledge and skills. Based on the description above, the research questions of this study are: 1) Are there differences between the learning achievements of mechanical engineering students under PBL and under conventional learning? 2) What is the learning mastery of students under PBL and under conventional learning? 3) What is the improvement in learning achievement of mechanical engineering students under PBL and under conventional learning?
Research Sample
This research was conducted from September 2018 to January 2019, in the middle of the 2018/2019 academic year. To ensure the objectivity of the researchers and avoid bias, the sample was selected using a cluster random sampling technique, so that every group of students gathered in a class had an equal opportunity to become the research sample. The research sample consisted of 56 mechanical engineering students at Universitas PGRI Semarang, Central Java Province, Indonesia. The sample was divided into two classes, namely the experimental class (PBL learning) and the control class (conventional learning), with each class consisting of 28 students. Before the research, a normality test with the Lilliefors method, a homogeneity test with the Bartlett test, and a t-test were conducted. The results showed that the samples from the conventional learning class and the PBL class came from normally distributed populations, the variance between the two groups was homogeneous, and the two samples had the same initial ability.
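For concreteness, these three prerequisite checks can be reproduced with standard statistical libraries. Below is a minimal sketch; the randomly generated score arrays are placeholders for the study's actual pre-test data, which are not published with the paper.

```python
# Prerequisite tests on the two classes' pre-test scores:
# Lilliefors normality, Bartlett homogeneity, and an independent t-test.
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(42)
pbl = rng.normal(loc=55, scale=8, size=28)    # placeholder scores, experimental class
conv = rng.normal(loc=54, scale=8, size=28)   # placeholder scores, control class

for name, x in (("PBL", pbl), ("conventional", conv)):
    stat, p = lilliefors(x, dist="norm")      # H0: scores are normally distributed
    print(f"{name}: Lilliefors stat = {stat:.3f}, p = {p:.3f}")

stat, p = stats.bartlett(pbl, conv)           # H0: the two variances are equal
print(f"Bartlett stat = {stat:.3f}, p = {p:.3f}")

t, p = stats.ttest_ind(pbl, conv, equal_var=True)  # H0: equal initial ability
print(f"t = {t:.3f}, p = {p:.3f}")
```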
Instrument and Procedures
At this stage the researchers determined the methods, teaching materials, learning strategies and learning media. The learning method used in each lesson plan was a cooperative learning method. The teaching materials used by the researchers were printed books, and the learning strategy chosen was active learning. Before being used, the instruments were validated by two validators, who concluded that they were suitable for use, with expert ratings of 84.4% (good) and 86.6% (very good).
The test questions were arranged with reference to the syllabus used in the mechanical engineering study program at Universitas PGRI Semarang. The test questions were first validated by two experts, then piloted to determine their reliability, level of difficulty and item discrimination. The test questions were used to measure students' learning achievement in the experimental class and the control class. The results of the analysis of the test instruments are presented in Table 1, which shows that three items were used as pre-test and post-test questions in this research.
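For reference, the difficulty and discrimination indices of classical item analysis can be computed as below. This is a sketch assuming the usual definitions for scored essay items (difficulty = mean score / maximum score; discrimination from upper and lower 27% groups); the paper does not specify which variants it used, and the example scores are hypothetical.

```python
import numpy as np

def item_analysis(item_scores, max_score, frac=0.27):
    """Classical item analysis for one essay item.

    Difficulty index P = mean score / max score (higher = easier).
    Discrimination D = (mean of top group - mean of bottom group) / max score,
    using the conventional upper/lower 27% groups.
    """
    s = np.sort(np.asarray(item_scores, dtype=float))
    k = max(1, int(round(frac * len(s))))          # size of each extreme group
    difficulty = s.mean() / max_score
    discrimination = (s[-k:].mean() - s[:k].mean()) / max_score
    return difficulty, discrimination

# Example with hypothetical scores for one 10-point item:
d, disc = item_analysis([2, 4, 5, 5, 6, 7, 7, 8, 9, 10], max_score=10)
print(f"difficulty = {d:.2f}, discrimination = {disc:.2f}")
```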
T-Test
The t-test was used to find out whether there is a difference in the mean achievement of students between the PBL class and the conventional class. The data tested were the post-test results, with the following hypotheses, where μ1 and μ2 are the mean achievements of the PBL class and the conventional class, respectively:
H0: μ1 ≤ μ2 (the mean achievement of the PBL class is not better than that of the conventional class);
H1: μ1 > μ2 (the mean achievement of the PBL class is better than that of the conventional class).
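These hypotheses correspond to a right-tailed two-sample t-test with pooled variance, the form applied to the post-test scores in section 3.2 below (df = 28 + 28 - 2 = 54). A minimal sketch, with the arrays standing in for the actual post-test data:

```python
import numpy as np
from scipy import stats

def pooled_one_tailed_t(x_pbl, x_conv, alpha=0.05):
    """Right-tailed two-sample t-test with pooled variance.
    H0: mu_PBL <= mu_conv   versus   H1: mu_PBL > mu_conv."""
    n1, n2 = len(x_pbl), len(x_conv)
    sp2 = ((n1 - 1) * np.var(x_pbl, ddof=1)
           + (n2 - 1) * np.var(x_conv, ddof=1)) / (n1 + n2 - 2)
    sp = np.sqrt(sp2)                                  # pooled standard deviation
    t_obs = (np.mean(x_pbl) - np.mean(x_conv)) / (sp * np.sqrt(1 / n1 + 1 / n2))
    df = n1 + n2 - 2                                   # 28 + 28 - 2 = 54 in this study
    t_crit = stats.t.ppf(1 - alpha, df)                # ~1.674 for df = 54 (the paper rounds to 1.70)
    return t_obs, t_crit, t_obs > t_crit               # True -> reject H0
```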
Students' Learning Mastery
Mastery learning is a minimum level of mastery of the substance of the calculus teaching material. Students are said to have achieved mastery learning if they obtain a minimum score of 70, and classical mastery learning is met if at least 85% of all students complete the learning material. The score of 70 is the minimum criterion of mastery learning (KKM) established by the mechanical engineering program at Universitas PGRI Semarang. The percentage of classical learning completeness is calculated using the formula: P = (number of students who completed (score ≥ 70) / number of students who took the test) × 100%
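As an arithmetic check, the completeness percentages reported later (85.714% and 53.571%) correspond to 24 of 28 and 15 of 28 students reaching the KKM; these counts are inferred from the percentages rather than stated explicitly in the paper. A minimal sketch:

```python
def classical_completeness(scores, kkm=70):
    """Percentage of students at or above the KKM threshold."""
    passed = sum(1 for s in scores if s >= kkm)
    return 100.0 * passed / len(scores)

# The reported percentages correspond to 24/28 and 15/28 students passing:
print(100 * 24 / 28)   # 85.714...  (PBL class)
print(100 * 15 / 28)   # 53.571...  (conventional class)
```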
Test Improvement of Students' Achievements
To calculate the improvement in mechanical engineering students' achievement in calculus before and after learning, the normalized gain formula [14] is used: N-Gain (g) = (post-test score - pre-test score) / (maximum ideal score - pre-test score). The result of the N-Gain calculation is then interpreted using Table 2.
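A small helper makes the gain computation and its interpretation explicit. The category thresholds below are the commonly used Hake bands and are an assumption here, since Table 2 is not reproduced in this text; they are consistent with the paper's classification of 0.70 as high and 0.40 as moderate.

```python
def n_gain(pre, post, max_score=100):
    """Hake's normalized gain: g = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    # Commonly used Hake bands (assumed; the paper's Table 2 defines its own):
    # g >= 0.7 high, 0.3 <= g < 0.7 medium, g < 0.3 low.
    return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

print(gain_category(0.70))   # PBL class -> high
print(gain_category(0.40))   # conventional class -> medium ("moderate" in the paper)
```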
Test of Prerequisites
Table 3 provides data showing that L_obs < L_table, with p = 5% and n = 28; thus it was concluded that the samples from the conventional learning class and the PBL class came from normally distributed populations.
As presented in Table 4, F_obs = 0.691 < F_table = 1.904 at p = 5%, so H0 is accepted; thus it was concluded that the variance of the two groups is homogeneous.
Test of Research Data
Based on the results of the statistical tests and the prerequisite tests, a test was carried out to determine the difference in learning achievement of mechanical engineering students under each learning approach. As presented in Table 5, s_p = 8.316 and t_obs = 5.83; with v = 28 + 28 - 2 = 54 degrees of freedom and p = 0.05, t(0.05,54) = 1.70 is obtained. Therefore H0 is rejected. It can be concluded that the learning achievement of mechanical engineering students who received PBL is better than that of students who received conventional learning. Table 6 shows the pre-test and post-test scores of the two research classes, where the pre-test scores were taken before the PBL intervention and the post-test scores after it. Regarding the achievement of learning completeness, in the PBL class the percentage of completeness was 85.714%, whereas in the conventional class completeness was only 53.571%, which means that almost half of those students had not yet reached the KKM. The N-Gain test was used to see the improvement in learning achievement under each learning approach, using the pre-test and post-test score data. As presented in Table 7, the improvement in the learning outcomes of students who received PBL is 0.70, while the improvement for students who received conventional learning is 0.40. This shows that PBL is better at improving learning achievement than conventional learning.
Discussion
The results of the research showed that the learning achievement of students who received PBL was better than that of students who received conventional learning. This shows that the PBL approach was able to foster interaction between students, between students and lecturers, and between students and learning resources in calculus learning, and that students were able to connect the concepts of calculus with their applications in real life. Through the given problems, students were challenged and motivated to solve them, so that they became increasingly active in learning [8] [15][16][17]. This result is supported by the fact that the majority of mechanical engineering students who received PBL (85.714%) achieved the KKM determined by the institution, namely 70, in contrast to the completeness of students with conventional learning, which was only 53.571%. Furthermore, this finding is reinforced by the achievement gain of 0.7 (high category) for students who received PBL, compared with a gain of only 0.4 (moderate category) under conventional learning. Thus, the application of PBL to calculus material took into account the needs and abilities of students in learning [7]. The PBL learning process as described by Barret [9] and Arends & Kilcher [12] was able to improve students' learning skills, and this approach also helps students explore real problems that will be encountered after graduation. This is in line with the view that PBL uses real-world problems as a context for students to learn and to acquire essential knowledge and concepts from lecture material [9][10][11] [13]. During the learning process, students actively and constructively build knowledge socially and individually. In conventional learning, by contrast, students passively receive the knowledge explained by the lecturer, with more emphasis on computation than on understanding the concepts [3] [5] [6].
Conclusions
This research shows that the PBL approach is effective for improving students' learning achievement compared to conventional learning. Lecturers should apply PBL often in lectures so that students become increasingly accustomed to linking teaching material to daily life and improve their learning achievement. Lecturers also need to pay attention to student collaboration in developing ideas for solving problems. Furthermore, the application of PBL should be developed in other subjects and other technical fields.
"year": 2019,
"sha1": "9cbca1da0b451f8e6624c466049f05d14edac750",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20191130/UJER21-19514169.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "752e6277895d3982f1ac1b0a6507297e4a15d353",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
204836276 | pes2o/s2orc | v3-fos-license | Development of effective anti-influenza drugs: congeners and conjugates – a review
Influenza is a long-standing health problem. For treatment of seasonal flu and possible pandemic infections, there is a need to develop new anti-influenza drugs that have good bioavailability against a broad spectrum of influenza viruses, including resistant strains. Relenza™ (zanamivir), Tamiflu™ (the phosphate salt of oseltamivir), Inavir™ (laninamivir octanoate) and Rapivab™ (peramivir) are four anti-influenza drugs targeting the viral neuraminidases (NAs). However, some problems with these drugs remain to be resolved, such as oral availability, drug resistance and the induced cytokine storm. Two possible strategies have been applied to tackle these problems: devising congeners and conjugates. In this review, congeners are related compounds having comparable chemical structures and biological functions, whereas a conjugate refers to a compound having two bioactive entities joined by a covalent bond. The rational design of NA inhibitors is based on the mechanism of the enzymatic hydrolysis of the sialic acid (Neu5Ac)-terminated glycoprotein. To improve the binding affinity and lipophilicity of the existing NA inhibitors, several methods are utilized, including conversion of the carboxylic acid to an ester prodrug, conversion of the guanidine to an acylguanidine, substitution of the carboxylic acid with a bioisostere, and modification of the glycerol side chain. Alternatively, conjugating NA inhibitors with another therapeutic entity provides synergistic anti-influenza activity; for example, to kill the existing viruses and suppress the cytokines caused by cross-species infection.
Background
Influenza is a serious and long-standing health problem
Influenza virus is one of the major human pathogens responsible for respiratory diseases, causing high morbidity and mortality through seasonal flu and global pandemics. Vaccines and antiviral drugs can be applied to prevent and treat influenza infection, respectively [1,2]. Unfortunately, the RNA genome of influenza virus constantly mutates, and the genomic segments may undergo reassortment to form new virus subtypes. Although vaccination is the most effective way to prevent influenza, vaccine formulations must be updated annually due to changes in the circulating influenza viruses [3], and the production of influenza vaccine takes several months. If the prediction of the incoming influenza strains is incorrect, the vaccines may give only limited protection.
Several influenza pandemics have occurred in the past, such as Spanish flu caused by H1N1 virus in 1918, Asian flu by H2N2 virus in 1957, Hong Kong flu by H3N2 virus in 1968, bird flu by H5N1 and H7N9 viruses in 2003 and 2013, respectively, as well as swine flu by H1N1 virus in 2009 ( Fig. 1) [4][5][6]. The influenza pandemics have claimed a large number of human lives and caused enormous economic loss in many countries. A universal vaccine for flu remains elusive.
Genome organization of influenza A virus
Influenza viruses are negative-sense RNA viruses of the Orthomyxoviridae family [7]. The viral genome is divided into multiple segments, and the viruses differ in host range and pathogenicity. There are three types of influenza viruses, A, B and C, of which influenza A viruses are the most virulent. Influenza A viruses infect a wide range of avian and mammalian hosts, whereas influenza B viruses infect almost exclusively humans. Much attention has been paid to influenza A viruses because they have brought about pandemic outbreaks. The structure of the influenza virus comprises three parts: core, envelope and matrix proteins. These proteins are hemagglutinin (HA), neuraminidase (NA), matrix protein 1 (M1), proton channel protein (M2), nucleoprotein (NP), RNA polymerase (PA, PB1 and PB2), non-structural protein 1 (NS1) and nuclear export protein (NEP, NS2). In addition, some proteins (e.g. PB1-F2, PB1-N40 and PA-X) are found only in particular strains [8,9]. Influenza A viruses are further classified by HA and NA subtypes [10]. There are 18 subtypes of HA and 11 subtypes of NA; for example, H1N1 and H3N2 are human influenza viruses, while H5N1 and H7N9 are avian influenza viruses. HA and NA constantly undergo point mutations (antigenic drift) in seasonal flu. Genetic reassortment (antigenic shift) between human and avian viruses may occur and cause pandemics [11,12].
Infection and propagation route of influenza virus
The life cycle of influenza virus is a complex biological process that can be divided into the following steps (Fig. 2): (i) virion attachment to the cell surface (receptor binding); (ii) internalization of the virus into the cell (endocytosis); (iii) viral ribonucleoprotein (vRNP) decapsidation, cytoplasmic transport and nuclear import; (iv) viral RNA transcription and replication; (v) nuclear export and protein synthesis; (vi) viral progeny assembly, budding and release from the cell membrane. All of these steps in the life cycle of influenza virus are essential for its virulence, replication and transmission. Developing a small-molecule inhibitor that blocks any of these steps is therefore a potentially efficient strategy to control and prevent influenza infection [13].
The influenza HA exists as a trimer and mediates attachment to the host cell via interactions with cell surface glycoproteins that contain a terminal sialic acid (N-acetylneuraminic acid, Neu5Ac, compound 1 in Fig. 3) linked to galactose through an α2,3 or α2,6 glycosidic bond [14]. Influenza viruses of avian origin recognize the α2,3-linked Neu5Ac receptor on the host cell, whereas human-derived viruses recognize the α2,6-linked Neu5Ac receptor. Viruses from swine recognize both α2,3 and α2,6 receptors (Fig. 3a). After endocytosis and fusion of the viral envelope membrane with the host endosomal membrane, the viral ribonucleoprotein (RNP) complexes enter the host cell and proceed with replication using the machinery of the host cell. The newly generated virus buds on the plasma membrane, and its NA breaks the connection between HA and the host cell, thereby releasing the progeny virus to infect surrounding cells. NA is a tetrameric transmembrane glycoprotein that catalyzes the hydrolytic reaction that cleaves the terminal Neu5Ac residue from the sialo-receptor on the surface of the host cell. Thus, HA and NA play central roles in influenza virus infection [15].
Development of anti-influenza drugs
Drugs are needed for treatment of patients infected by influenza viruses, especially during influenza pandemics when no effective vaccine is available. Even if broadly protective flu vaccines were available, anti-influenza drugs would still be needed, especially for treating infected patients.
The currently available anti-influenza drugs directly target the virus at various stages of the viral life cycle, while therapeutics targeting the host are under development [16,17].
Approved anti-influenza drugs
Figure 4 shows the approved anti-influenza drugs [18], including M2 ion channel blockers, neuraminidase inhibitors, and a nucleoprotein inhibitor [19]. However, the emergence of drug-resistant influenza viruses has posed problems in treatment [20]. Two M2 ion channel inhibitors (Fig. 4a), amantadine (2) [21] and rimantadine (3) [22], were widely used against influenza. However, the efficacy of M2 ion channel inhibitors is limited to influenza A because influenza B viruses lack the M2 protein. In addition, almost all influenza strains have developed high resistance against both amantadine and rimantadine [23]. The M2 ion channel inhibitors are now largely discontinued and have been replaced by NA inhibitors [24,25]. Baloxavir marboxil (Xofluza™, Shionogi/Hoffmann-La Roche, 2018) is used as a single-dose oral drug for treatment of influenza [19]. Baloxavir acid, the active form of baloxavir marboxil, is a cap-dependent endonuclease inhibitor that targets the viral PA polymerase and interferes with the transcription of viral mRNA [19]. Moreover, combination treatment with baloxavir marboxil and oseltamivir, a neuraminidase inhibitor, showed a synergistic effect against influenza virus infections in mouse experiments [26]. It may therefore be possible to develop a combination therapy using a suboptimal dose of baloxavir marboxil and an NA inhibitor.
Zanamivir (ZA) is more effective than oseltamivir, but the oral bioavailability of ZA in humans is poor (< 5%) [36], presumably because ZA is a hydrophilic, water-soluble compound that is readily eliminated through the renal system. ZA is usually delivered by intranasal or dry powder inhalation [29,30,37]. After inhalation of the dry powder, about 7-21% is deposited in the lower respiratory tract, and the rest is deposited in the oropharynx [36]. To prevent influenza, the recommended dose of ZA for adults is 20 mg/50 kg/day by inhalation twice daily (half dose at each inhalation). Adverse drug reactions of zanamivir are rarer than those of oseltamivir because zanamivir carries a glycerol side chain similar to the chemical structure of sialic acid, the natural NA substrate. Tamiflu, the phosphate salt of oseltamivir (OS), is a popular orally available anti-flu drug, which is well absorbed and rapidly cleaved by endogenous esterases in the gastrointestinal tract, liver and blood to give OS carboxylate (OC). To treat influenza, the recommended dose of OS for adults is 75 mg, twice a day, for 5 days. Tamiflu is less effective if used more than 48 h after influenza infection. The preventive dose is usually 75 mg, once a day, for at least 10 days or up to 6 weeks during a community outbreak. In comparison with ZA, oseltamivir has more adverse effects and tends to induce resistant viral strains. The cause of drug resistance is related to a change in binding mode that will be discussed in section 2.3.2.
Laninamivir octanoate is a long-acting anti-flu prodrug that is converted by endogenous esterases in the airway to laninamivir, the C7-methoxy analog of ZA and a potent NA inhibitor [38]. Currently, laninamivir octanoate is approved only in Japan for the treatment and prevention of influenza A and B infection. A single inhalation of the drug powder at a dose of 20 mg daily for 2 days is recommended for prophylaxis, and a 40 mg dose is recommended for treatment of individuals 10 years of age or older.
Peramivir (PE) has low oral bioavailability and is administered by a single intravenous drip infusion at a dose of 300 mg over 15 min for influenza treatment. PE is a highly effective inhibitor against influenza A and B viruses with a good safety profile. PE can be used to treat patients who cannot take oral drugs or who are insensitive to OS and ZA [39].
Why do we need new anti-influenza drugs?
Anti-influenza drugs are needed to treat seasonal flu and, in particular, unexpected global influenza infections. The current challenges are new influenza strains, cross-species transmission, and drug resistance. The pandemic influenza A/H1N1 virus of 2009 is currently circulating as a seasonal virus and is resistant to M2 inhibitors [40]. Since 2009, only NA inhibitors have been able to provide protection against the circulating human influenza A and B viruses. Small-molecule NA inhibitors are powerful tools to fight influenza viruses. Like other antiviral therapeutics, influenza NA inhibitors are no exception to the problem of drug-resistant mutations in the target enzyme. Since the drug-resistant H1N1 influenza virus emerged in 2007 and quickly became dominant in the 2008-2009 season, the emergence of OS resistance has been of particular concern [41,42]. The resistant phenotype is associated with an H275Y mutation in NA. Together with other permissive mutations, H275Y-mutant viruses do not display any fitness deficits, and thus remain in circulation [43,44]. A clinically relevant H5N1 avian influenza virus isolated from a patient even shows increasing resistance against OS. Fortunately, the H275Y mutant is still sensitive to ZA.
In this review, we highlight the latest advances in the structural modification of oseltamivir, zanamivir and peramivir for the development of effective anti-influenza drugs, focusing especially on congeners and conjugates of the existing NA inhibitors. Congeners are related compounds with comparable chemical structures and biological functions, whereas a conjugate refers to a compound having two bioactive entities joined by a covalent bond.
Rational design of neuraminidase inhibitor congeners
Mechanism and assay of neuraminidase catalyzed reaction
Influenza virus NA is an ideal drug target because NA is an essential enzyme located on the viral membrane, where it is easily accessed by drugs. Moreover, all subtypes of influenza NA have a similar conserved active site. On NA-catalyzed hydrolysis of the sialo-glycoprotein, the scaffold of Neu5Ac is flipped to a pseudo-boat conformation, so that cleavage of the glycosidic bond is facilitated by the anomeric effect, giving an oxocarbenium intermediate (Fig. 3b). Based on this reaction mechanism, a fluorometric assay using 2-(4-methylumbelliferyl)-α-D-N-acetylneuraminic acid (MUNANA) as NA substrate was designed (Fig. 5a). On hydrolysis of MUNANA, the anion of 4-methylumbelliferone is released and shows strong fluorescence at 460 nm (excitation at 365 nm). The fluorescence dims in the presence of an NA inhibitor that suppresses the enzymatic hydrolysis. A sialic acid 1,2-dioxetane derivative (NA-Star™, Applied Biosystems) can be used as a luminescence substrate to assess NA inhibitory activity when the test compound contains a fluorescent moiety that would interfere with the fluorescence assay (Fig. 5b).
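As a worked illustration of how such assay data are typically processed, the sketch below fits a four-parameter logistic curve to inhibition data from a MUNANA-type fluorescence readout to extract an IC50. This is a minimal example using invented data points and SciPy's curve_fit; none of the concentrations or the fitted value come from the studies cited here.

```python
# Minimal sketch: fitting a four-parameter logistic (Hill) curve to
# fluorescence inhibition data from a MUNANA-type NA assay.
# The data below are illustrative, not measurements from this review.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, hill_slope):
    """Four-parameter logistic: residual NA activity vs. inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill_slope)

# Inhibitor concentrations (nM) and residual NA activity (% of the
# no-inhibitor control, i.e. relative 460 nm fluorescence).
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])   # nM
activity = np.array([98, 95, 85, 62, 35, 15, 6, 3])   # %

p0 = [0, 100, 5, 1]   # initial guesses: bottom, top, IC50 (nM), Hill slope
params, _ = curve_fit(hill, conc, activity, p0=p0)
print(f"Fitted IC50 ≈ {params[2]:.1f} nM (Hill slope {params[3]:.2f})")
```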
Neuraminidase inhibitors and binding modes
2,3-Didehydro-2-deoxy-N-acetylneuraminic acid (Neu5Ac2en, DANA, 8) was the first reported influenza NA inhibitor [45]. The crystal structure of the NA-DANA complex (Fig. 6a) has been used as a template for the discovery of more potent NA inhibitors. ZA and OS are two NA inhibitors with an (oxa)cyclohexene ring that mimics the oxocarbenium intermediate (Fig. 3). ZA is a guanidino derivative of DANA designed by von Itzstein and coworkers [46,47]; the key interactions of ZA in the NA active site are depicted in Fig. 6b. The carboxylate group shows electrostatic interactions with the three arginine residues (Arg118, Arg292 and Arg371, forming a tri-arginine motif) in the S1 site of influenza NA [48,49], whereas the basic guanidino group exhibits strong electrostatic interactions with the acidic residues Glu119, Asp151 and Glu227 in the S2 site. In addition, the glycerol side chain forms hydrogen bonds with Glu276 in the S5 site.
Oseltamivir carboxylate (OC) contains an amine group at the C5-position to interact with the acidic residues (Glu119, Asp151 and Glu227). Instead of a glycerol side chain, OC has a 3-pentoxy group at the C3-position. Upon binding to OC, NA redirects the Glu276 residue toward Arg224 to form a larger hydrophobic pocket that accommodates the 3-pentoxy group [50,51]. However, the salt bridge between Glu276 and Arg224 collapses in the H275Y mutant upon substitution of the histidine with a bulkier tyrosine residue, thus altering the hydrophobic pocket of NA and decreasing its affinity for OC [51,52]. In contrast, ZA rarely induces resistant viruses because it is structurally similar to the natural substrate Neu5Ac.
Conversion of carboxylic acid to ester prodrug for better bioavailability
Lipophilicity is an important factor in the pharmacokinetic behavior of drugs. The partition coefficient (log P) of a compound between octanol and water can be taken as a measure of lipophilicity. Compounds with log P values between −1 and 5 are likely to be developable as orally available drugs [53]. In lieu of log P, the distribution coefficient (log D) between octanol and PBS buffer is used to predict the lipophilicity of ionic compounds.
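Lipophilicity estimates of this kind can be sketched computationally. The example below, assuming RDKit is available, estimates log P for oseltamivir (SMILES written without stereochemistry for brevity) with the Wildman-Crippen method and applies the −1 to 5 oral-drug window quoted above; the computed value is a model estimate, not an experimental log P.

```python
# Sketch: estimating lipophilicity (log P) with RDKit's Crippen method and
# applying the -1 to 5 oral-drug window mentioned in the text. The SMILES
# for oseltamivir is drawn without stereochemistry for illustration.
from rdkit import Chem
from rdkit.Chem import Crippen

smiles = "CCOC(=O)C1=CC(OC(CC)CC)C(NC(C)=O)C(N)C1"  # oseltamivir (no stereo)
mol = Chem.MolFromSmiles(smiles)

logp = Crippen.MolLogP(mol)                 # Wildman-Crippen log P estimate
orally_available = -1.0 <= logp <= 5.0      # heuristic window from the text
print(f"Estimated log P = {logp:.2f}; within oral window: {orally_available}")
```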
OC has low lipophilicity and oral bioavailability (< 5%). To solve this problem, the ethyl ester OS was prepared as a prodrug with improved oral bioavailability (35%) [54]. The phosphate salt of OS was formulated with appropriate filler materials to make the Tamiflu capsule with good bioavailability (79%).
A similar strategy has been applied to modify the ZA molecule to develop better anti-influenza drugs with improved pharmacokinetic properties and oral bioavailability. Li and coworkers have shown that the (heptadecyloxy)ethyl ester of ZA is an effective drug in mice by oral or intraperitoneal administration [55]. Similar to oseltamivir, the ZA ester can undergo enzymatic hydrolysis to release ZA as the active anti-influenza agent. Compared to the rapid elimination of ZA from the body, the ZA ester appears to be sustained upon oral administration. However, no pharmacokinetic studies were performed to determine its bioavailability. Amidon and coworkers have synthesized several acyloxy ester prodrugs of zanamivir conjugated with amino acids [56]. For example, the [(L-valyl)oxy]ethyl ester of ZA improved cell permeability by targeting hPepT1, an oligopeptide transporter present in the gastrointestinal tract with broad substrate specificity. This ZA ester is a carrier-linked prodrug with a bioreversible covalent bond, and may be developed as an oral drug. Besides the carboxylate group, the highly hydrophilic guanidinium group also accounts for the low oral bioavailability of ZA and guanidino-oseltamivir carboxylate (GOC). In one approach to improve bioavailability, Amidon and coworkers [57] prepared the ZA heptyl ester and used 1-hydroxy-2-naphthoic acid (HNAP) as a counterion of the guanidinium group (Fig. 7a) [58,59]. This intact ion-pair prodrug (9) showed enhanced permeability across Caco-2 cells and rat jejunum membranes. Moreover, Fang and coworkers have synthesized an intramolecular ion-pair ZA ester prodrug 10 by annexing an HNAP moiety [60]. Compound 10 has improved lipophilicity (log D = 0.75 at pH 7.4) owing to the aromatic HNAP moiety and the formation of a guanidinium-phenoxide ion-pair. The ZA-HNAP prodrug resumes high anti-influenza activity (EC50 = 48 nM in cell-based anti-influenza assays) upon enzymatic hydrolysis, which releases zanamivir along with nontoxic HNAP.
Conversion of guanidine to acylguanidine for better bioavailability
Though the guanidinium moiety in ZA and GOC plays an important role in NA binding, its polar cationic nature is detrimental to oral administration. Modification of the guanidine group to an acylguanidine by attachment of a lipophilic acyl substituent improves bioavailability (Fig. 7b) [61]. Moreover, appropriate acyl substituents at the external N-position of the guanidine group in ZA are proposed to attain extra binding in the 150-cavity [47,62] and 430-cavity [63] of the H1N1 virus [61,64,65]. Some GOC acylguanidines also possess higher activities than OC against wild-type H1N1 and OS-resistant H275Y viruses [66]. The ZA and GOC acylguanidine derivatives 11 and 12 are stable in acidic media, but slowly hydrolyze in neutral phosphate buffer, and the hydrolytic degradation is accelerated under basic conditions [61]. Hydrolysis of the ZA and GOC acylguanidines in animal plasma under physiological conditions liberates the parental anti-influenza agents ZA and GOC. Thus, influenza-infected mice receiving the octanoylguanidine derivative 11 (or 12) by intranasal instillation have better or equal survival rates compared with those treated with parental ZA or GOC [61].
Substitution of carboxylic acid with bioisosteres
Bioisosteres are surrogates that mimic the structure of an active compound while keeping similar chemical, physical, electronic, conformational and biological properties [67,68]. There are two types of bioisosteres, mimicking either the enzyme substrate or the reaction transition state. For example, hydroxamic acid, sulfinic acid and boronic acid can mimic the planar structure of carboxylic acid, whereas phosphonic acid, sulfonic acid, sulfonamide, and trifluoroborate can mimic the transition state in the enzymatic hydrolysis of the peptide bond.
Sialic acid (Neu5Ac, 1), the product of NA-catalyzed hydrolysis, exists as a mixture of two anomers. The affinity of Neu5Ac for influenza NA is weak (Ki = 5 mM for the A/H2N2 virus) [69], presumably due to the low proportion (~5%) of the appropriate anomer in solution [70]. By substituting the C2-OH group of Neu5Ac with a hydrogen atom, the configuration at the C-1 position is fixed [71]. Compounds 13a and 13b (Fig. 8) have the carboxylate group axially and equatorially located on the chair conformation of the pyranose ring, respectively. The inhibition constant of 13b against V. cholerae NA is 2.6 mM, but 13a is inactive.
Considering that phosphonic acid and sulfonic acid are more acidic than carboxylic acid, the phosphonate and sulfonate congeners are predicted to have higher affinity toward NA through enhanced binding with the tri-arginine cluster. The phosphonate congener 14 (equatorial PO3H2) was found to inhibit the NAs of influenza A/N2 virus and V. cholerae with IC50 values of 0.2 and 0.5 mM, better than the natural carboxylate substrate Neu5Ac [72]. The 2-deoxy phosphonate congeners 15a (axial PO3H) and 15b (equatorial PO3H) were synthesized [71] and shown to bind V. cholerae NA with Ki values of 0.23 and 0.055 mM, respectively. In a related study [73], 15b shows inhibitory activity against the H2N2 virus with Ki and IC50 values of 103 and 368 μM, respectively. However, the binding affinity of the epimer 15a is too low to be detected.
(Fig. 7 caption: Tackling the hydrophilic guanidinium group in zanamivir and guanidino-oseltamivir carboxylate. a Using 1-hydroxy-2-naphthoic acid to form an ion-pair. b Forming an acylguanidine as a prodrug.)
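Assuming simple competitive inhibition, the Ki and IC50 values quoted above for compound 15b can be related through the Cheng-Prusoff equation, IC50 = Ki(1 + [S]/Km). The sketch below back-calculates the implied substrate-to-Km ratio; this ratio is our arithmetic, not a number reported in the cited work.

```python
# Sketch: relating Ki and IC50 via the Cheng-Prusoff equation, assuming
# simple competitive inhibition: IC50 = Ki * (1 + [S]/Km).
# The Ki = 103 uM and IC50 = 368 uM values for compound 15b are from the
# text; the implied [S]/Km ratio is a back-calculation, not a reported one.
def ic50_from_ki(ki, s_over_km):
    """Cheng-Prusoff prediction of IC50 from Ki for a competitive inhibitor."""
    return ki * (1.0 + s_over_km)

ki_um, ic50_um = 103.0, 368.0
s_over_km = ic50_um / ki_um - 1.0
print(f"Implied [S]/Km ≈ {s_over_km:.2f}")
print(f"Check: IC50 ≈ {ic50_from_ki(ki_um, s_over_km):.0f} uM")
```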
The sulfonate derivative 16b (equatorial SO3H) is a more potent inhibitor (Ki = 2.47 μM against H2N2 virus NA) than the epimer 16a (axial SO3H) and the phosphonate congener 15b (equatorial PO3H), by 14- and 42-fold, respectively. Sulfonate 16b also inhibits the NAs of H5N1 and the drug-resistant H275Y mutant at a similar level, with Ki values of 1.62 and 2.07 μM. In another report [74], the sulfonate derivatives 16a and 16b were evaluated for their inhibitory activity against H3N2 (A/Perth/16/2009) virus by fluorometric enzymatic assay. The experiments indicate that 16b is a much stronger NA inhibitor than the axially substituted sulfonate 16a (IC50 > 1000 μM). A cell-based assay confirms that 16b effectively blocks H3N2 virus infection of MDCK cells in vitro (IC50 = 0.7 μM).
Furthermore, the C4-OH group in 16b was replaced by a basic guanidino group to give the derivative 16c, which engages in strong binding with the negatively charged residues (Glu119 and Asp151) in the active site of influenza NA [75]. Thus, the inhibitory activity of 16c (IC50 = 19.9 nM) against H3N2 virus NA is greatly enhanced. The C4-guanidino sulfonate 16c is a very potent inhibitor against influenza NAs of various strains, including H1N1, pandemic California/2009 H1N1 and H5N1-H274Y viruses, with potencies of 7.9 to 65.2 nM. Importantly, 16c at 1 mM is still inactive against human sialidase Neu2. As 16c inhibits in vitro infection of MDCK-II cells by influenza H3N2 virus with a high potency of 5 nM, it provides a good opportunity for lead optimization.
Zanamivir phosphonate congener
Phosphonate groups are commonly used as bioisosteres of carboxylate in drug design [76]. Compared with carboxylic acid (pKa = 4.74), phosphonic acid (pKa1 = 2.38) has higher acidity and stronger electrostatic interactions with the guanidinium group. In a helical protein, formation of the phosphonate-guanidinium complex (ΔG0 = −2.38 kJ/mol) is more favorable than that of the carboxylate-guanidinium ion-pair (ΔG0 = +2.51 kJ/mol) [77,78]. A phosphonate ion with tetrahedral structure is also topologically complementary for binding with Arg118, Arg292 and Arg371 in influenza NAs. Molecular docking experiments [79] show that zanaphosphor (ZP, compound 21 in Fig. 9), the phosphonate bioisostere of ZA, has higher affinity for NA. Compared with the binding mode of ZA in NA, ZP attains two more hydrogen bonds with the tri-arginine motif while the other functional groups (C4-guanidinium, C5-acetamide and the glycerol side chain) maintain comparable interactions. ZP possesses high affinity for influenza NAs, with IC50 values in the nanomolar range. Though the phosphonate analogs of sialic acid (e.g. 14 and 15b) are weak NA inhibitors with IC50 values in the sub-millimolar range [72,80], ZP, which mimics the oxocarbenium-like transition-state geometry of the enzymatic hydrolysis, is a very effective NA inhibitor. ZP also showed higher activity than ZA in protecting canine MDCK cells challenged with various influenza viruses, including the resistant H275Y strain [79]. The first practical synthesis of ZP was achieved by Fang and coworkers using sialic acid as a viable starting material (Fig. 9) [79]. Sialic acid is first protected as a peracetate derivative, which undergoes concomitant decarboxylation at 100 °C to give the acetyl glycoside 17. The anomeric acetate is replaced with a phosphonate group using diethyl (trimethylsilyl)phosphite as the nucleophile in the presence of trimethylsilyl trifluoromethanesulfonate (TMSOTf) as a promoter. After photochemical bromination, the intermediate is treated with a base to eliminate HBr and construct the oxacyclohexene core structure. Following a previously reported procedure [81], the guanidine substituent is introduced at the C-4 position to furnish ZP. Another synthetic route to ZP has also been explored using inexpensive D-glucono-δ-lactone as the starting material, proceeding through an asymmetric aza-Henry reaction as a key step [82].
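The quoted ion-pair free energies translate into a roughly seven-fold difference in association constants via K = exp(−ΔG0/RT), as the short calculation below illustrates; the fold-difference is our arithmetic from the quoted ΔG0 values, not a figure from the cited studies.

```python
# Sketch: converting the quoted ion-pair free energies into relative
# association constants via K = exp(-dG / RT) at 298 K.
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # temperature, K

dG_phosphonate = -2.38   # kJ/mol, phosphonate-guanidinium complex
dG_carboxylate = +2.51   # kJ/mol, carboxylate-guanidinium ion-pair

K_phos = math.exp(-dG_phosphonate / (R * T))
K_carb = math.exp(-dG_carboxylate / (R * T))
print(f"K(phosphonate)/K(carboxylate) ≈ {K_phos / K_carb:.1f}")  # ~7-fold
```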
Oseltamivir phosphonate congener
In a related study, tamiphosphor (TP, 22) was synthesized as the phosphonate congener of oseltamivir carboxylate by several methods (Fig. 10). The first synthesis [83] begins with the introduction of a (diphosphoryl)methyl substituent at the C-5 position of D-xylose, and a subsequent intramolecular Horner−Wadsworth−Emmons (HWE) reaction constructs the cyclohexene-phosphonate core structure. An intramolecular HWE reaction was also applied to build the scaffold of the polysubstituted cyclohexene ring in another TP synthesis starting from N-acetyl-D-glucosamine (D-GlcNAc) [84]. D-GlcNAc contains a preset acetamido group that establishes the required absolute configuration in the TP synthesis. In a three-component one-pot approach [85], a chiral amine-promoted Michael reaction of 2-ethylbutanal with a nitroenamide, a second Michael addition to 1,1-diphosphorylethene and an intramolecular HWE reaction are performed sequentially in one flask to construct the cyclohexene-phosphonate core structure. TP is then obtained by reduction of the nitro group and hydrolysis of the phosphonate ester. In another synthetic strategy for TP, palladium-catalyzed phosphonylation of a 1-halocyclohexene is effectively applied as the key reaction [86-88].
In addition to TP bearing a C5-amino substituent, the TPG analog (24) bearing a C5-guanidino group was also synthesized for evaluation of its NA inhibitory activity. It is noteworthy that treatment of the phosphonate diethyl esters with bromotrimethylsilane (TMSBr) gives the phosphonic acids TP and TPG, whereas treatment with sodium ethoxide gives the corresponding phosphonate monoesters 23 and 25.
TP, which contains a phosphonate group, is a potent inhibitor of human and avian influenza viruses, including A/H1N1 (wild-type and H275Y mutant), A/H5N1, A/H3N2 and type B viruses. TPG is an even stronger NA inhibitor because the guanidine group is more basic and forms stronger interactions with Glu119, Asp151 and Glu227 [18-20, 89].
Though TP (log D = −1.04) carries two negative charges on the phosphonate group, it is more lipophilic than OC (log D = −1.69), which carries a single negative charge. The improved lipophilicity of TP is attributable to the higher acidity of the phosphonic acid, which enhances the intramolecular zwitterionic structure or intermolecular ion-pair structures [57,60,90,91]. The guanidino compounds are also more lipophilic than the corresponding amino compounds because guanidine is more basic and thus more prone to form zwitterionic/ion-pair structures with the phosphonate group.
Though oseltamivir, as a carboxylate ester, is inactive against NA, the phosphonate monoester 23 exhibits high NA inhibitory activity because it retains a negative charge on the monoalkyl phosphonate moiety and thus exerts adequate electrostatic interactions with the tri-arginine motif. The phosphonate diester is inactive against NA, while both phosphonate monoesters 23 and 25 show anti-influenza activity comparable to that of the phosphonic acids 22 and 24. This result may be attributed to the better lipophilicity of the monoesters, which enhances intracellular uptake. The alkyl substituent of the phosphonate monoester can be tuned to improve pharmacokinetic properties, including bioavailability. For example, TP and TP monoethyl ester have 7 and 12% oral bioavailability in mice, respectively. It is worth noting that TPG and its monoester 25 also possess significant inhibitory activity against the H275Y oseltamivir-resistant strain, with IC50 values of 0.4 and 25 nM, respectively. In another study [92], TP monoester molecules were immobilized on gold nanoparticles, which bind strongly and selectively to all seasonal and pandemic influenza viruses through their NAs.
Mouse experiments were conducted by oral administration of TP or its derivatives after challenge with a lethal dose (10 × LD50) of influenza virus [93]. When administered at doses of 1 mg/kg/day or higher, TP, TPG and their phosphonate monoesters (22-25) all provide significant protection of mice infected with influenza viruses. Despite the low bioavailability (≤ 12%), all four phosphonates maintain plasma concentrations in mice above the concentration required to inhibit influenza viruses. Metabolism studies indicate that almost none of the phosphonate monoesters 23 and 25 were transformed into their parental phosphonic acids 22 and 24. Therefore, these phosphonate monoesters are active drugs per se, unlike the OS prodrug, which releases the active carboxylic acid by endogenous hydrolysis.
Although PP, the phosphonate congener of peramivir, is a good NA inhibitor (IC50 = 5.2 nM against A/WSN/33 H1N1), its inhibitory activity is unexpectedly 74 times lower than that of PE, contrary to a previous computational study [95] that predicted PP to be a stronger binder of N1 neuraminidase. Due to the flexible cyclopentane core structure, the phosphonate congener (PP) can adopt a different conformation from that of the carboxylate compound (PE). Therefore, the NA inhibitory activity of the PP series is less predictable. The phosphonate compounds 33 and 34 show reduced binding affinity to the H275Y mutant, with IC50 values of 86 and 187 nM, respectively, presumably because fewer hydrophobic interactions are available to the 3-pentyl group in the active site of the mutant NA [96,97]. However, the phosphonate monoalkyl ester 34 exhibits anti-influenza activity superior to that of the parental phosphonic acid 33 in cell-based assays. As inferred from the calculated partition and distribution coefficients, the phosphonate monoalkyl ester has increased lipophilicity that enhances intracellular uptake. Since the crystal structure of the PE-NA complex (PDB code: 1L7F) [96] reveals that the C2-OH group of peramivir has no direct interaction with influenza NA, a dehydration analog of PP was prepared for bioactivity evaluation. By forming a more rigid cyclopentene ring, the PP dehydration analog regains extensive electrostatic interactions with the tri-arginine cluster in NA, thus exhibiting high NA inhibitory activity (IC50 = 0.3 nM) against influenza H1N1 virus.
Oseltamivir boronate, trifluoroborate, sulfinate, sulfonate and sulfone congeners
Compared to carboxylic acid (pKa ≈ 4.5), boronic acid is a weaker acid (pKa ≈ 10.0), while sulfinic acid (pKa ≈ 2.0) and sulfonic acid (pKa ≈ −0.5) are stronger acids. Figure 12 outlines the synthetic methods for the oseltamivir boronate, trifluoroborate, sulfinate, sulfonate and sulfone congeners [98]. Oseltamivir carboxylic acid (OC) is converted to a Barton ester, which undergoes photolysis in the presence of CF3CH2I to give the iodocyclohexene derivative 35. This pivotal intermediate is subjected to palladium-catalyzed coupling reactions with appropriate diboron and thiol reagents to afford the OS boronate (36a), trifluoroborate (37a), sulfinate (39a), sulfonate (40a) and sulfone (42a) congeners. The corresponding guanidino analogs (GOC congeners) were also synthesized. The GOC congeners (b series) consistently display better NA inhibition and anti-influenza activity than the corresponding OC congeners (a series). The GOC sulfonate congener (40b) is the most potent anti-influenza agent in this series, showing an EC50 of 2.2 nM against the wild-type H1N1 virus. Since sulfonic acid is a stronger acid than carboxylic acid, it can exert stronger electrostatic interactions than GOC on the three arginine residues (R118, R292 and R371) in the NA active site. The sulfonate compound 40b may exist in a zwitterionic structure and form the sulfonate−guanidinium ion-pair more effectively than GOC, attaining higher lipophilicity as predicted by the calculated distribution coefficient (cLog D) values. Interestingly, the congeners with trifluoroborate, sulfone or sulfonate ester groups still exhibit significant NA inhibitory activity, indicating that the polarized B−F and S→O bonds provide sufficient interactions with the tri-arginine motif.
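The pKa comparisons above can be made concrete with the Henderson-Hasselbalch relation: the sketch below computes the fraction of each acid group deprotonated at pH 7.4, treating boronic acid as a simple Brønsted acid for illustration (its actual ionization proceeds through hydroxide addition).

```python
# Sketch: fraction of each acid group deprotonated at physiological pH 7.4,
# from the Henderson-Hasselbalch relation f = 1 / (1 + 10**(pKa - pH)).
# pKa values are the approximate ones quoted in the text; treating boronic
# acid as a simple Bronsted acid is a simplification for illustration.
PKAS = {"sulfonic": -0.5, "sulfinic": 2.0, "carboxylic": 4.5, "boronic": 10.0}

def fraction_ionized(pka, ph=7.4):
    """Fraction of an acid present as its conjugate base at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for name, pka in PKAS.items():
    print(f"{name:<10} (pKa {pka:>5}): {fraction_ionized(pka):.4f} ionized at pH 7.4")
```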
Modification of zanamivir at the glycerol side chain
Replacing the glycerol side chain in ZA with tertiary amides (e.g. 43b in Fig. 13) still retains good NA inhibitory activity, with IC50 values similar to that of ZA [99,100]. Compared to the function of the 3-pentoxy group in oseltamivir, the dialkylamide moiety in 43b may provide similar hydrophobic interactions in the S5 site of NA. To support this hypothesis, crystallographic and molecular dynamics studies of compound 43a with influenza NA were carried out, showing that the Glu276 and Arg224 residues form a salt bridge to produce a lipophilic pocket, and that an extended lipophilic cleft is formed between Ile222 and Ala246 near the S4 site. The N-isopropyl and phenylethyl substituents of 43a can properly reside in the lipophilic pocket and cleft, respectively [101,102].
(Fig. 12 caption: Synthesis of oseltamivir boronates (36a/36b), trifluoroborates (37a/37b), sulfinates (39a/39b), sulfonates (40a/40b) and sulfones (42a/42b) from oseltamivir carboxylic acid (OC).)
The three-dimensional structure of the ZA-NA complex [103] shows that the C7-OH group is exposed to water without direct interaction with NA. Therefore, the C7-OH is an ideal site for structural modification. Laninamivir (compound 44) derives from ZA by changing the C7-OH group to a methoxy group without reduction of NA inhibitory activity. Laninamivir was developed into Inavir (6) as a long-acting drug by further converting the C9-OH group to an octanoate ester. The lipophilic octanoyl group is proposed to make compound 6 more cell-permeable. Compound 6 is rapidly hydrolyzed by esterases to give laninamivir, which is hydrophilic and may be retained in the endoplasmic reticulum and Golgi. When the influenza NA matures in the endoplasmic reticulum and Golgi apparatus, laninamivir can firmly bind the NA, thereby preventing the formation of progeny virus particles [104]. The half-life of prodrug 6 is about 2 h in humans, and the active ingredient 44 appears at 4 h after administration by inhalation. Compound 44 is slowly eliminated over 144 h [38,105,106]. Inavir requires only one inhalation of a 40 mg dose to last 5 days for influenza treatment, compared with Relenza and Tamiflu, which require twice-daily administration at 10 mg and 75 mg doses, respectively. Moreover, ZA analogs with the C7-OH derivatized as carbamates (e.g. compound 45) do not show significant reduction in anti-influenza activity [107].
Conjugating neuraminidase inhibitors for enhanced anti-influenza activity
Using NA inhibitors is a good therapy for preventing the spread of progeny viral particles. However, several related problems remain in search of solutions. For example, how can the existing viruses in severely infected patients be killed? Is it possible to develop anti-influenza drugs that also suppress the complication of inflammation, especially the cytokine storm caused by cross-species infection? To address these issues, one may consider conjugating NA inhibitors with another therapeutic entity to provide better anti-influenza activity.
Multi-component drug cocktails may have complex pharmacokinetics and unpredictable drug−drug interactions [108], whereas conjugate inhibitors are designed to incorporate multiple therapeutic entities into a single drug via covalent bonds [109,110].
Conjugating zanamivir with porphyrin to kill influenza viruses
Porphyrins and the related compounds have been used as photosensitizers to activate molecular oxygen [111-113]. Activated singlet oxygen (1O2) is a highly reactive oxidant that can be utilized to kill adjacent cells in photodynamic therapy (PDT), which has been successfully applied to cancer treatment, and occasionally for treatments of bacterial and viral infections [114-116].
Because ZA has strong affinity for influenza NA, it is an excellent payload to deliver porphyrins to influenza virus in a specific way. Using the C7-OH group as the connection hinge, four ZA molecules were linked to a porphyrin core structure to furnish the dual-functional ZA conjugate 46 (Fig. 14) [117]. The ZA-porphyrin conjugate inhibits human and avian influenza NAs with IC50 values in the nanomolar range. By plaque yield reduction assay, conjugate 46 shows 100-fold higher potency than monomeric ZA in the inactivation of influenza viruses. Influenza H1N1 viruses are reduced to less than 5% on treatment with conjugate 46 at 200 nM for 1 h under illumination with room light, whereas 60% of the viral titer remains on treatment with ZA alone or a combination of ZA and porphyrin at micromolar concentrations. The viral inactivation by 46 is associated with the high local concentration of the ZA-porphyrin conjugate brought to the viral surface by the high affinity of the ZA moiety for NA. Under irradiation with room light, the porphyrin component of conjugate 46 generates reactive singlet oxygen to kill the attached viruses without damaging other healthy host cells. In contrast, a similar concentration of free porphyrin, alone or in combination with zanamivir, cannot accumulate to a high local concentration on the viral surface, and thus the destruction of influenza virus by light irradiation is ineffective.
(Fig. 13 caption: Modification of zanamivir at the glycerol side chain. The C7-OH group points away from the NA active site according to the crystallographic analysis of the ZA-NA complex [103].)
In another respect, the tetrameric ZA conjugate 46 can also take advantage of the multivalent effect [118-121] to enhance binding with influenza NA, which exists as a homotetramer on the surface of the virus, thus inducing aggregation of viral particles and physically reducing infectivity. Di-, tri-, tetra- and polyvalent ZA conjugates have also been designed to increase the binding affinity with NA [122-128]. Klibanov and coworkers [129] implanted ZA and sialic acid molecules on a poly(isobutylene-alt-maleic anhydride) backbone for concurrent binding with viral NAs and HAs, thus greatly increasing the anti-influenza activity by more than 1000-fold.
Conjugating zanamivir with caffeic acid to alleviate inflammation
Influenza infection may induce uncontrolled cytokine storms, as occurred in the 2003 avian flu outbreak, in which cross-species transmission of the H5N1 avian virus to humans claimed a large number of lives. Since extension from the C7-OH does not interfere with NA binding, the dual-functional ZA-caffeate conjugates 47a and 47b (Fig. 15) were prepared by connecting caffeic acid to ZA with an ester or amide linkage [130]. Cell-based assays indicate that conjugates 47a and 47b effectively inactivate H1N1 and H5N1 influenza viruses with EC50 values in the nanomolar range. These conjugates also significantly inhibit proinflammatory cytokines, such as interleukin-6 (IL-6) and interferon-gamma (IFN-γ), compared with ZA alone or ZA in the presence of caffeic acid (CA).
Treatment with the ZA conjugates 47a and 47b improves the survival of mice infected with influenza virus. For example, treatment with conjugates 47a and 47b at 1.2 μmol/kg/day, i.e. the human equivalent dose, provides 100% protection of mice from a lethal-dose challenge of influenza H1N1 or H5N1 viruses over the 14-day experimental period. Even at a low dose of 0.12 μmol/kg/day, conjugates 47a and 47b still significantly protect H1N1 virus-infected mice, showing greater than 50% survival on day 14. ZA alone or an anti-inflammatory agent alone cannot reach such high efficacy in influenza therapy [131,132]. Although the combination of an NA inhibitor with anti-inflammatory agents is effective in treating influenza-infected mice [133,134], such drug development may encounter problems with complex pharmacokinetic behavior. On the other hand, conjugates 47a and 47b bear the ZA component for specific binding to influenza virus, thus delivering the anti-inflammatory component for in situ action to suppress the virus-induced cytokines. By incorporating a caffeate component, conjugates 47a and 47b also have higher lipophilicity, which improves their pharmacokinetic properties.
Conjugating peramivir with caffeic acid as an enhanced oral anti-influenza drug
The C2-OH group, which does not directly interact with the NA protein [135,136], is used for the conjugation of peramivir with caffeic acid. The PE-caffeate conjugates 48a and 48b (Fig. 15) are nanomolar inhibitors of wild-type and mutant H1N1 viruses [137]. Molecular modeling of conjugate 48b reveals that the caffeate moiety is preferably located in the 295-cavity of the H275Y neuraminidase, thus providing additional interactions to compensate for the peramivir moiety, whose binding affinity to the H275Y mutant is reduced by the Glu276 dislocation. By incorporating a caffeate moiety, conjugates 48a and 48b also have higher lipophilicity than PE. Thus, conjugates 48a and 48b are more effective in protecting MDCK cells from infection by the H275Y virus, with low EC50 values (~17 nM). Administration of conjugate 48a or 48b by oral gavage is effective in treating mice infected with a lethal dose of wild-type or H275Y influenza virus. From the viewpoint of drug metabolism, since the ester bond in conjugate 48a is easily hydrolyzed in plasma, the conjugate 48b bearing a robust amide bond may be a better candidate for development into an oral drug that is also active against mutant viruses.
(Fig. 15 caption: Enhanced anti-influenza activity of ZA−caffeate and PE−caffeate conjugates by synergistic inhibition of neuraminidase and suppression of virus-induced cytokines.)
Conclusion
In this review, anti-influenza drugs are discussed with an emphasis on those targeting the NA glycoprotein. In order to generate more potent NA inhibitors and counter the surge of resistance caused by natural mutations, the structures of on-market anti-influenza drugs are used as templates for the design of new NA inhibitors. In particular, we highlight modifications of these anti-influenza drugs that replace the carboxylate group of oseltamivir, zanamivir and peramivir with bioisosteres (e.g. phosphonate and sulfonate) to attain higher binding strength with influenza NA. The carboxylic acid can also be converted to ester prodrugs for better lipophilicity and bioavailability. Using lipophilic acyl derivatives of guanidine as prodrugs of zanamivir and guanidino-oseltamivir can mitigate the problem of low bioavailability. The C7-OH of zanamivir and the C2-OH of peramivir, which point outward from the active site of influenza NA, are suitable for derivatization. Conjugating zanamivir molecules to porphyrin not only enhances the NA inhibitory activity, but also effectively activates molecular oxygen to kill influenza viruses. The ZA-caffeate and PE-caffeate conjugates show higher efficacy than their parental compounds (ZA or PE) in treating mice infected with human or avian influenza viruses. Using congeners and conjugates is a viable strategy to develop orally available anti-influenza drugs that are also active against mutant viruses. Interdisciplinary collaboration is essential in the development of new anti-influenza drugs, and synthetic chemists play an important role in reaching this goal.
"year": 2019,
"sha1": "abf58f79363f906bfef19dbe31de48f7e6838d95",
"oa_license": "CCBY",
"oa_url": "https://jbiomedsci.biomedcentral.com/track/pdf/10.1186/s12929-019-0567-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abf58f79363f906bfef19dbe31de48f7e6838d95",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Effect of combination therapy of siRNA targeting growth hormone receptor and 5-fluorouracil in hepatic metastasis of colon cancer
The aim of this study was to investigate the effects of small interfering RNA (siRNA) targeting the human growth hormone receptor (hGHR) combined with 5-fluorouracil (5-FU) on the hepatic metastasis of colon cancer. An animal model of liver metastasis using human SW480 colon cancer cells was established in BALB/c mice, and an siRNA interfering plasmid targeting the hGHR gene was constructed. The tumor-bearing mice were randomly divided into saline control, plasmid, growth hormone (GH), 5-FU, 5-FU+plasmid and 5-FU+plasmid+GH groups, and liver metastasis in each group was observed. All the animals developed liver metastases, and treatment with the siRNA-interfering plasmid significantly reduced the incidence of liver metastases compared with the saline or GH group. The combined treatment of interfering plasmid and 5-FU slightly decreased the incidence of liver metastases compared with the plasmid alone or 5-FU alone, although the difference was not statistically significant. On the basis of the combination of interfering plasmid and 5-FU, additional GH did not increase the incidence of liver metastases (P>0.05), but improved the weight loss of the mice (P<0.05) induced by the inhibition of GHR and the toxicity of 5-FU. The present results showed that siRNA targeting hGHR is able to reduce the incidence of liver metastases of human SW480 colon cancer cells in mice. Thus, GHR may be important in tumor metastasis.
Introduction
Colon cancer cells that invade the blood circulation readily form metastases in the liver (1). The currently available treatments for hepatic metastasis of colonic carcinoma mainly focus on surgical removal of the metastasis; however, such treatments do not yield satisfactory results (2,3). Hepatic metastasis is considered the primary reason for the failure of colon cancer treatment in the clinic, affecting patient prognosis and long-term survival (survival rate 10-20%) (4). The growth hormone receptor (GHR) gene is located on chromosome 5, and GHR is widely distributed in human organs and tissues. Findings of a preliminary study showed that GHR is highly expressed in human colorectal carcinoma (5,6). Growth hormone (GH), a 191-amino acid monomeric peptide secreted by acidophilic cells of the anterior pituitary, induces cell differentiation and maturation by binding its receptor GHR, initiating anabolism inside the cells and promoting cell proliferation (7,8). GH/GHR plays an essential role in the occurrence of colon cancer and the development of hepatic metastases. Utilizing RNA interference technology, we constructed a small interfering RNA (siRNA) plasmid to interfere with GHR expression under the diseased conditions of hepatic metastasis in colon cancer.
Materials and methods
Animals and cell lines. Thirty-six 8-week-old BALB/c mice with a body mass of 20-22 g were randomly selected from the Vital River Lab Animal Technology Co., Ltd. (Beijing, China). The mice were housed in a specific pathogen-free environment. The SW480 colorectal cancer cell line was purchased from the Cell Resource Center of the Shanghai Institutes for Biological Sciences at the Chinese Academy of Sciences (Shanghai, China). The study was approved by the Ethics Committee of Xiangyang Hospital Affiliated to Hubei University of Medicine, and humane treatment of the mice was ensured.
Cell suspension preparation. The human SW480 colonic cancer cells were cultivated in RPMI-1640 medium (Sigma, St. Louis, MO, USA) containing 10% fetal bovine serum (Thermo Fisher Scientific, Waltham, MA, USA), penicillin (100×10³ U/l) and streptomycin (100 mg/l), and incubated at 37˚C with 5% CO2. The cells were collected at the exponential growth stage with 0.25% trypsin and mechanically dissociated to obtain a cell suspension, followed by centrifugation at 200 × g for 5 min. The supernatant was discarded and normal saline (NS) was added to adjust the cell concentration to 1×10⁷/ml. Cell viability (95%) was assessed using trypan blue (Chongqing Chemicals Co., Chongqing, China).
Introduction of cancer cells via animal surgery. Mice were weighed prior to anesthesia. Adequate anesthesia was confirmed by decreased limb tension, an unresponsive corneal reflex and the disappearance of skin pain. Under aseptic conditions, an oblique incision of 0.5-1.0 cm was made in the left rear of each animal, below the junction of the left posterior axillary line and the costal margin. The abdominal cavity was then exposed, and the spleen was obliquely punctured with a size five needle along its length into the membrane below. After advancing the needle approximately 0.5 cm under the membrane, human SW480 colonic cancer cells were injected into the spleen. Each mouse was injected over 3 min with 0.1 ml of cell suspension (1×10⁶ cells/mouse). When the spleen membrane swelled and became white, the needle was withdrawn and pressure was applied to the site with cotton for disinfection, avoiding leakage of cancer cells and bleeding. The spleen was then returned and the abdominal cavity was closed. After recovery, the mice were fed a regular diet.
Construction of siRNA synthesis and eukaryotic expression vector. The mRNA sequence of the human GHR gene was obtained from the GenBank database; the full-length genomic sequence was 4,414 bp (accession no. X06562, GI: 31737). According to the siRNA design principles (http://bioinfo.clontech.com/rnaidesigner), an siRNA was designed online against the hGHR locus at 1602-1622 bp: TGGTCTCACTCTGCCAAGAAA. Restriction sites for BamHI and HindIII were added, and the construct was inserted by enzyme digestion into the eukaryotic expression vector pcDNA™6.2-GW/EmGFP-miR, which was designated pcDNA™6.2-GW/EmGFP-miR-GHR-4 (G4).
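Basic sanity checks on the published 21-nt target site can be scripted directly, as sketched below: GC content (a common siRNA design heuristic of roughly 30-60%, which is a general guideline rather than a criterion stated in this paper) and the antisense (guide) strand as the reverse complement.

```python
# Sketch: basic checks on the published 21-nt GHR target site (1602-1622 bp).
# The ~30-60% GC guideline is a general siRNA design heuristic, not a
# criterion stated in this paper.
TARGET = "TGGTCTCACTCTGCCAAGAAA"
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

gc = (TARGET.count("G") + TARGET.count("C")) / len(TARGET)
print(f"Target length: {len(TARGET)} nt, GC content: {gc:.0%}")
print(f"Antisense (guide) strand: {reverse_complement(TARGET)}")
```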
Interfering plasmid. The bacterial culture containing the interfering plasmid was added to LB medium and cultured with shaking (220 rpm) overnight at 37˚C. High-purity plasmid was extracted using a Plasmid Midiprep kit (high-quality; Sigma) according to the manufacturer's instructions.
Inoculation of animals. The mice were injected on the first day of inoculation with tumor cells. Based on the delivery and dose of treatment, the animals were divided as follows: i) NS, each mouse was injected with 10 µl NS into the abdominal cavity. ii) Plasmid G4, animals were injected subcutaneously with the eukaryotic expression plasmid at 10 µg/mouse. iii) GH, each mouse was injected subcutaneously with rhGH at 2 IU/kg. iv) 5-FU, intraperitoneal injection at a dose of 20 mg 5-FU/kg. v) FU+G4, intraperitoneal injection at a dose of 20 mg 5-FU/kg plus 10 µg G4/mouse. vi) FU+G4+GH, intraperitoneal injection of 20 mg 5-FU/kg plus 10 µg G4/mouse and rhGH at 2 IU/kg. The mice in the above groups were injected once every 3 days, for a total of 10 injections.
The observation index. Body mass, volume of drinking water, food intake, and mental and activity condition were observed on the 1st, 5th, 10th, 15th, 20th, 25th and 30th day prior to and following inoculation. On the 30th day after inoculation, the animals were sacrificed by cervical dislocation, and the liver and spleen were collected and fixed in 4% formaldehyde solution. Paraffin-embedded sections (4 µm) were prepared, followed by hematoxylin and eosin (H&E) staining for histology.
Statistical analysis. Data are shown as the mean ± standard deviation. One-way analysis of variance was used for multi-group comparisons, and the q-test was used for comparisons between two groups. P<0.05 was considered to indicate a statistically significant result.
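For illustration, the stated analysis (one-way ANOVA across the treatment groups at the 0.05 level) can be reproduced with SciPy as sketched below; the body-mass values are invented placeholders, not the study's data.

```python
# Sketch: one-way ANOVA across the six treatment groups, as stated in the
# statistics section. The body-mass values are illustrative placeholders,
# not the actual study data.
from scipy.stats import f_oneway

groups = {
    "NS":       [21.1, 20.8, 21.4, 20.9],
    "G4":       [20.5, 20.9, 20.2, 20.7],
    "GH":       [22.0, 21.8, 22.3, 21.9],
    "5-FU":     [19.8, 20.1, 19.5, 20.0],
    "FU+G4":    [19.9, 20.3, 19.7, 20.2],
    "FU+G4+GH": [21.2, 21.5, 21.0, 21.4],
}

stat, p = f_oneway(*groups.values())
verdict = "significant" if p < 0.05 else "not significant"
print(f"F = {stat:.2f}, P = {p:.4f} -> {verdict} at the 0.05 level")
```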
Results
Animal weight. BALB/c mice inoculated with human SW480 colon cancer cells survived. Following surgery, the mice were weighed every 5 days (Table I). On the first day, the body mass of the mice in each group appeared to decrease, with reduced activity and lower consumption of water and food; this may be explained by the anesthesia and the surgical procedure. By the 15th day, the mice in the GH group had regained their body mass (21.87±0.74) to the preoperative level (21.93±0.58). By the 30th day, the body mass of these mice (22.00±0.46) had increased slightly compared with that prior to the operation. In the FU+G4+GH group, the weight of the mice recovered by the 4th week. For the remaining four groups, the body mass of the mice was decreased compared with that prior to the operation (NS, 21…) (Table I).
Morphology of liver metastases. Metastatic tumors were identified on the liver surface of the BALB/c mice, indicating a liver metastasis rate of 100%. The liver volume became smaller, and the texture appeared crisp and hard. The metastatic tumor foci on the liver surface were mainly identified at the lobe margins and on the visceral surfaces of the lobes, especially the right lobe. Liver micrometastases in 26 animals showed a diffuse distribution with a grayish-white appearance. Some tumor surface ulceration was observed, and the hepatic tissue was destroyed (Table II).
Histology. H&E staining of the tissues showed that the intrahepatic metastatic tumors were clustered. The normal liver lobule structure was destroyed, cell volume was reduced, and the cancer cells were poorly differentiated with obvious atypia; karyopyknosis, karyorrhexis, dissolution and mitoses were increased in the nucleus and cytoplasm (Fig. 1). Various grayish-white nodules with a diameter of 0.2-3 mm were evident in part of the inoculated spleens. The tumor formation rate of orthotopic inoculation was 100%. The cancer cells of the splenic orthotopic inoculation were mainly distributed near the splenic sinusoids, with obvious atypia, condensed chromatin and more intense staining (Fig. 2). The morphological structure of the liver metastases was similar to that of the tumor nodules of the splenic orthotopic inoculation, conforming to the structural features of poorly differentiated colonic adenocarcinoma.
(Table I caption: The variation of mouse body mass in each group of liver metastases.)
Discussion
Hepatic metastases occur in almost 500,000 colon cancer patients during disease progression annually (9). The liver is the primary target organ of hematogenous gastric and colorectal metastasis (10), and colorectal cancer has the highest rate of hepatic metastasis among alimentary tract cancers. Currently, the treatment for hepatic metastasis of colon cancer is mainly focused on hepatectomy combined with adjuvant chemo- and radiotherapy. However, surgery may not be a viable treatment option for advanced-stage patients. Additionally, post-surgery tumor cells are often not sensitive to chemotherapy, leading to treatment failure. The patient survival rate following surgery is between 50 and 70% (11-13).
Previous studies have focused on the molecular mechanisms of hepatic metastasis of colorectal cancer in order to select appropriate tumor-associated genes as therapeutic targets and to identify relevant therapies (1). Previous findings have shown that a tissue-specific 5-fluorocytosine/cytosine deaminase thermochemotherapy system can effectively achieve targeted inhibition of hepatic metastases of colon cancer in nude mice (14). RNAi technology employs double-stranded RNA that is complementary to endogenous mRNA in cells, leading to specific degradation of the mRNA so that the encoding gene is not expressed, i.e. gene silencing. The emergence of RNAi has been beneficial for studying gene function and identifying targets for gene therapy (15). rhGH is widely used in the surgical field to optimize metabolism, enhance the immune system, relieve postoperative fatigue, promote wound healing, maintain the intestinal mucosal barrier, and reduce bacterial translocation. rhGH binds to its receptor (GHR) on the cell surface and, via the GH-GHR-insulin-like growth factor (IGF) axis, triggers a series of biological effects (16). In recent years, investigators have found that the GH-GHR-IGF axis markedly contributes to the occurrence, development and metastasis of malignant tumors (17,18). GHR is highly expressed in colon cancer tissues, and rhGH can promote the proliferation, differentiation and metastasis of residual postoperative tumor cells. Previous findings have suggested that rhGH should be employed with care in colon cancer patients with high GHR expression (19-25). Considering the role of GHR in tumor metastasis, the aim of the present study was to investigate the GHR response by constructing a GHR siRNA. GH secretion is stimulated following colon surgery, during which cancer cells can flow back into the venous circulation and disseminate to the liver via the blood, leading to hepatic metastasis.
In the present study, we developed a hepatic metastasis mouse model by injecting human SW480 colon cancer cells into the spleens of BALB/c mice. The animals were treated with the GHR siRNA-interfering plasmid, with or without the addition of 5-FU, and the formation of hepatic metastatic tumors in the BALB/c mice was investigated. The experimental results showed that GHR siRNA is capable of inhibiting the hepatic metastasis of human SW480 colon cancer cells. Comparison of the hepatic metastasis inhibition ratio of the FU+G4+GH group with that of the 5-FU alone or FU+G4 groups yielded no statistically significant difference. However, in terms of improving food intake and body mass of the mice, the FU+G4+GH group had an advantage over the latter two groups. When GH is combined with GHR-siRNA and 5-FU, the hepatic metastasis ratio of SW480 cells is not increased. Additionally, GHR-siRNA is capable of selectively inhibiting the metastasis of human SW480 colon cancer cells, confirming the significant role GHR plays in tumor metastasis (26-28). Therefore, the results of the present study further enhance our understanding of treating colon cancer through the combination therapy of 5-FU and siGHR.
"year": 2015,
"sha1": "3193842035eeea4277b1aa382540e94f40fe64cd",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2015.3770/download",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3193842035eeea4277b1aa382540e94f40fe64cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Bovine ultralong CDR-H3 derived knob paratopes elicit potent TNF-α neutralization and enable the generation of novel adalimumab-based antibody architectures with augmented features
In this work we have generated cattle-derived chimeric ultralong CDR-H3 antibodies targeting tumor necrosis factor α (TNF-α) via immunization and yeast surface display. We identified one particular ultralong CDR-H3 paratope that potently neutralized TNF-α. Interestingly, grafting of the knob architecture onto a peripheral loop of the CH3 domain of the Fc part of an IgG1 resulted in the generation of a TNF-α neutralizing Fc (Fcknob) that did not show any potency loss compared with the parental chimeric IgG format. Eventually, grafting this knob onto the CH3 region of adalimumab enabled the engineering of a novel TNF-α targeting antibody architecture displaying augmented TNF-α inhibition.
Introduction
A subset of antibodies found in cattle harbors ultralong CDR-H3 regions of up to 70 amino acids that form a protruding paratope (Haakenson et al. 2018; Saini et al. 1999). From a structural perspective, these peculiar paratopes typically adopt a very similar structure consisting of a stalk region, composed of an ascending and a descending β-strand, and the knob domain (Wang et al. 2013). The knob domain, which is primarily responsible for antigen binding, displays vast structural diversity due to the presence of different disulfide bond patterns (Dong et al. 2019). These disulfide bonds rigidify the paratope and are critical for antigen binding (Svilenov et al. 2021). The main function of the stalk region lies in mediating structural stability (Passon et al. 2023; Svilenov et al. 2021), and it was demonstrated by Smider and co-workers that mutations within this architecture might impede the functionality of the paratope to a certain degree (Stanfield et al. 2020). Notwithstanding, Macpherson and co-workers were able to chemically synthesize a solitary knob architecture by solid-phase peptide synthesis, proving that the presence of a stalk region is not an absolute requirement for the knob paratope to function adequately (Macpherson et al. 2021a).
As of now, the knob-based paratope represents the smallest antibody-derived paratope that can be harnessed by immunization. Most recently, efforts were made to generate antigen-specific knobs. It was first shown by Macpherson and co-workers that autonomous knob paratopes can be generated in a sophisticated process involving ultralong CDR-H3 Fab expression followed by tobacco etch virus protease-mediated cleavage and release of the knob domain (Macpherson et al. 2020). In addition, our group engineered knob-Fc fusion proteins, referred to as Knobbodies (Pekar et al. 2021b). Interestingly, in this study, only a fraction of knob regions that were functional within the natural Fab context were adequately produced as Knobbodies. In addition, Knobbodies were reduced in their binding capacities compared with their parental Fabs. While affinities were fairly equal between knob-Fc fusions and their IgG counterparts, cellular binding capacities as well as killing capacities via antibody-dependent cell-mediated cytotoxicity (ADCC) were diminished. Remarkably, this is in line with findings of Smider and co-workers, who produced independent knob paratopes targeting SARS-CoV-2 as TrxA-knob constructs in Escherichia coli followed by enterokinase cleavage of TrxA, which functions as a chaperone (Huang et al. 2023). While binding affinities were largely retained, neutralization potencies were significantly diminished for the independent knob domains compared with the parental Fab counterparts.
Besides exploiting the knob as a solitary antigen-binding unit, this miniprotein has also been utilized as a building block for the generation of more sophisticated multifunctional antibody derivatives. Our group recently demonstrated that the ultralong CDR-H3 repertoire in cattle can be readily harnessed for the generation of bispecific antibodies (Klewinghaus et al. 2022). Due to the underlying genetics of bovine ultralong CDR-H3 antibodies (ultralong CDR-H3 heavy chains typically pair with a single VL gene that is reasonably sequence-conserved; Dong et al. 2019; Stanfield et al. 2016, 2020; Wang et al. 2013), this specific repertoire can be contemplated as an almost natural source of common light chain bispecifics (Krah et al. 2018). Additionally, Hawkins et al. implanted antigen-specific knob domains into rat serum albumin as well as into a CDR-distal loop of a VHH domain, enabling the facile implementation of a novel binding specificity into a protein with a predefined function and resulting in bifunctional molecules (Hawkins et al. 2022). This approach was later broadened by the same group by grafting albumin-binding knob domains onto the VH framework III loop for the generation of bispecific Fab fragments with extended pharmacokinetics (Adams et al. 2023). Most recently, our group has shown that target-specific knob paratopes can also be grafted onto peripheral loops (AB and EF) of the CH3 domain of the Fc region of IgG1, enabling the generation of a novel symmetric bispecific antibody format (Yanakieva et al. 2023). Essentially, Fc-mediated effector functions were not significantly impeded.
TNF-α is a pleiotropic and proinflammatory cytokine that is generally considered one of the main mediators in the pathogenesis of autoimmune inflammatory diseases (Li et al. 2017; Moudgil and Choubey 2011). Consequently, several different inhibitors of TNF-α have been granted marketing approval for disease treatment, such as infliximab, etanercept, certolizumab pegol and golimumab (Jang et al. 2021). In this regard, adalimumab, a fully human IgG1 blocking the interaction of TNF-α with its cognate receptors, has proven to be of utmost relevance in terms of both therapeutic efficacy and commercial performance (Figure 1A) (Bain and Brazil 2003; Urquhart 2023).
In this work, we have generated TNF-α specific bovine ultralong CDR-H3 paratopes by combining animal immunization with yeast surface display. Interestingly, when reformatted as a chimeric bovine × human IgG (Figure 1B), one particular paratope elicited quite strong neutralization of TNF-α. For further characterization, we engineered the Knobbody counterpart (Figure 1C) derived from this ultralong CDR-H3 paratope as well as Fc knob variants, by engrafting the respective knob region onto the AB loop or EF loop of the CH3 domain of the Fc region (Figure 1D). Essentially, grafting the knob domain onto the peripheral loops (AB or EF) of the CH3 domain of adalimumab enabled the construction of a novel tetravalent antibody architecture with significantly enhanced neutralization capacities (Figure 1E).
Results
2.1 Yeast surface display enables the isolation of TNF-α specific ultralong CDR-H3 paratopes

We have previously described a platform process for specifically harnessing the bovine ultralong CDR-H3 repertoire by combining animal immunization with yeast surface display (Pekar et al. 2021b).
The same procedure was applied for the generation of TNF-α targeting ultralong paratopes. In brief, two cattle were immunized with recombinant human (rh) TNF-α. Following immunization, based on the peripheral blood mononuclear cell (PBMC) repertoire and the RNA as well as cDNA derived therefrom, the ultralong CDR-H3 repertoire encoding the stalk and knob architectures was specifically amplified by PCR. To this end, forward primers were exploited which specifically annealed at the 3′ nucleotide extension of IGHV1-7 (also referred to as VHBUL), as well as reverse primers annealing to framework region 4 (FR4), as described elsewhere (Arras et al. 2023c). Based on this diversity, a chimeric (bovine × human) Fab library was constructed in a process involving homologous recombination (gap repair cloning (Benatuil et al. 2010)) followed by yeast mating (Roth et al. 2019; Weaver-Feldhaus et al. 2004), giving rise to diploid cells and consequently allowing for functional chimeric Fab display. Essentially, the constructed library, with a size of approximately 6 × 10⁷ unique clones, was subjected to library sorting by employing fluorescence activated cell sorting (FACS). For this, an antigen concentration of 1 µM was used. To co-select for functional Fab display besides binding functionality, we applied a two-dimensional staining strategy involving a detection antibody binding to the constant region of the λ light chain. As shown in Figure 2, we were able to significantly enrich for a (rh) TNF-α binding population within three rounds of FACS sorting. Sequencing unveiled 77 unique ultralong CDR-H3 regions. Within this set, clonotyping was performed by evaluating sequence distances, setting a threshold of >97 % CDR-H3 sequence identity (approximately a 2-residue difference), which resulted in 26 sequence clusters. The overall similarity between these clusters exhibited great variation, with a sequence identity of only 38.5 % on average.
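To make the clonotyping step concrete, the sketch below implements a greedy clustering of unique CDR-H3 amino acid sequences at a >97 % identity threshold. It is only a minimal illustration of the described logic under stated assumptions: the placeholder sequences, the simple ungapped identity measure, and the greedy representative-based grouping are ours, since the actual analysis was performed in Geneious Prime.

def identity(a: str, b: str) -> float:
    """Ungapped fraction of matching positions, relative to the longer sequence."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def clonotype(seqs: list[str], threshold: float = 0.97) -> list[list[str]]:
    """Greedy clustering: a sequence joins a cluster if >threshold identical to its representative."""
    clusters: list[list[str]] = []
    for s in seqs:
        for cluster in clusters:
            if identity(s, cluster[0]) > threshold:
                cluster.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Hypothetical CDR-H3 sequences; with the 77 real sequences this procedure yielded 26 clusters.
cdrh3s = ["CTTVHQKTKKPC", "CTTVHQKTRKPC", "CARDVSGYDYW"]
print(len(clonotype(cdrh3s)), "clusters")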
2.2 A subset of bovine × human chimeric ultralong CDR-H3 IgG antibodies significantly neutralizes TNF-α
For antibody production, we nominated all 26 ultralong CDR-H3 paratopes, which were reformatted and produced as bovine × human IgG1 antibodies harboring bovine variable domains followed by human constant regions (Figure 1B). Expression yields were in the double-digit milligram per liter scale, indicating adequate production profiles in general (Supplementary Table 1). Likewise, besides several outliers, size exclusion chromatography (SEC) target peaks post protein A purification were above 85 % for most of the generated ultralong CDR-H3 antibodies. As (rh) TNF-α is a trimeric molecule and the generated chimeric bovine × human ultralong CDR-H3 IgGs were produced as bivalent antibodies, fine determination of affinities was not possible in a biolayer interferometry (BLI) setting. Consequently, we only performed qualitative binding experiments at a single (rh) TNF-α concentration of 100 nM (Supplementary Figure 1) and focused on determination of the biological functionality, i.e. neutralization capacities. Eleven out of 26 chimeric antibodies showed specific binding to (rh) TNF-α as determined by BLI. Those were scrutinized more meticulously in terms of their TNF-α inhibition potential by exploiting TNF-α reporter cells, which produce a secreted embryonic alkaline phosphatase reporter upon pathway activation (HEK-Blue™ TNF-α cells; see Materials and Methods). Within this set, one clone (B.01) robustly neutralized (rh) TNF-α. This molecule was considered for further engineering and characterization. In this respect, we aimed at investigating the targeted epitope of B.01 in direct comparison to adalimumab. For this, an epitope binning experiment was conducted using BLI, in which (rh) TNF-α was captured, followed by two consecutive association steps with B.01 followed by adalimumab. As controls, adalimumab or B.01 was exploited for both steps (Supplementary Figure 3). Compared with the controls, a second association was observed for adalimumab after binding of B.01 under saturating conditions. However, the interference pattern shift was clearly diminished in contrast to association of adalimumab with (rh) TNF-α only, indicating either overlapping but non-identical epitopes addressed by B.01 and adalimumab, or epitopes in close proximity resulting in steric hindrance.
2.3 Engineering of an adalimumab-derived antibody architecture with augmented inhibition capacities by grafting an ultralong CDR-H3 knob paratope onto the Fc region
We have previously described the generation of Knobbodies, in which the knob region of ultralong CDR-H3 regions is directly grafted onto the hinge region followed by the Fc portion (Pekar et al. 2021b). To further characterize whether the 44-amino-acid knob domain of ultralong CDR-H3 antibody B.01 autonomously functions as a paratope enabling efficient TNF-α neutralization, we generated a Knobbody derived thereof (Figure 1C). In addition, we constructed Fc knob versions of B.01, in which the knob architecture was grafted onto either the peripheral AB or EF loop of the CH3 region of the Fc part (Figure 1D). This novel antibody format has recently been described by our group (Yanakieva et al. 2023). Herein, the knob domain is flanked on both termini with a Gly4Ser linker and replaces either the EEMTK motif of the AB loop or the DKS motif of the EF loop of the CH3 region, respectively. Finally, in order to investigate whether the knob derived from B.01 enables the construction of a novel version of adalimumab with enhanced neutralization potencies, we also grafted this knob onto the CH3 AB or EF loop of adalimumab (Figure 1E). Of note, all sequences are given in Supplementary Figure 2. The different knob-based formats derived from B.01 were produced in Expi293™ cells. Expression yields were in the double-digit milligram per liter scale and fairly similar to the production of adalimumab (Table 1). Except for the Knobbody version of B.01 (SEC target peak of 80.5 %), purities as determined by SEC following protein A purification were generally above 85 % target species, indicating adequate properties (Supplementary Figure 5). Furthermore, all different architectures showed specific binding to (rh) TNF-α, as determined by BLI (Supplementary Figure 4). Subsequently, all constructs were further assessed with respect to their neutralization capacities in a reporter cell assay using a broad concentration range (Figure 4A, HEK-Blue™ TNF-α cells). In this assay, adalimumab was quite potent in inhibiting (rh) TNF-α based stimulation of the reporter cells (IC50 = 0.095 nM), while the chimeric ultralong CDR-H3 IgG (B.01) was attenuated in direct comparison (IC50 = 1.348 nM). In accordance with previously published data (Pekar et al. 2021b; Yanakieva et al. 2023), generating the Knobbody version resulted in compromised functionality, i.e. reduced potency of (rh) TNF-α neutralization by approximately 27-fold (IC50 = 36.73 nM). This is in contrast to the Fc knob prototypes of B.01: both the AB loop and the EF loop engraftments of the B.01 knob paratope were quite similar to the IgG version in (rh) TNF-α neutralization (IC50 = 0.877 nM for Fc knob AB and 2.973 nM for Fc knob EF). Most importantly, when the Fc knob architectures were incorporated into the CH3 domain of adalimumab, this resulted in augmented (rh) TNF-α inhibition. In this regard, the B.01 knob engraftment into the EF loop of the CH3 domain of adalimumab only slightly improved neutralization potencies (IC50 = 0.052 nM for adaFc knob EF vs. 0.095 nM for adalimumab). The AB loop engraftment, however, considerably enhanced potencies in terms of (rh) TNF-α neutralization (IC50 = 0.006 nM for adaFc knob AB). This is in line with target engagement assays (Supplementary Figure 4) revealing superior binding capacities, especially for adaFc knob AB. Here, the interference pattern shift (BLI) at a fixed concentration of 100 nM was clearly increased for the B.01-derived knob engrafted onto the CH3 domain of adalimumab compared with all other constructs, including both parental molecules (adalimumab as well as B.01 IgG).
It is known that TNF-α elicits direct killing of various cancer cell lines (Carswell et al. 1975; Dakhel et al. 2021). In another attempt to further analyze the different antibody architectures, we scrutinized the inhibition of TNF-α mediated killing of murine L929 cells (Humphreys and Wilson 1999). For this, killing was assessed at a (rh) TNF-α concentration of 10 nM. The different antibody-derived constructs were added at a concentration of 8 nM. As shown in Figure 4B, at this concentration, adalimumab only minorly inhibited killing of L929 cells triggered by TNF-α. Likewise, the ultralong CDR-H3 IgG B.01, the Knobbody based on B.01, as well as the Fc knob derivatives only slightly reduced TNF-α induced killing. On the contrary, adalimumab-derived molecules with CH3-engrafted knob domains, adaFc knob AB as well as adaFc knob EF, substantially inhibited TNF-α mediated killing of L929 cells. Eventually, this resulted in trends of enhanced potencies of inhibiting TNF-α mediated killing (IC50) elicited by adaFc knob AB as well as adaFc knob EF (IC50 = 3.76 nM and 3.75 nM, respectively) compared with adalimumab (IC50 = 11.22 nM, Supplementary Figure 6). Taken together, these findings give clear evidence that TNF-α specific Fc knob designs based on cattle-derived ultralong CDR-H3 paratopes can be exploited for the generation of novel adalimumab-based antibody formats with augmented features.
Discussion
In order to efficiently inhibit the proinflammatory cytokine TNF-α, we have generated and engineered antibody-derived architectures based on bovine ultralong CDR-H3 paratopes. TNF-α is a master regulator of inflammatory processes, and consequently, multiple TNF-neutralizing therapeutics have been approved for the treatment of autoimmune diseases (Jang et al. 2021; Li et al. 2017). To generate TNF-α inhibiting molecules, we immunized cattle with (rh) TNF-α. It has been known for quite some time that the adaptive immune repertoire in cattle produces a subset of antibodies with peculiarly long CDR-H3 regions of up to 70 amino acids (Haakenson et al. 2018; Saini et al. 1999, 2003). This set of antibodies has been efficiently harnessed to address several different antigens, such as viral components (Huang et al. 2023; Sok et al. 2017; Wang et al. 2013), complement proteins (Macpherson et al. 2021a, 2021b), serum albumin (Adams et al. 2023), or cellular receptors (Klewinghaus et al. 2022; Pekar et al. 2021b). Intriguingly, it was demonstrated that the main antigen-binding architecture of the CDR-H3, the knob region, can function as an autonomous paratope (Huang et al. 2023; Macpherson et al. 2020), paving the way for a plethora of different engineering options (Adams et al. 2023; Hawkins et al. 2022; Macpherson et al. 2021a; Pekar et al. 2021b; Yanakieva et al. 2023).
From the PBMC repertoire of immunized cattle, we were able to specifically enrich for TNF-α targeting ultralong CDR-H3 antibodies by yeast surface display (Doerner et al. 2014; Valldorf et al. 2022). Within the set of eleven sequence-diverse IgGs (Figure 1B), only one particular clone (B.01) enabled a robust inhibition of TNF-α in a reporter cell assay. Compared with adalimumab, which is from a commercial perspective one of the most successful biotherapeutics to date (Urquhart 2023), neutralization potencies were reduced by approximately 14-fold. In a recent study that was inspired by work conducted by Rüker and co-workers (Wozniak-Knopp et al. 2010), our group was able to show that antigen-specific knob domains can be grafted onto peripheral loops of the CH3 domain of an IgG, giving rise to a novel symmetric bispecific antibody format (Yanakieva et al. 2023). To investigate whether the TNF-α neutralizing knob derived from B.01 can also function in an autonomous manner, we grafted this domain onto the AB loop as well as onto the EF loop of the CH3 domain of an effector-silenced Fc domain (Fc knob, Figure 1D). In addition, we also engineered the Knobbody version of B.01 (Figure 1C), in which the knob domain is fused onto the hinge region of the Fc part (Pekar et al. 2021b). In accordance with previous findings (Pekar et al. 2021b; Yanakieva et al. 2023), while still efficiently neutralizing TNF-α in a reporter cell assay at high concentrations, the potency of the Knobbody version of B.01 was significantly attenuated compared with the parental IgG (by approximately 27-fold). This is in stark contrast to the Fc knob versions of B.01, which demonstrated quite similar potencies in inhibiting TNF-α mediated reporter cell activation in comparison to the parental IgG. From epitope binning experiments, we were able to deduce that B.01 most likely targets an overlapping but non-identical epitope compared with adalimumab, or an epitope in close proximity. Consequently, we set out to investigate whether biparatopic versions of adalimumab with higher valencies, constructed as Fc knob versions of adalimumab (adaFc knob, Figure 1E), would result in augmented neutralization potencies. Strikingly, especially B.01 engrafted onto the AB loop of the CH3 domain of adalimumab (adaFc knob AB) was significantly more potent in neutralizing TNF-α than adalimumab alone. In this respect, the IC50 for TNF-α neutralization was improved by more than 15-fold. These findings were further substantiated in a TNF-α based killing assay of L929 cells. At a fixed antibody concentration of 8 nM, adalimumab only moderately inhibited killing of L929 cells, whereas both engineered adaFc knob versions robustly neutralized the killing capacity of TNF-α. To the best of our knowledge, we were able to demonstrate for the first time that ultralong CDR-H3 regions can be exploited for the neutralization of proinflammatory cytokines and, most importantly, that cattle-derived knob domains can be harnessed to create novel versions of pre-existing therapeutic antibodies with augmented features. Notwithstanding, the foreign nature of the knob domain might pose a substantial issue in terms of immunogenicity when administered to patients. In contrast to camelid-derived VHH domains, which have proven versatile for biomedical applications (Duggan 2018; Keam 2023; Markham 2022) and can be readily humanized in different ways (Arras et al. 2023a, 2023b; Rossotti et al. 2022; Sulea 2022; Vincke et al. 2009), it remains to be determined whether cattle-derived knob paratopes can be sufficiently humanized.
Materials and Methods

Cattle immunization
Two female cattle (Bos taurus) were immunized with (rh) tumor necrosis factor alpha (TNF-α, AcroBiosystems) at preclinics GmbH, Germany. All experimental procedures and animal care adhered to local laws and regulations for animal welfare. Briefly, each immunization involved the subcutaneous administration of 400 μg TNF-α, dissolved in 2 mL phosphate buffered saline (PBS) and mixed with 2 mL of Fama adjuvant (GERBU Biotechnik). Injections were performed at multiple sites. A total of six immunizations were conducted over 84 days (on days 0, 28, 42, 56, 70, and 84). On day 88, 100 mL of whole blood was collected from both animals, followed by the extraction of total RNA and subsequent complementary DNA (cDNA) synthesis.
Library construction

For library construction, gap repair cloning following the method of Benatuil and colleagues was employed (Benatuil et al. 2010). In each electroporation reaction, we used 12 μg of CDR-H3 PCR product and 4 μg of BsaI-HF (New England Biolabs)-digested heavy chain destination plasmid (pHC). The estimated library size was determined by dilution plating on SD-Trp agar plates. To achieve Fab display, EBY100 cells containing the heavy chain diversity and BJ5464 cells carrying the single light chain were combined through yeast mating (Weaver-Feldhaus et al. 2004).
Yeast surface display library sorting
For library sorting, cells were cultured overnight in SD-Trp-Leu medium at 30 °C and 120 rpm. Afterwards, cells were harvested by centrifugation and used to inoculate SG-Trp-Leu medium at an OD600 of 1.0, followed by incubation for 2 days at 20 °C. Fab expression was detected by incubation with a light chain-specific goat F(ab')2 anti-human lambda R-phycoerythrin (R-PE) conjugate during the initial sorting round, or with an Alexa Fluor 647 conjugate for subsequent enrichments (both from SouthernBiotech, diluted 1:20). Simultaneous staining for antigen binding was carried out using a Penta-His Alexa Fluor 647 conjugate antibody (Qiagen, diluted 1:20) for sorting round 1 or an Anti-6X His tag® PE antibody (Abcam, diluted 1:40) for sorting rounds 2 and 3. For this, cells were harvested, washed twice with PBS (Sigma Aldrich), and incubated with (rh) TNF-α at a concentration of 1 μM for 30 min on ice. After three washing steps, cells were incubated with the secondary labeling reagents for an additional 30 min on ice. Subsequently, cells were again washed thrice with PBS and resuspended in an appropriate volume for FACS sorting using a BD FACSAria™ Fusion cell sorter (BD Biosciences).
DNA of the enriched yeast library was purified using the MasterPure Yeast DNA Purification Kit (Lucigen) and transformed into electrocompetent E. coli Top10 (Invitrogen) according to the manufacturer's manual. 192 clones were picked from LB-amp agar plates and sent for Sanger sequencing (Microsynth). Analysis of the sequencing chromatograms and clonotyping was performed with Geneious Prime® 2021.1.1.
Antibody expression and purification
For antibody expression, the pTT5 vector system was employed (Durocher 2002). Chimeric bovine × human IgGs as well as adalimumab and its engineered derivatives were expressed using an Fc effector-silenced backbone (Pekar et al. 2021a). The B.01-derived Knobbody and Fc knob engraftments as well as the adaFc knob derivatives were constructed as described elsewhere (Pekar et al. 2021b; Yanakieva et al. 2023). Sequences are given in Supplementary Figure 2. All proteins were expressed in a volume of 5 mL using the ExpiCHO™ expression system (Thermo Fisher Scientific) or at 25 mL scale harnessing Expi293™ cells (Thermo Fisher Scientific) according to the manufacturer's manual, employing a 2:1 ratio for the heavy chain and light chain, respectively. After 7 days, the protein-containing supernatants were purified exploiting MabSelect™ antibody purification chromatography resin (Cytiva). After sterile filtration, protein concentrations were determined by A280 absorption measurement. For the assessment of protein sample quality regarding monomer content [%], analytical size exclusion chromatography (SEC) was applied using 7.5 µg protein per sample on a TSKgel UP-SW3000 column (Tosoh Bioscience).
Biolayer interferometry
To evaluate the binding capacities of all cattle-derived proteins to (rh) TNF-α, we employed the Octet RED96 system (ForteBio, Pall Life Science) with 1000 rpm agitation at 25 °C.Initial confirmation of binding of the different antibodies and antibody architectures involved loading at 3 μg/mL in PBS for 180 s on anti-human Fc (AHC2) biosensors, followed by a 60 s sensor rinsing step in PBS.The subsequent association with (rh) TNF-α was measured for 180 s using 100 nM of the antigen.
For the investigation of a potential epitope overlap between the cattle-derived clone B.01 and adalimumab, a competition analysis was conducted. Each protein was loaded onto AHC biosensors at 3 μg/mL in PBS for 300 s. A biosensor quenching step, comprising association of human IgG Fc protein (Sigma Aldrich) at 20 μg/mL for 200 s, preceded sensor rinsing in PBS for 30 s. Subsequently, association with (rh) TNF-α was performed for 600 s at 100 nM, followed by another association step with either adalimumab or the cattle-derived sample for 600 s at 100 nM in the presence of 20 μg/mL human IgG Fc. Finally, association of the secondary antibody (100 nM) was determined for 600 s in the presence of 20 μg/mL human IgG Fc protein (Sigma Aldrich). Each biolayer interferometry experiment included appropriate negative controls, such as an unloaded sensor control and unrelated antigens, to validate specificity. The resulting data were analyzed using ForteBio data analysis software 12.2 after Savitzky-Golay filtering.
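As an aside on data processing, the following minimal sketch shows Savitzky-Golay smoothing of a noisy sensorgram of the kind analyzed here. The sampling rate, window length, and polynomial order are illustrative assumptions; the paper only states that Savitzky-Golay filtering was applied within the ForteBio analysis software.

import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 600, 1200)  # hypothetical 600 s association phase sampled at 2 Hz
raw = 0.8 * (1 - np.exp(-t / 120)) + np.random.normal(0, 0.01, t.size)  # synthetic trace
smoothed = savgol_filter(raw, window_length=51, polyorder=3)  # odd window, cubic local fit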
Human TNF-α HEK reporter assay
To assess the activation of the NF-κB -AP-1 pathway, the TNF-α HEK-Blue™ assay (InvivoGen) was conducted following the manufacturer's instructions.
In brief, 5 × 10⁴ cells were seeded into a 96-well plate and stimulated with 1 ng/mL (rh) TNF-α, added 10 min after the addition of 25 nM or 5 nM of the anti-TNF-α antibody samples, for a 24-h incubation period at 37 °C and 5 % CO2. In parallel, cells were also incubated with identical concentrations of TNF-α, with and without anti-TNF-α sample treatment, serving as positive and negative controls, respectively.
For IC50 determination, samples were titrated in a 1:5 serial dilution ranging from 500 nM to 3.3 fM and tested in the presence of a final concentration of 1 ng/mL (rh) TNF-α. To account for background, measurements were taken for cell culture medium only. After 24 h, 20 μL of cell culture supernatant was mixed with 180 μL QUANTI-Blue medium in a fresh 96-well plate and incubated for 1 h at 37 °C and 5 % CO2. Optical density was measured at 640 nm using a multi-mode microplate reader (Synergy HTX, BioTek), and the results were normalized to the positive control of 1 ng/mL TNF-α prior to data analysis using one-way ANOVA with Dunnett's multiple comparisons test. p-values < 0.05 were considered statistically significant.
L929 cytotoxicity assay
To assess the inhibition of TNF-α induced cytotoxicity in L929 cells (ATCC), a neutralization assay was conducted. For this, 1 × 10⁵ L929 cells were seeded into a black, clear-bottom 384-well microtiter plate (MTP, Greiner). After 1 h, anti-TNF-α samples were added and incubated for 15 min, followed by the addition of 10 nM (rh) TNF-α and 30 nM SYTOX™ Green Dead Cell Stain (Invitrogen). Dead cell signals were measured using an IncuCyte system (Sartorius), incubating the MTP at 37 °C and 5 % CO2. Experiments were performed with biological duplicates, and cell death was normalized to TNF-α treated cells without antibody treatment. Controls included untreated cells and staurosporine-treated cells as negative and positive controls, respectively. Data were analyzed with a variable slope four-parameter fit, and significance was determined using one-way ANOVA with Dunnett's multiple comparisons test (p < 0.05).
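For readers unfamiliar with the variable slope four-parameter fit used for the IC50 values throughout this work, the sketch below fits such a model to a synthetic dilution series. The concentrations, signal values, and starting parameters are invented for illustration and do not reproduce the actual assay data.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Variable slope four-parameter logistic: signal falls from top to bottom with dose."""
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

conc = 500e-9 / 5 ** np.arange(12)             # 1:5 serial dilution starting at 500 nM
signal = four_pl(conc, 0.05, 1.0, 1e-10, 1.2)  # synthetic, noise-free response
popt, _ = curve_fit(four_pl, conc, signal, p0=[0.0, 1.0, 1e-9, 1.0], maxfev=10000)
print(f"fitted IC50 = {popt[2]:.3g} M")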
Molecular modeling
To create three-dimensional models for structural visualization, the full-length structures of adalimumab and the chimeric ultralong CDR-H3 IgG were built using the antibody modeler tool in the molecular modeling software package MOE (Molecular Operating Environment 2022.02; Chemical Computing Group Inc., 2022). Structural models of the Knobbody and of the knobs engrafted into the Fc CH3 AB or EF loops were built by generating models of the knobs and adding them to the respective Fc regions at the sites of linkage or engraftment via MOE's protein builder and linker modeler. Finally, energy minimization of all constructs was performed. Visualization of 3D structures was done with PyMOL (The PyMOL Molecular Graphics System, Version 2.5.7, Schrödinger, LLC).
Figure 1: Structural models of generated anti-TNF-α antibodies. Architectures of tested molecule formats based on the anti-TNF-α Fab (green) and the cattle-derived ultralong CDR-H3 knob (magenta), assessed for their ability to neutralize TNF-α in functional assays. Depiction of adalimumab (A) and the chimeric ultralong CDR-H3 IgG (B) as well as engineered formats thereof. Ultralong CDR-H3 knob engraftment onto the human IgG Fc portion is indicated as format C (Knobbody), while introduction of the knob structure into the CH3 AB or EF peripheral loops (Fc knob AB/EF) is shown as format D. The combination of adalimumab with the knob CH3 AB/EF engraftment (adaFc knob AB/EF) is indicated as format E. Schemes were generated using PyMOL software version 2.3.0.
Figure 2: Yeast surface display enables the isolation of anti-TNF-α antibodies. FACS-based enrichment of the ultralong CDR-H3 library by applying a two-dimensional sorting strategy for simultaneous detection of (rh) TNF-α binding and Fab display. Dot plots show a representative 10⁶ cells for each of the three conducted sorting rounds, employing 1 µM of antigen. Applied sorting gates and the corresponding cell populations (as % of total cells) are shown.
Figure 4: Cattle-derived ultralong CDR-H3 knob-based antibodies mediate dose-dependent neutralization of TNF-α and augment the inhibitory capacity of adalimumab. Assessment of the inhibitory capacities of the bovine × human chimeric antibody B.01 and Fc-silenced adalimumab as well as engineered derivatives (A) on HEK-Blue™ TNF-α reporter cells (rh TNF-α at 1 ng/mL) and in an orthogonal cytotoxicity assay (B) using TNF-α susceptible L929 cells (antibody samples at 8 nM and rh TNF-α at 10 nM). Dose response curves ± SEM were calculated using a non-linear regression variable slope four-parameter fit from at least three independent experiments. Sample points were compared using an ordinary one-way ANOVA with Dunnett's multiple comparisons test. *p ≤ 0.05, **p ≤ 0.01, ****p ≤ 0.0001 (compared with (A) adalimumab or (B) TNF-α treatment only).
Table 1: Expression yields were determined after protein A purification. Purities were determined by analytical size exclusion chromatography post protein A purification. Neutralization potencies were assessed in a HEK-Blue™ reporter cell assay and are presented ± the standard error.
Identification of a gadd45β 3′ Enhancer That Mediates SMAD3- and SMAD4-dependent Transcriptional Induction by Transforming Growth Factor β*
GADD45β regulates cell growth, differentiation, and cell death following cellular exposure to diverse stimuli, including DNA damage and transforming growth factor-β (TGFβ). We examined how cells transduce the TGFβ signal from the cell surface to the gadd45β genomic locus and describe how GADD45β contributes to TGFβ biology. Following an alignment of gadd45β genomic sequences from multiple organisms, we discovered a novel TGFβ-responsive enhancer encompassing the third intron of the gadd45β gene. Using three different experimental approaches, we found that SMAD3 and SMAD4, but not SMAD2, mediate transcription from this enhancer. Three lines of evidence support our conclusions. First, overexpression of SMAD3 and SMAD4 activated the transcriptional activity from this enhancer. Second, silencing of SMAD protein levels using short interfering RNAs revealed that TGFβ-induced activation of the endogenous gadd45β gene required SMAD3 and SMAD4 but not SMAD2. In contrast, we found that the regulation of plasminogen activator inhibitor type I depended upon all three SMAD proteins. Last, SMAD3 and SMAD4 reconstitution in SMAD-deficient cancer cells restored TGFβ induction of gadd45β. Finally, we assessed the function of GADD45β within the TGFβ response and found that GADD45β-deficient cells arrested in G2 following TGFβ treatment. These data support a role for SMAD3 and SMAD4 in activating gadd45β through its third intron to facilitate G2 progression following TGFβ treatment.

* This work was supported by a predoctoral fellowship from the Pharmaceutical Research and Manufacturers of America Foundation (to M. B. M.), a grant from the Huntsman Cancer Foundation, a grant from the Willard L. Eccles Foundation, and University of Utah Core Facility Technical Support Grant CA42104.
Members of the transforming growth factor β (TGFβ) superfamily play fundamental roles in development and adult tissue homeostasis. The epithelial response to members of this family is highly varied and includes such diverse cellular processes as proliferation, movement, differentiation, and apoptosis. Indeed, cells harboring mutations within the signal transduction proteins or the TGFβ target genes either fail to respond or respond inappropriately to the TGFβ signal, often leading to developmental problems, oncogenesis, fibrotic disease, metastasis, and autoimmune disorders. Greater understanding of how cells interpret the TGFβ signal will facilitate the prevention, detection, and treatment of various human diseases.
The central elements of TGFβ signal transduction are now known (1, 2). TGFβ activates the serine/threonine kinase activity of a multimeric receptor complex. Activation of this complex initiates a cascade of intracellular events that culminate in altered gene expression. The SMAD proteins form the foundation of this signaling network, since they are the only proteins directly phosphorylated by the receptor complex. However, these transcription factors are by no means sufficient to impart a TGFβ response. To specifically target a gene for transcriptional regulation, the SMADs require assistance by accessory factors. Consequently, the presence and activity of these accessory factors are as important to the TGFβ transcriptional program as are the SMAD proteins. By designing the system in such a way, cell-specific responses to TGFβ can be achieved. Further, the logic of the TGFβ signaling network explains how the cell integrates multiple signals to generate highly specific phenotypic responses.
In an attempt to better understand how TGFβ regulates gene transcription and how those gene products contribute to TGFβ biology, we have partially defined the TGFβ transcriptional profile in normal human mammary epithelial cells (HMEC). cDNA microarray expression analysis of TGFβ-treated HMEC revealed a set of genes involved in cellular proliferation, differentiation, and apoptosis. One of these genes, gadd45β/hMyD118, is regulated by TGFβ in multiple cell types, thus suggesting that this gene is of central importance to the TGFβ response. GADD45β and two similar small acidic nuclear proteins, GADD45α and GADD45γ, make up the GADD45 family (3). All three proteins regulate diverse cellular mechanisms including cell growth, DNA repair, differentiation, and apoptosis, four phenotypes that are also controlled by TGFβ signaling. Aside from sequence similarity, these genes share transcriptional regulation by DNA damage insult and growth factors. gadd45β is, however, the only member of this family regulated by TGFβ (4, 5). gadd45β was first discovered as a transcript rapidly induced by either TGFβ treatment or the onset of terminal differentiation in M1 murine myeloid cells (6, 7). Subsequent studies employing antisense-mediated silencing established GADD45β as an important regulator of the G2/M checkpoint following genotoxic stress (8) and of apoptosis during M1 myeloid cell terminal differentiation (6, 9). Human GADD45β, which was first identified in a complex containing the p38-activating kinase MTK1 (MEKK4), is now a well established regulator of p38 activity and consequently of p38-regulated biology (5, 10, 11). TGFβ activates p38 kinase activity and induces apoptosis in normal murine hepatocytes, but not in hepatocytes derived from gadd45β knockout mice (11). An initial characterization of the molecular mechanism by which TGFβ induces gadd45β transcription has recently been reported. First, reconstitution of SMAD4 expression in SMAD4-null pancreatic cell lines restored gadd45β induction by TGFβ (5). The nature of the TGFβ-SMAD-gadd45β link appears to be direct; exogenously expressed SMAD2 and SMAD4 or SMAD3 and SMAD4 induce gadd45β proximal promoter activity 3–4-fold (11). However, the relative importance and function of each SMAD protein in the transcriptional activation of the endogenous gadd45β gene is not known.
Utilizing RNA interference and reconstitution of SMAD3 and SMAD4 protein expression in SMAD-deficient cell lines, we exclude SMAD2 and include SMAD3 and SMAD4 as transcription factors involved in the TGFβ induction of gadd45β. Additionally, through a genomics-based approach, we identified a SMAD-dependent TGFβ-responsive enhancer encompassing the third intron of gadd45β. The importance of this enhancer is indicated by a 3-fold greater transcriptional induction following TGFβ treatment than the transcriptional effects mediated by 5′ promoter sequences. Finally, using a cell system that does not undergo TGFβ-induced apoptosis but does respond to TGFβ by gadd45β transcriptional induction, we establish an apoptosis-independent role for GADD45β as an important mediator of G2/M progression following TGFβ treatment.
MATERIALS AND METHODS
Cell Culture and Drug Treatments-The following cell lines were cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 2.0 mM L-glutamine, 1.0 mM sodium pyruvate, penicillin, and streptomycin and split every third day or at 80% confluence: Mv1Lu (CCL64), HaCaT, HeLa, 293, and 10T1/2. HT29 adenocarcinoma colon cells were cultured in McCoy's medium supplemented with 10% fetal bovine serum. The HepG2 and JAR cell lines were cultured in minimal essential medium and RPMI supplemented with 10% fetal bovine serum, respectively. We obtained all of the cell lines from ATCC except for the HaCaT immortalized keratinocyte cell line, which was a kind gift from D. Grossman (University of Utah, Salt Lake City, UT); the Mv1Lu cells, which were a kind gift from D. Ayer (University of Utah); and the JAR cells, which were a kind gift from E. Adashi (University of Utah). Human mammary epithelial cells (HMEC) were obtained from BioWhittaker (Walkersville, MD) and cultured in complete mammary epithelial growth medium. HMEC were seeded at passage 7 or 8 and harvested at no greater than 80% confluence for all experiments. For treatments with TGFβ (isoform type 1; Peprotech, Rocky Hill, NJ), we found little to no difference with respect to gene transcription if the cells had been previously serum-starved. The vehicle control for TGFβ comprised 4 mM HCl, 1 mg/mL bovine serum albumin. Cycloheximide and actinomycin D (Calbiochem) were used at 10 and 5 μg/mL, respectively, with treatments as described in Fig. 2.
RNA Interference-siRNAs were designed to specifically target either smad2, smad3, or smad4 in accordance with the guidelines developed by Tuschl et al. (12). Because the sequences of the mink smad2 and smad3 cDNAs are unknown, siRNAs were designed against the human sequences. The human-designed smad2 and smad3 siRNAs efficiently and specifically silenced mink SMAD2 and SMAD3 protein expression, thus indicating that these sequences are conserved in mink. We designed the smad4 siRNA-A and siRNA-B against the mink sequence, and consequently they do not silence human smad4 (data not shown). The control siRNA (scrambled siRNA; siScr) specifically recognizes human smad4 and thus does not affect mink SMAD2, SMAD3, or SMAD4 expression. The sequences of the chemically synthesized and high pressure liquid chromatography-purified RNA oligomers are as follows (sense strand shown): Smad2, 5′-UCUUUGUGCAGAGCCCCAAtt; Smad3, 5′-ACCUAUCCCCGAAUCCGAUtt; Smad4-A, 5′-GGACGAAUAUGUUCAUGACtt; Smad4-B, 5′-UUGGAUUCUUUAAUAACAGtt; siScr, 5′-GGAUGAAUAUGUGCAUGACtt. To silence gadd45β expression, three siRNAs were designed (sense strand shown): siGadd45β-A (5′-GUUGAUGAAUGUGGACCCAtt), siGadd45β-B (5′-AUCCACUUCACGCUCAUCCtt), and siGadd45β-C (5′-CUUGGUUGGUCCUUGUCUGtt). Of these three siRNAs, siGadd45β-A was the most efficacious and was used to generate the data seen in Fig. 8. All RNA oligomers were reconstituted and annealed following the protocol of Tuschl et al. (12). Mv1Lu cells were plated 24 h prior to transfection and transfected at 70% confluence. All siRNAs were transfected using 18 μL of LipofectAMINE 2000 per 10-cm plate according to the manufacturer's guidelines (Invitrogen). For the SMAD silencing experiments, total RNA or protein was isolated 40–48 h after transfection. In time course experiments, we found that maximal silencing occurred 36 h after transfection for all three SMAD proteins (data not shown). For gadd45β silencing, cells were treated with vehicle or TGFβ 3 h after the start of siGadd45β transfection, for an additional 2 h prior to RNA isolation or 12 h prior to flow cytometry.
Plasmids and Genomic Alignments-We electronically cloned the human, murine, and rat gadd45β genomic loci from publicly available sequence databases. Approximately 8 kb of each genomic locus, starting 5000 bp upstream of the transcriptional start site, was aligned using the MAVID alignment algorithm (13, 14). The portion of this piece of genomic DNA showing conservation among all three species is shown in Fig. 7A. The G45-1 (−1470 bp, +362 bp), G45-2 (−972 bp, +362 bp), G45-3 (−476 bp, +362 bp), G45-A (−1535 bp, −1042 bp), G45-B (−572 bp, −79 bp), and G45-C (+941 bp, +1428 bp) reporter constructs were created as follows. The indicated region of the human gadd45β genomic locus was PCR-amplified from HMEC genomic DNA and cloned into the pCR2.1-TOPO vector (Invitrogen). These DNAs were then subcloned into pGL3basic, sequence-verified, and utilized in subsequent dual luciferase assays. J. Massagué generously provided the 3TPLux reporter construct (Memorial Sloan-Kettering Cancer Center, New York). The murine Smad7 cDNA (generously provided by R. Derynck, University of California, San Francisco, CA) was subcloned into pcDNA3.1. Similarly, the FLAG-tagged Smad expression vectors used in the reporter experiments were created by subcloning the cDNAs from constructs provided by D. Satterwhite (University of Utah, Salt Lake City, UT) into pCMV2-FLAG. For luciferase assays, all reporters were co-transfected with an SV40-Renilla luciferase reporter plasmid that was used to normalize transfection efficiencies. For retroviral infections, we PCR-amplified the smad3 or smad4 open reading frames from HMEC cDNA and then cloned them into the pBabe retroviral vector. D. Ayer generously provided the GFP-pBabe vector (University of Utah, Salt Lake City, UT).
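To illustrate the kind of conservation profile an alignment like this produces, the sketch below computes a per-window fraction of perfectly conserved columns across three aligned sequences. The window size, gap handling, and scoring are our simplifying assumptions rather than MAVID's actual scoring scheme, and the toy alignment is a placeholder.

def window_conservation(aln: list[str], window: int = 50) -> list[float]:
    """Fraction of columns in each sliding window where all aligned sequences agree."""
    length = len(aln[0])
    scores = []
    for start in range(length - window + 1):
        identical = sum(
            len({seq[i] for seq in aln}) == 1  # one distinct character means conserved
            for i in range(start, start + window)
        )
        scores.append(identical / window)
    return scores

# Toy aligned fragments standing in for human, mouse, and rat; '-' marks gaps.
alignment = ["ACGTACGTTACGT", "ACGTACGGTACGT", "ACGTACGTT-CGT"]
print(window_conservation(alignment, window=5))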
Luciferase Assays-Fugene 6 (Roche Applied Science) was used to transfect HaCaT cells as instructed by the manufacturer. We seeded cells at a density of 80,000 cells/well in 24-well plates and transfected them the next day. Transfections were performed using 0.6 μg of DNA (including either 0.1 μg of normalization vector and 0.5 μg of reporter vector, or 0.1 μg of normalization vector, 0.2 μg of reporter vector, and 0.3 μg of expression vector), and cells were harvested 20 h after the start of transfection. For TGFβ treatment, medium containing either TGFβ (200 pM) or an equal volume of vehicle was added to the cells 3 h after the start of transfection. Luciferase values were analyzed using a dual luciferase assay system (Promega). Dividing the firefly luciferase activity from each well by the Renilla luciferase activity from the same well normalized for transfection efficiency. Data in each experiment are presented as the mean ± S.D. of triplicates from a representative experiment. All experiments were performed at least three times, producing qualitatively similar results.
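The well-by-well normalization described above is simple enough to state as code; the sketch below divides firefly by Renilla counts for hypothetical triplicate wells and reports the mean ± S.D. All numbers are placeholders, not measured values.

import statistics

firefly = [15200, 14800, 16100]   # hypothetical firefly luciferase counts, triplicate wells
renilla = [820, 790, 860]         # hypothetical Renilla counts from the same wells
ratios = [f / r for f, r in zip(firefly, renilla)]
print(f"{statistics.mean(ratios):.1f} ± {statistics.stdev(ratios):.1f}")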
Retroviral Transduction-Expression of the GFP, SMAD3, or SMAD4 retroviral constructs was verified by Western blot in a transient assay prior to virus production. To produce the retrovirus, Phoenix helper cells were seeded in 60-mm plates 24 h prior to transfection with LipofectAMINE 2000. 24 h after transfection began, we split the cells 1:3 into 10-cm plates. 48 h after the cells had been split, virus-containing medium was removed from the Phoenix cells, filtered (0.22 μm; low protein binding filter), and added to a 6-well plate containing the HT29 or JAR target cells (at 60% confluence). We added Polybrene (4 μg/mL) to the virus immediately before transduction of the target cells to facilitate infection. 24 h after infection, the target cells were split into 10-cm plates and placed under selection with 750 ng/mL puromycin for 10 days.
Quantitative Real Time PCR-PCR was performed in duplicate (or triplicate for 18S rRNA) with a master mix consisting of cDNA template, buffer (500 mM Tris, pH 8.3, 2.5 mg/mL bovine serum albumin, 30 mM MgCl2), dNTPs (2 mM), TaqStart antibody (Clontech), Biolase DNA polymerase (Bioline), gene-specific forward and reverse primers (10 μM), and SYBR Green I (Molecular Probes, Inc., Eugene, OR). The PCR conditions were as follows: 35 cycles of amplification with 1-s denaturation at 95 °C and 5-s annealing at 57 °C for gadd45β or 53 °C for 18S rRNA. A template-free negative control was included in each experiment. We determined the copy number by comparing gene amplification with the amplification of standard samples that contained 10³ to 10⁷ copies of the gene, or 10⁵ to 10⁹ copies for 18S rRNA. The relative expression level of each gene was calculated by averaging the replicates and then dividing the average copy number of gadd45β by the average copy number of 18S rRNA. The S.E. of the ratios was calculated using a confidence interval.
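A minimal sketch of the absolute quantitation just described: fit a standard curve of Ct versus log10(copies) for the 10³ to 10⁷ copy standards, convert sample Ct values to copy numbers, and normalize gadd45β to 18S rRNA. All Ct values here are invented for illustration.

import numpy as np

std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
std_ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])        # illustrative standard-curve Cts
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def copies(ct: float) -> float:
    """Invert the standard curve: Ct back to estimated template copy number."""
    return 10 ** ((ct - intercept) / slope)

gadd45b = copies(np.mean([24.2, 24.4]))                   # duplicate wells
rrna18s = copies(np.mean([11.0, 11.1, 10.9]))             # triplicate wells
print(gadd45b / rrna18s)                                  # relative expression level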
Northern and Western Blotting-Total RNA was isolated using Trizol following the manufacturer's protocol (Invitrogen). Where indicated, total RNA isolation was followed by poly(A) RNA selection using a PolyATtract™ mRNA Isolation kit (Promega). Total RNA or poly(A) RNA was fractionated through formaldehyde-containing agarose gels and transferred onto Hybond-N+ nylon membranes (Amersham Biosciences). Labeled probes were generated using the Rediprime II random prime labeling system (Amersham Biosciences) supplemented with [³²P]dCTP (ICN). To generate Northern blot probes, we PCR-amplified gene-specific sequences from human, mink, or murine cDNA. Mink gadd45β was PCR-amplified using the following degenerate primers: 5′-CTGCARATYCACTTCACSCT and 3′-GGRAYCCAYTGGTTDTTGC. Hybridizations with ³²P-labeled probes were carried out using ULTRAhyb buffer (Ambion) as recommended by the manufacturer. For Western blotting, protein lysates were harvested in a buffer containing 25 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1 mM CaCl2, 1% Triton X-100, 0.1 mM phenylmethylsulfonyl fluoride, 0.1 mM benzamidine, 1 mg/mL pepstatin A, and 1 mg/mL phenanthroline. The resulting whole cell lysates were centrifuged at 20,800 × g for 10 min at 4 °C. Following protein quantitation using the DC Protein Assay (Bio-Rad), equal amounts of protein were fractionated through Tris-glycine 4–12% gradient Nu-PAGE gels using the MES buffer system (Invitrogen). The following antibodies were used to detect the SMAD proteins in both Mv1Lu and human cell lines: SMAD3 (catalog no. 51-1500; Zymed Laboratories Inc.), SMAD2 (catalog no. S66220; Transduction Laboratories), SMAD4 (catalog no. sc-7966; Santa Cruz Biotechnology), and β-catenin (catalog no. 610153; Transduction Laboratories). Immune complexes were visualized using a secondary antibody conjugated to horseradish peroxidase (Amersham Biosciences) and Western Lightning chemiluminescence reagent (PerkinElmer Life Sciences).
RESULTS

gadd45β Is a Primary TGFβ-responsive Gene in Normal Human Mammary Epithelial Cells-TGFβ induces a G1 cell cycle arrest and epithelial to mesenchymal transition, but not apoptosis, in primary normal human mammary epithelial cells grown in culture (15) (data not shown). To understand the mechanisms behind these TGFβ-induced phenotypes, we partially defined the TGFβ transcriptome in normal human mammary epithelial cells (HMEC). Specifically, we used cDNA microarray expression analysis to determine the relative expression of 7000 genes at 2 and 12 h after TGFβ treatment in HMEC. Data analysis revealed 54 up-regulated and 10 down-regulated TGFβ-regulated genes. Genes included in this list had a fold change of greater than 1.3 or less than 0.7 at both time points and a p value of less than 0.05 at both time points (Supplemental Table I and methods therein). Next, we identified genes within this data set that were in common with TGFβ-regulated genes identified through transcriptional profiling in other TGFβ-responsive cell systems. We surmised that because genes in this subgroup were regulated by TGFβ irrespective of cell origin or transformation status, they would be of central importance to the TGFβ cytostatic program. Plasminogen activator inhibitor-1 (PAI1) is a well established TGFβ-induced gene and was induced 8-fold 2 h after TGFβ treatment in HMEC (Supplemental Table I). Consequently, PAI1 served as an important positive control in the microarray, in Northern blots, and in subsequent experiments (Fig. 1A). A second common TGFβ target gene identified in our expression analysis was gadd45β/hMyD118. In addition to our studies in primary normal HMEC, previous findings indicate that gadd45β is a TGFβ-induced gene in transformed cell lines derived from myeloid, breast, skin, pancreas, and bone (4, 5, 11, 16). Because of its frequent presence in the TGFβ transcriptional response and because of its previously described role in growth arrest, differentiation, and apoptosis, we chose to characterize the upstream signal transduction pathway necessary for gadd45β transcriptional induction and, second, to examine the role of GADD45β in the TGFβ response.

FIG. 1. gadd45β is a TGFβ-inducible gene. A, various established cell lines that have been previously reported to be TGFβ-sensitive by some measure were treated with TGFβ for 1 h prior to total RNA isolation and Northern blotting for gadd45β and PAI1. The PAI1 3.2-kb transcript is shown; we could not detect the 2.2-kb PAI1 transcript in Mv1Lu or 10T1/2 using a human, mink, or murine PAI1 probe. Extended exposure of the Northern blots and additional experiments not shown verified that the human PAI1 and gadd45β Northern probes are capable of recognizing their respective orthologues. The 18S ribosomal band was visualized in the ethidium bromide-stained gel prior to Northern blotting and serves as the loading control. B, randomly cycling HMEC were treated with either TGFβ (200 pM) or an equal volume of vehicle. At the indicated time, total RNA was isolated from the cells. Following mRNA purification, Northern blot analysis was performed to visualize the relative transcript abundance of the indicated genes. Both the 3.2- and 2.2-kb alternatively spliced forms of the mature PAI1 mRNA are shown. C, HMEC were treated with TGFβ (200 pM for 2 h) or BMP-2 (4 nM) for the indicated times before RNA isolation and Northern blot analysis for the gadd45β transcript. GAPDH serves as a loading control. The gadd45β and GAPDH signals were quantitated using a PhosphorImager, and the resulting gadd45β/GAPDH ratio was plotted below the Northern blots.
We first determined the scope of gadd45β transcriptional activation. Specifically, we monitored its induction by TGFβ in several cell lines and by other members of the TGFβ superfamily in HMEC. To determine whether other TGFβ-responsive cell lines responded similarly to HMEC with respect to gadd45β transcription, several cell lines were treated with TGFβ or vehicle for 1 h. The gadd45β and PAI1 transcripts were induced by TGFβ in the following cell lines: HMEC, HaCaT, Mv1Lu, PANC-1, primary breast organoid outgrowths, and to a lesser extent in HepG2 and HeLa cells (data not shown) (Fig. 1A). The Madin-Darby canine kidney and 293 cell lines did not respond to TGFβ stimulation by inducing either gadd45β or PAI1. TGFβ treatment of 10T1/2 murine fibroblasts caused a moderate increase in PAI1 transcription but did not affect gadd45β mRNA levels. We also asked whether other members of the TGFβ superfamily of growth factors could regulate gadd45β transcription. Fig. 1C illustrates that both TGFβ and BMP-2 induced gadd45β transcription. However, the kinetics of gadd45β induction as well as the strength of induction differed between the two ligands. Finally, of the three genes that comprise the gadd45 family, only gadd45β was found to be TGFβ-inducible in HMEC; gadd45α was not affected by TGFβ treatment, and gadd45γ was not detected (Fig. 1B). Gadd153/Chop10, a GADD family member by virtue of its induction by cellular stress, was transiently repressed by TGFβ.
To distinguish whether TGFβ treatment resulted in increased gadd45β transcription or increased gadd45β mRNA stability, we measured the gadd45β mRNA half-life before and after TGFβ treatment. HMEC were treated with TGFβ for 1 h before the addition of the transcription inhibitor actinomycin D for various periods of time. Quantitative analysis of the Northern blot revealed that TGFβ failed to stabilize the gadd45β mRNA (Fig. 2, A and B). The accumulation of gadd45β mRNA within 2 h of TGFβ treatment suggested that it is an immediate early TGFβ-induced target gene. To test this idea, we pretreated HMEC with the protein translation inhibitor cycloheximide 15 min before a 3-h combined TGFβ/cycloheximide treatment. We found that the levels of gadd45β increased in a TGFβ-dependent manner irrespective of cycloheximide pretreatment, indicating that new protein synthesis is not required for TGFβ induction of gadd45β (Fig. 2C). These data indicate that gadd45β is a direct TGFβ transcriptional target.
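For intuition about the half-life measurement just described, the sketch below fits a first-order decay to hypothetical PhosphorImager-quantitated signals collected after actinomycin D addition; the half-life then follows as ln(2)/k. Time points and signal values are invented, not the paper's data.

import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 30, 60, 120, 240])            # min after actinomycin D addition
signal = np.array([1.0, 0.72, 0.50, 0.26, 0.07])  # normalized gadd45β signal (invented)
decay = lambda t, k: np.exp(-k * t)            # first-order decay model
(k,), _ = curve_fit(decay, t, signal, p0=[0.01])
print(f"half-life ~ {np.log(2) / k:.0f} min")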
gadd45β Is Partly Dependent upon SMAD3 and Independent of SMAD2 in Its Regulation by TGFβ-We first sought to determine whether specific inhibition of SMAD2, SMAD3, and SMAD4 abrogated gadd45β responsiveness to TGFβ. To approach this, we employed siRNA-mediated silencing of the SMAD2, SMAD3, and SMAD4 proteins. Because we were unable to achieve silencing greater than 60% of wild-type levels in HMEC, we chose to use Mv1Lu cells for our siRNA studies. Transfection of Mv1Lu cells with siRNAs specific to SMAD2 or SMAD3 reduced the respective protein expression to nearly undetectable levels (Fig. 3A). Loss of SMAD2 caused a 70% decrease in the induction of PAI1 by TGFβ. In contrast, siRNA silencing of SMAD2 had no significant effect on gadd45β induction following TGFβ treatment (Fig. 3A). SMAD3-deficient cells, however, responded to TGFβ stimulation with reduced levels of induction for both gadd45β and PAI1. Although the decrease in PAI1 induction by TGFβ observed in the SMAD2 and the SMAD3 single-knockdown cells was enhanced in the double-knockdown cells, the SMAD2/SMAD3 double-knockdown cells behaved similarly to SMAD3-deficient cells with respect to gadd45β induction (Fig. 3A). Dose-response curves with the Smad3 siRNA (IC50 ~1 nM) further demonstrated that TGFβ activates PAI1 and gadd45β through a mechanism that is partly dependent upon SMAD3 (Fig. 3B).
SMAD4 Silencing Prevents gadd45β and PAI1 Induction by TGFβ-Of the many proteins involved in mediating the different facets of TGFβ signal transduction, SMAD4 is considered central to many of the responses. Two different siRNAs were designed against mink smad4, and the efficacy of their silencing was tested in Mv1Lu cells by Western blot (Fig. 4A). Consistent with the central role of SMAD4 in TGFβ signaling, siRNA silencing of SMAD4 resulted in a dramatic loss of gadd45β transcriptional induction following TGFβ treatment (Fig. 4B). As a confirmation of specificity, a human-specific SMAD4 siRNA, which contains mismatches at two positions relative to the mink sequence, did not affect SMAD4 protein expression or TGFβ-regulated transcription of gadd45β or PAI1. siSmad4-A and siSmad4-B both robustly silenced SMAD4 protein expression and did not interfere with SMAD3 protein expression (Fig. 4A). Examination of the gadd45β and PAI1 transcript levels in these SMAD4-deficient cells revealed a clear necessity for SMAD4 in targeting these genes for transcription following TGFβ stimulation (Fig. 4B). The small difference between siSmad4-A and siSmad4-B in silencing SMAD4 protein expression directly reflected the levels of gadd45β and PAI1 induction by TGFβ. siSmad4-A silences SMAD4 protein expression with an IC50 of less than 1 nM, which is consistent with the IC50 of silencing imparted by siSmad3 (compare Fig. 4C with Fig. 3B). The induction of the gadd45β and PAI1 transcripts by TGFβ in these cells showed close correlation with each other and with the levels of SMAD4 protein (Fig. 4C).
Finally, we asked whether Mv1Lu cells lacking SMAD2, SMAD3, and SMAD4 responded differently to TGFβ with respect to gadd45β transcriptional induction than cells deficient in only one or two of the SMADs. Mv1Lu cells were transfected with siRNAs directed against each of the SMADs alone and in all combinations thereof (Fig. 5A). Northern blot analysis of gadd45β again demonstrated a SMAD3 and SMAD4 dependence for TGFβ-induced transcription. Loss of SMAD2 in these SMAD3/SMAD4-deficient cells had no further effect on gadd45β induction. Interestingly, although PAI1 depends partly upon SMAD2 for TGFβ-induced transcription (Fig. 3A), loss of SMAD2 did not affect PAI1 induction in cells lacking SMAD3 and SMAD4 (Fig. 5B).
Our data generated with siRNA-mediated silencing revealed no differences between gadd45β and PAI1 with respect to their regulation by SMAD3 and SMAD4 (Figs. 3 and 4). To examine the roles of SMAD3 and SMAD4 more closely, we asked whether loss of SMAD3 in a SMAD4-reduced background would further inhibit gadd45β and PAI1 induction by TGFβ. Mv1Lu cells were transfected with a constant amount of siSmad4-A (15 nM) in the presence of increasing concentrations of siSmad3 (Fig. 5C). siRNA-mediated silencing of SMAD3 in a SMAD4-reduced background had no effect on PAI1 induction (Fig. 5, C and D). SMAD3 silencing in these SMAD4-deficient cells did, however, further repress the transcriptional induction of gadd45β following TGFβ treatment. These data support a transcriptional model that distinguishes gadd45β from PAI1 in their regulation by SMAD3 and SMAD4.
SMAD3 and SMAD4 Expression in SMAD3- and SMAD4-null Cancer Cells Reconstitutes TGFβ-mediated Induction of gadd45β-The second approach we utilized to study the transcriptional regulation of gadd45β by TGFβ relied upon the preponderance of inactivating mutations within the SMAD proteins in human cancer cell lines. HT29 colon adenocarcinoma cells do not express SMAD4 protein because of a nonsense mutation that renders the transcript unstable (17). JAR cells, on the other hand, do not express SMAD3 (18). TGFβ treatment of these cell lines results in the phosphorylation of SMAD2, indicating that both cell lines express functional TGFβ receptor complexes and that SMAD2 phosphorylation is not dependent upon SMAD3 or SMAD4 (Fig. 6A). Retroviral transduction followed by polyclonal selection of these cells with either a GFP-encoding retrovirus or a SMAD3- or SMAD4-encoding virus provided an experimental approach to further examine the role of the SMADs in gadd45β transcription. Two weeks after the transduced cells were placed under selection, expression of the transduced genes was verified by fluorescence microscopy (for GFP expression; data not shown) and Western blot (Fig. 6B). RNA harvested in parallel with the protein samples analyzed in Fig. 6B was reverse transcribed and used in real time quantitative PCR to measure gadd45β transcript levels. TGFβ treatment of JAR-SMAD3 cells revealed a small but statistically significant increase in gadd45β message levels (Fig. 6C). The SMAD4-HT29 cells responded to TGFβ with a robust induction of gadd45β (Fig. 6C). Northern blot analysis of these RNAs confirmed the quantitative PCR results (data not shown).
gadd45β Contains a TGFβ-responsive Enhancer That Encompasses the Third Intron-Next, we analyzed the gadd45β genomic locus for transcriptional responsiveness to TGFβ. First, 1500 bp of the proximal promoter of gadd45β was cloned upstream of firefly luciferase for use in reporter assays (G45-1) (Fig. 7A). TGFβ stimulation of HaCaT or Mv1Lu cells increased the transcriptional activity of G45-1, G45-2, and G45-3 approximately 2-fold (Fig. 7B) (Mv1Lu data not shown). In contrast, the endogenous gadd45β transcript levels increased 8–15-fold in responsive cell lines following TGFβ treatment (Fig. 1A). We were unable to see increased reporter activity when other portions of this 5′-flanking region were analyzed, in numerous other cell lines, or when the cells were treated for different lengths of time with TGFβ (Fig. 7A) (data not shown). We reasoned that because the gadd45β coding sequence is highly conserved between human, mouse, rat, and zebrafish, the region of the genomic locus mediating TGFβ responsiveness might also be conserved. To address this possibility, we aligned the human, mouse, and rat gadd45β genomic sequences and plotted the degree of conservation utilizing the MAVID algorithm (13, 14). In addition to the coding regions, three domains of the gadd45β genomic locus demonstrate high conservation between species. Each of these regions was cloned upstream of firefly luciferase and used in reporter experiments (Fig. 7A). Remarkably, TGFβ robustly activated transcription from the G45-C enhancer, which contains part of the third exon, the complete third intron, and part of the fourth exon, but not from G45-A or G45-B (Fig. 7B, Supplemental Fig. 1) (Mv1Lu data not shown). We took two approaches to test whether SMAD proteins were mediating TGFβ-dependent transcriptional induction from G45-C. First, the inhibitory SMAD7 protein was overexpressed to block SMAD activation. SMAD7 overexpression inhibited TGFβ-induced activation of the 3TPLux reporter, which contains the PAI1 promoter, and of the G45-C reporter, but it did not affect an SV40-driven luciferase construct (Fig. 7C). Second, overexpression of SMAD3 and SMAD4 greatly enhanced G45-C reporter activity in HaCaT cells (Fig. 7C) and in HeLa cells (data not shown). In contrast, SMAD2 expression did not affect the transcriptional activity. Interestingly, the increase in reporter activity was dependent upon both SMAD3 and SMAD4, because neither one alone significantly affected G45-C transcriptional activity. These data support a role for SMAD3 and SMAD4 in regulating gadd45β transcription through a 3′ enhancer that contains the third intron. Indeed, sequence analysis of G45-C revealed four conserved putative SMAD binding elements (SBEs) (Supplementary Fig. 1).
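As a simple illustration of the kind of motif scan that identifies putative SBEs, the sketch below searches a sequence for the canonical minimal SMAD binding element, the 4-mer AGAC, and its reverse complement GTCT. The input sequence is a placeholder, not the actual G45-C sequence, and a real analysis would additionally require cross-species conservation of each hit, as described above.

import re

g45c = "GGTACAGACTTTGTCTCCAGACAATGGTCTAAAGAC"  # placeholder enhancer sequence
hits = [(m.start(), m.group()) for m in re.finditer(r"AGAC|GTCT", g45c)]
print(hits)  # positions and orientations of candidate SMAD binding elements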
GADD45β Regulates G2 Progression following TGF-β Stimulation - To examine the contribution of GADD45β to the TGF-β phenotype, a siRNA was designed to silence gadd45β expression. Dose-response analysis revealed potent (IC50 ∼1 nM) and specific knockdown of TGF-β-induced gadd45β expression (Fig. 8A). TGF-β rapidly induces a G1 cell cycle arrest, but not apoptosis, in Mv1Lu cells. We asked whether Mv1Lu cells deficient in GADD45β would undergo a G1 cell cycle arrest. Introduction of a scrambled siRNA had no detectable effect on TGF-β-induced cell cycle arrest (Fig. 8B). However, cells containing reduced levels of gadd45β demonstrated a slight reduction in G1 accumulation and failed to progress through G2 following TGF-β treatment (Fig. 8B). Loss of the gadd45β transcript did not affect cell cycle progression in the absence of TGF-β treatment. Dose-response analysis further verified this finding; 0.01 and 0.1 nM siRNA did not significantly affect gadd45β transcript levels or cell cycle progression following TGF-β stimulation. These findings indicate that GADD45β is an important regulator of cell cycle progression following TGF-β treatment.
DISCUSSION
The intracellular domain of a ligand-bound TGF-β receptor complex ignites an intertwined cascade of signaling events that induces one of many possible phenotypic responses (1, 2). Consequently, the mechanism by which a cell decides how to respond to TGF-β is fundamental to many aspects of eukaryotic life. One approach to decipher the cellular interpretation of the TGF-β signal, and how that interpretation might be altered in a diseased tissue, is to define and utilize the TGF-β target genes as a starting point in a retrograde molecular characterization of the upstream transcriptional program. Concurrent studies would assess the gene function as it contributes to the phenotypic response. We have employed this approach to the gadd45β gene. We found that gadd45β transcriptional induction by TGF-β was dependent upon SMAD4, and to a lesser extent on SMAD3, but independent of SMAD2. Further, SMAD3 and SMAD4 mediated the transcriptional induction of gadd45β through an enhancer that encompasses the third intron of the gadd45β gene. Finally, TGF-β stimulation of gadd45β-deficient cells, but not of gadd45β-expressing cells, resulted in the activation of a G2/M checkpoint.
[Fig. 7 legend: gadd45β is activated by TGF-β through a 3′ enhancer. A, schematic representation of the gadd45β genomic locus. The relative positions of the exons and introns are indicated above the graph. The MAVID algorithm was used to determine the relative degree of sequence conservation between human gadd45β (x axis) and rat gadd45β (bottom half of plot; y axis) and murine gadd45β (top half of plot; y axis). The degree of genomic conservation is indicated by the height of the black curve. Below the graph is a schematic diagram of six pieces of the gadd45β genomic locus that were cloned upstream of firefly luciferase for use in subsequent reporter assays. Of note, G45-C contains 93 bp of exon 3, the complete third intron (237 bp), and 98 bp of exon 4 of gadd45β (Supplementary Fig. 1).]
We used RNA interference as a tool to probe the upstream signal transduction components necessary for gadd45β and PAI1 transcriptional induction following TGF-β stimulation. We chose Mv1Lu cells as a cell system for these studies rather than a human cell line such as HMEC or HaCaT, because we found that in these cells our siRNAs were more efficacious as compared with a panel of TGF-β-responsive human cells. In fact, silencing SMAD protein expression in HaCaT cells to 30% of wild-type levels resulted in no detectable effect on PAI1 or gadd45β transcription following TGF-β treatment. These results and our findings presented in Fig. 4 argue that, with respect to the transcriptional activation of gadd45β and PAI1, the SMAD proteins are expressed in excess. Coupled to the immediate early transcriptional induction of gadd45β and PAI1 by TGF-β, these data suggest that the gadd45β and PAI1 promoters share a relatively high affinity for the SMAD proteins. Further, this provides a possible molecular mechanism explaining how gadd45β and PAI1 are regulated by TGF-β irrespective of tissue type. Analogous findings have recently been discovered in Caenorhabditis elegans, where the FoxA protein, PHA4, achieves transcriptional discrimination among target genes through a differential affinity to gene promoter sequences (19). Consequently, high affinity PHA4 promoters are responsive to relatively low levels of PHA4 protein expression. Further studies are in progress to classify TGF-β transcriptional targets by their sensitivity to changes in SMAD protein expression.
With the exception of a few genes, such as p15 (20) and MMP2 (21), most well characterized immediate early TGF-β-regulated genes appear to depend upon SMAD3 and SMAD4, but not SMAD2, for TGF-β transcriptional regulation. Our work places gadd45β within this SMAD2-independent, SMAD3/SMAD4-dependent class of TGF-β-responsive genes. Our conclusion that gadd45β is a SMAD4-dependent TGF-β target gene agrees with the findings of Yoo et al. (11) and Takekawa et al. (5), who have also reported SMAD4 dependence in gadd45β regulation, although through different experimental approaches. Conversely, our finding that TGF-β regulated gadd45β independently of SMAD2 contradicts previous findings. Yoo et al. (11) recently reported that overexpression of SMAD2 and SMAD4 together, but not separately, induced a gadd45β reporter construct in a TGF-β-dependent fashion. Although additional work is necessary to reconcile these outcomes, they could result from cell type-specific responses to TGF-β (hepatocytes versus keratinocytes and fibroblasts). It is important to note, however, that SMAD2-deficient fibroblasts show gadd45β transcriptional induction following TGF-β with kinetics and efficacy similar to those of wild-type cells, an observation that is consistent with a SMAD2-independent model of gadd45β regulation (22).
In contrast to gadd45β, we found that SMAD2, SMAD3, and SMAD4 all contributed to TGF-β regulation of PAI1, although to varying degrees (Figs. 3-5). Extensive research on PAI1 has not implicated SMAD2 (23-25) in its regulation, with the notable exception that fibroblasts derived from smad2 knockout mice failed to induce PAI1 following TGF-β treatment (21). The ability of smad2 siRNAs to phenocopy the Smad2 knockout fibroblasts in this respect strongly supports the use of siRNA-mediated gene silencing in future TGF-β transcriptional studies. Clearly, genome-wide analysis of TGF-β responsiveness in SMAD-silenced or SMAD knockout cells will be of great importance.
Yoo et al. (11) have recently shown that 220 bp of the gadd45β proximal promoter is activated by TGF-β and that this activation is enhanced by overexpression of SMAD2, SMAD3, and SMAD4, but not by dominant negative forms of SMAD2 or SMAD3. Our data support their results in that we have also found the 5′ promoter sequence to be TGF-β-responsive (Fig. 7B). However, through a genomics-based alignment strategy, we identified a second TGF-β-responsive domain encompassing the highly conserved third intron of the gadd45β gene. In contrast to the 2-fold activation we observed with 5′ promoter sequences, the 3′ enhancer is activated 5-7-fold following TGF-β treatment. It will be important to determine which of the conserved transcription factor binding sites within this enhancer account for induction by TGF-β. Notably, we identified four conserved SMAD binding elements, three of which are located in exonic sequence (Supplementary Fig. 1). The endogenous gadd45β gene may likely respond to TGF-β through a concerted action of the 3′ enhancer and 5′ promoter sequences. A similar transcriptional model has been reported for the gadd45α gene, where highly conserved sequences within the third intron or fourth exon facilitate transcriptional induction following genotoxic stress (26, 27) and vitamin D3 treatment (28), respectively. Thus, in addition to primary sequence and genomic organization, gadd45α, gadd45β, and gadd45γ might also share an intronic/exonic enhancer as an important transcriptional regulatory element.
Last, utilizing the power of siRNA-mediated gene silencing, we discovered that Mv1Lu cells made deficient for gadd45β arrested at the G2/M checkpoint following TGF-β treatment. Previous research has established GADD45β as a negative regulator of cell cycle progression, and several molecular mechanisms behind this inhibition have been put forth (3). Following genotoxic stress, GADD45β acts to inhibit Cdc2/cyclin B1 kinase to induce a G2/M cell cycle checkpoint in RKO colon carcinoma cells (8). In contrast, normal fibroblasts microinjected with a GADD45β expression vector fail to undergo a G2/M arrest, although GADD45β was found to associate with Cdc2 in these cells (29). Our findings support these previous data in that we also see a GADD45β-dependent effect on the G2/M cell cycle checkpoint. However, we show that GADD45β acts to promote G2/M progression following TGF-β treatment in Mv1Lu cells (Fig. 8). This finding supports the notion that GADD45β does not act to modulate cell cycle progression in isolation; rather, the presence of other proteins might ultimately determine how cells respond to increases in GADD45β protein levels (3). Indeed, GADD45β associates with many nuclear proteins involved in cell cycle progression, including proliferating cell nuclear antigen, p21, GADD45α, and Cdc2/cyclin B1 (8, 30, 31).
Perhaps the most well understood function of the GADD45 family of proteins is their ability to regulate apoptosis through the activation of MTK1 (MEKK4) (32) and subsequently p38 kinase (5, 10). Although gadd45β is rapidly induced by TGF-β in Mv1Lu and HMEC cells, we have not detected an apoptotic response following TGF-β treatment in these cells. An apoptosis-independent cellular response to GADD45β induction was recently shown, where tumor necrosis factor α signaling through NF-κB induced gadd45β transcription to prevent c-Jun N-terminal kinase activation and cell death (33, 34). Further, several research laboratories have been successful in generating gadd45β overexpression systems and have not observed cell death (35). Future studies utilizing siRNA silencing of gadd45β following transcriptional agonists other than TGF-β will be invaluable in determining the functional consequences of GADD45β expression. | 2019-03-22T16:13:30.364Z | 2004-02-13T00:00:00.000 | {
"year": 2004,
"sha1": "f5b4183b41d61c8240fdb6f3d240e54ffe9973ae",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/279/7/5278.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "68ad87e5695be50362af3db255fe3957cdcfb5e5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
134287700 | pes2o/s2orc | v3-fos-license | New evidence for the Palaeolithic in Attica, Greece
Despite Greece’s key geographic position between southeast Europe and southwest Asia, and its potential for documenting hominin dispersals, Lower and Middle Palaeolithic sites are rare. This suggests the need for research to identify deposits that may contain Palaeolithic artefacts. Here we describe 165 quartz and quartzite artefacts with Palaeolithic characteristics (based on technical and morphotypological definitions) from a private collection that was made from erosional lag deposits on the southeastern slopes of Mt. Pendeli and the northern edge of the Spata polje (a large karstic depression filled with terra rossas) in northeast Attica. Artefacts of the same type occur in the region of Ano Souli, another karstic depression. These karstic depressions are of interest because they resemble artefact-bearing deposits found at similar features such as Kokkinopilos in Epirus that have provided datable geologic contexts for Lower and Middle Palaeolithic artefacts. Our study suggests that Attica was frequented by hominins in the Lower and Middle Palaeolithic and that Pleistocene deposits in karstic depressions in Attica may preserve datable contexts for documenting early human activity. The lithic collection described here provides a glimpse of the potential of the region, and we recommend continued archaeological efforts in Attica to investigate the likelihood for buried Palaeolithic sites.
Introduction
Lower and Middle Palaeolithic sites in Greece are rare, particularly in the Middle Pleistocene. Renewed efforts over the last decade have begun to shed light on this time period (Tourloukis & Harvati 2018), yet despite growing evidence for the importance of the Aegean Basin as a dispersal route for early hominins (Runnels 2014; Tourloukis & Karkanas 2012), lithic assemblages associated with the Lower and Middle Palaeolithic have yet to be documented in Attica. Avocational archaeologist Evangelos Sachperoglou collected stone tools of Palaeolithic type from erosional lag deposits of rocks chiefly on the southeastern slopes of Mt. Pendeli and the northern edges of the Spata region of eastern Attica in 2015-2016 (Figure 1). These lag deposits are not sites in the sense of exposures of stratified deposits, geologic outcrops, or places of residence in the past, but resulted from downslope erosion. Despite this problem, several depositional settings were noted by our team to have the potential to contain such deposits. The collection was donated to the Ephorate [Directorate] of Palaeoanthropology and Speleology in Athens, and although these materials are not from archaeological sites per se, they indicate that sites or geologic outcrops may still be found in Attica. For this reason, coupled with the fact that Palaeolithic stone tools have not yet been documented in the region, we were granted permission by the Directorate of Prehistoric and Classical Antiquities of the Hellenic Ministry of Culture and Sports to evaluate and publish the finds to illustrate the potential for further Palaeolithic research in Attica. Unfortunately, most of the collection consists of natural stones, but we were able to identify 165 artefacts where the completeness of the artefact preserved recognizable technotypological characteristics. Notably, some of the stone tools were collected in the area of Ano Souli, a karstic depression (polje) containing Pleistocene terra rossas and palaeosols similar to the artefact-bearing deposits found at karstic depressions in Epirus, like Kokkinopilos (van Andel & Runnels 2005; Runnels & van Andel 2003; Tourloukis et al. 2015). Our study suggests that Lower and Middle Palaeolithic sites may also exist in northeast Attica, which, if confirmed by future research, would have implications for the migration of hominins in this region (see below).
Methods
The sample of artefacts for analysis was selected from about seven hundred pieces in the collection, the majority of which, however, were either natural stones or fragmentary and undiagnostic pieces of debitage. The sample selected for study comprises definite artefacts that are sufficiently complete to allow for the identification of their technomorphological features. Quartzite was used as a raw material, although the majority of the artefacts were made on vein quartz, which has been used for stone tool-making around the world, especially for thicker artefacts where specific methods of knapping were used to overcome the refractory tendency of quartz for fragmentation (Manninen 2016). The reflective surface of quartz makes it difficult to see flake removals, especially in direct incandescent or fluorescent light; therefore we used raking natural diffused light to discriminate flake removals from other surface features. For illustrating quartz artefacts we suggest the use of photorealistic models via photogrammetry (Figure 2) produced with software such as AgiSoft Photoscan Pro, with the application of shaders in MeshLab to create 3D images to aid in interpreting patterns of flake removals (e.g., Magnani 2014; Vergne et al. 2010). Another added benefit of 3D scans is the ability to use 3D printers to manufacture copies for direct study (Olson et al. 2014). The classification of the quartz objects as artefacts was based on widely accepted criteria for distinguishing knapped tools (Shea 2013: table 3.2), requiring the presence of extensive symmetrical flake scarring, removal of most of the cortex, the presence of negative bulbs of percussion along multiple edges, and patterns or series of parallel or sub-parallel removals as evidence that the knapping was intended to shape the outline of the artefact.
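As an outline of how such a photogrammetry pipeline can be scripted, the sketch below assumes the Python API of Agisoft Metashape Pro (the current name of PhotoScan Pro); method availability and default parameters vary between versions, and the file paths are hypothetical, so this should be read as a sketch of the general photo-to-mesh workflow rather than the procedure actually used for Figure 2.

```python
# Outline of a photo-to-mesh photogrammetry pipeline for a quartz artefact.
# Assumes the Agisoft Metashape Pro Python module (formerly PhotoScan Pro);
# method names should be checked against the installed version's reference.
import glob
import Metashape  # available inside Metashape Pro's bundled Python

doc = Metashape.Document()
chunk = doc.addChunk()

# Photos taken under raking diffused light, as recommended above for quartz.
chunk.addPhotos(glob.glob("artefact_042/*.jpg"))  # hypothetical path

chunk.matchPhotos()        # detect and match feature points
chunk.alignCameras()       # recover camera poses and a sparse cloud
chunk.buildDepthMaps()     # per-image depth estimation
chunk.buildModel()         # fuse depth maps into a triangle mesh
chunk.buildUV()
chunk.buildTexture()       # photorealistic texture for reading flake scars

# Export for shader-based inspection (e.g., in MeshLab) or 3D printing.
chunk.exportModel("artefact_042.ply")  # hypothetical output name
doc.save("artefact_042.psx")
```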
Results
Table 1 details the observed technotypological designations for the artefacts. Possible Lower Palaeolithic morphotypes were knapped by direct hard-hammer percussion and include pebble cores (chopping tools) and large cutting tools such as handaxes, cleavers, and picks. A massive scraper was also noted. Pebble cores have bifacial flaking defining one sinuous edge opposite an unworked or minimally-worked butt. The handaxes are large (>10 cm) core tools with two relatively straight edges converging to a distal tip; the handaxe subtypes are ovate to subtriangular in outline (Figure 3). Two handaxes were made on side-struck flakes, and seven were made on flakes where the axis of flaking is unknown. Other bifaces (cleavers, picks) were made on cobbles (where the original form can be recognized). Possible Middle Palaeolithic morphotypes include scrapers (end, déjeté, single, and double sided), flakes from preferential cores, large blades (>8 cm) from preferential cores, and small cordiform handaxes (Figure 4). Possible Upper Palaeolithic morphotypes are end scrapers, burins, becs, and a perçoir. The artefacts are patinated and have reddish brown staining or red clay deposits on their surfaces. It is noteworthy that the patinas, weathering, staining, and deposits cover the flake removals, an indication that the tools are as old as or older than the erosional lag deposits where they were found. The red staining and red clay may point to an original deposition in terra rossa. The artefacts also have unabraded edges suggesting minimal transport by low-energy processes before their deposition within lag deposits, suggesting that any sites that remain are likely to be nearby.
The Collection in Context
In the absence of stratigraphic contexts, geologic associations, or radiometric dating, these lithic artefacts can be used for only one purpose: to suggest that Pleistocene deposits in this region, particularly large karstic depressions (poljes) filled with terra rossas and palaeosols such as those occurring at Ano Souli and Spata near the Venizelos International Airport (Figure 5), may preserve Palaeolithic remains. The red staining and clay on the artefacts certainly suggest that they were derived from such deposits, which are often associated with springs, wetlands, or seasonal lakes, where the sediments are rich in aeolian sand and can potentially be dated by luminescence dating, such as optically stimulated luminescence (OSL), infrared stimulated luminescence (IRSL), or post-IRSL, by other radiometric methods such as uranium-series, and possibly by horizons of aeolian volcanic ash from Ischian eruptions (van Andel & Runnels 2005; Runnels & van Andel 2003; Tourloukis & Karkanas 2012). In other areas in Greece, such as Epirus, these types of deposits have been dated to as early as 220 ka using IRSL (e.g., the site of Kokkinopilos; Tourloukis et al. 2015).
Conclusions
Our study suggests that Attica was frequented by hominins as early as the Middle Pleistocene and that geologic contexts in the region, particularly karstic depressions, may preserve datable associations with Palaeolithic artefacts. These data suggest that Attica was potentially a corridor at times in the Pleistocene leading to and from the Greek islands, including Crete, and should not be neglected in any study of early hominin dispersals. We call for future archaeological efforts in the region. We also suggest that research aimed at identifying archaeological sites and the assessment of the palaeogeography must consider the wider territory, incorporating the adjacent coastal lowlands and nearby islands. Underwater geoarchaeology, including seabed mapping, will no doubt add valuable data to this discussion and allow for a broader interpretation of Lower and Middle Palaeolithic archaeology in Attica. | 2019-04-27T13:13:28.462Z | 2018-03-15T00:00:00.000 | {
"year": 2018,
"sha1": "f39043f48e3df2536bdf385368a571bbfce75e81",
"oa_license": "CCBY",
"oa_url": "http://journals.ed.ac.uk/lithicstudies/article/download/2665/3977",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a2cae93fbd85c840d628cb7c70caadccdddd70b0",
"s2fieldsofstudy": [
"Geology",
"History"
],
"extfieldsofstudy": [
"Geography"
]
} |
209492215 | pes2o/s2orc | v3-fos-license | Revealing interactions of layered polymeric materials at solid-liquid interface for building solvent compatibility charts for 3D printing applications
Poor stability of 3D printed plastic objects in a number of solvents limits several important applications in engineering, chemistry and biology. Due to the layered type of assembly, 3D-printed surfaces possess rather different properties compared to bulk surfaces made by other methods. Here we study fundamental interactions at the solid-liquid interface and evaluate polymeric materials towards advanced additive manufacturing. A simple and universal stability test was developed for 3D printed parts and applied to a variety of thermoplastics. Specific modes of resistance/destruction were described for different plastics, and their compatibility with a representative scope of solvents (aqueous and organic) was evaluated. Classification and characterization of destruction modes for a wide range of conditions (including geometry and 3D printing parameters) were carried out. Key factors of tolerance to solvent media were investigated by electron microscopy. We show that the overall stability and the mode of destruction depend on the chemical properties of the polymer and the nature of interactions at the solid-liquid interface. Importantly, stability also depends on the layered microstructure of the sample, which is defined by 3D printing parameters. The developed solvent compatibility charts for a wide range of polymeric materials (ABS, PLA, PLA-Cu, PETG, SBS, Ceramo, HIPS, Primalloy, Photoresin, Nylon, Nylon-C, POM, PE, PP) and solvents represent an important benchmark for practical applications.
Supplementary_Movie_S1.mp4
This video illustrates the effect of the extrusion multiplier on the stability of FDM parts made of PLA in dichloromethane. For comparison, the effect of the solvent on an extruded part is shown. All parts have the same dimensions and similar weight. Three FDM parts were manufactured with different extrusion multiplier values: 0.8, 0.9, and 1.0. That is, this experiment demonstrates the stability of FDM parts manufactured with the extrusion multipliers most commonly used in FDM printing practice. A brass cylinder was used as an indicator. The video is a time-lapse shot taken at 6-second intervals over 1.0 hour. During the shooting, 600 individual frames were obtained, which were then combined into this video at a frame rate of 25 fps.
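For readers wishing to reproduce this kind of assembly, the frame count follows directly from the capture settings (3600 s / 6 s = 600 frames), and the frames can be joined at 25 fps with a standard tool such as ffmpeg. The sketch below is a generic, hypothetical example: the frame naming scheme and output file name are placeholders, not the authors' actual files.

```python
# Reassemble time-lapse frames into a video, mirroring the settings described
# above: 6 s capture interval over 1.0 h -> 600 frames, played back at 25 fps.
# File names are hypothetical; requires ffmpeg on the PATH.
import subprocess

interval_s = 6
duration_s = 60 * 60
fps = 25

n_frames = duration_s // interval_s
print(f"expected frames: {n_frames}")              # 600
print(f"playback length: {n_frames / fps:.0f} s")  # 24 s of video

subprocess.run(
    [
        "ffmpeg",
        "-framerate", str(fps),    # interpret input sequence at 25 fps
        "-i", "frame_%04d.jpg",    # hypothetical frame naming scheme
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",     # broad player compatibility
        "movie_s1.mp4",            # hypothetical output name
    ],
    check=True,
)
```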
Supplementary_Movie_S2.mp4
This video also demonstrates the effect of the extrusion multiplier on the stability of FDM parts made of PLA in DCM media. Unlike Movie S1, this video shows parts made with increased extrusion multipliers: 1.1, 1.2, and 1.3. An extruded part is included for comparison. All parts have almost the same dimensions and weights. The indicator is a brass cylinder. The video is a time-lapse shot taken at 6-second intervals over 1.0 hour. During the shooting, 600 individual frames were obtained, which were then combined into this video at a 25 fps frame rate.
Supplementary_Movie_S3.mp4
This video shows various types of degradation of FDM products made from various materials. Disintegration of the FDM parts is shown by the example of a PLA material filled with copper particles in a methylene chloride medium. Delamination is displayed using PLA filled with copper particles in acetone. True dissolution is characteristic of SBS in methylene chloride, and swelling is shown by the example of Primalloy material in toluene. All destruction modes presented in this video were shot using the time-lapse method of photoshooting. The time-lapse interval was 3 seconds; the total duration of the shooting was 80 minutes. The resulting 1600 frames were combined into a video with a 25 fps frame rate.
Supplementary_Movie_S4.mp4
This video shows the absence of any influence of the Archimedes (buoyancy) force on the destruction time of FDM parts when indicator beads of different masses are used. In the experiment, steel beads and glass beads of the same size and FDM parts made of PLA were used. The experiment was performed in methylene chloride media. Three different volumes of solvent were used for each type of bead: a small volume into which the smaller part of the bead was immersed; an average volume into which most of the bead was immersed; and a large volume into which the entire bead was immersed. In all cases, the destruction of the part occurs at about the same time. The video was obtained as a result of time-lapse shooting at 6-second intervals over 2.5 hours. The resulting 1500 frames were combined into a video with a 25 fps frame rate.
Supplementary_Movie_S5.mp4
This video shows the absence of any influence of the shape of the indicator loading on the dynamics of the destruction of FDM parts. The FDM parts are made of PLA. The tests were carried out in methylene chloride media. A steel bead and a brass cylinder were used as indicator loadings. Both loads have the same weight. The loss of part integrity in experiments with different loads occurs almost simultaneously. The video is the result of time-lapse shooting taken at 6-second intervals over 1.5 hours. The resulting 900 frames were combined into a video with a 25 fps frame rate.
Table S1. FDM parameters used in this study for a set of materials.
14. Experiments in water media
Figure S41. Snapshot of the experiment in water at the beginning.
Figure S42. Snapshot of the experiment in water after 1 h.
Figure S43. Snapshot of the experiment in water after 20 h.
Table S2. Qualitative analysis of stability of FDM parts made of different materials in organic and inorganic liquid media: (•) material is stable during experimental time, i.e. the shape of the part does not change and dissolution of the outer layers of material does not occur; (•) material is not stable during experimental time: change of shape is observed, and dissolution (DS), disintegration (DI), or/and delamination (DL) of the part occur; (•) material is moderately stable during experimental time: its swelling (SW) or slight dissolution of the outer layers is observed, while the shape of the part does not change. | 2019-12-28T15:04:38.740Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "11ecf0313bfeae51fb7215c09ccce335b7d4bbfe",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-56350-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "162b1fb022bc26177f7d2e3c5a98921b6b4f85aa",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
221340040 | pes2o/s2orc | v3-fos-license | Contributions Vo2max on the Dribbling Agility of the Football Club Players
The method used in this study is quantitative with a correlational approach, which aims to assess the strength or closeness of the relationship between variables without giving treatment. From the normality test on VO2max, Lo = 0.1772 < Lt = 0.1808, so the data are normal; for the dribbling agility test, Lo = 0.1538 < Lt = 0.1808, so the data are normal. Hypothesis testing by statistical analysis using simple product-moment correlation analysis obtained r count = 0.62642 > r table = 0.404, meaning that there is a significant relationship between VO2max and the dribbling agility of SSB Merpati Hiang soccer players. It can be concluded that VO2max contributes to dribbling ability.
I. INTRODUCTION
One of the efforts to improve the quality of Indonesian human resources is through sports. This is in accordance with the national sports objectives listed in the legislation concerning the national sports system, which read: "Maintaining and improving health and fitness, achievements, human qualities, instilling moral values and noble morals, sportsmanship, discipline, strengthening and fostering national unity, strengthening national resilience, and raising the nation's dignity and honor" [1].
Based on the aforementioned elements, there can ultimately be an increase in sports achievements that arouses regional and national pride and strengthens national resilience in general. Therefore, the development and fostering of sports need to receive proper attention through systematic planning and implementation in regional and national development.
From the description above it appears that, among the various goals and objectives of Indonesian sports activities, one is fostering achievement. This means that sports activities in Indonesia are not only for physical fitness or recreation, but must also be directed towards improving sports achievements in order to raise the nation's name in the international arena.
High sports achievement can be reached through fostering talented athletes evenly throughout the country, because fostering athletes' achievement in a sustained, programmed, and integrated manner will produce athletes who excel. An athlete's achievement is a source of pride not only for the athlete but also for family, community, and country.
In Indonesia, soccer is one of the many sports being developed; this development is marked by the birth of associations or clubs and soccer schools (SSB) in various regions of the country, not only in cities but spreading to the villages. The game of football can therefore now be regarded as a people's sport, and along with this development, seeds of soccer players for the future will emerge.
In addition, a soccer player must have good mental quality: when provoked by opponents, the player must be able to hold his emotions, and a soccer player must have high motivation to become a great player. If the above qualities are combined with good game tactics, then a soccer player will be able to carry out the idea of a good soccer game in an effort to win and achieve high performance.
To be able to do this, a soccer player must have good technical, physical, tactical, and mental components; as stated in [2], "the four components, namely the physical condition component, the technical skill component, the tactic component and the mental component, are needed by every athlete in both individual and team sports to reach the top achievements of a sport".
The nature and situation of the game demand agility and skill to outperform opponents, and running throughout the game, speed, agility, and kicking power must be supported by elements of physical condition, especially a prime VO2max. A player who has good agility will be able to adjust to the ever-changing movements of the ball. When the player loses the ball, then with this ability and agility it is more possible for him to get the ball back, of course with effort and hard training.
Regarding the components of physical condition, VO2max is one of the components that is very important in soccer. This is clearly necessary because, as we know, a soccer game lasts for 90 minutes. This means that soccer players must be able to last the full 90 minutes to be able to follow the game.
Thus, in basic technique training, especially dribbling, the elements of agility and VO2max should receive special attention, because the exercise is an overall movement that activates the ankles and hips as well as the endurance of the player. Agility in dribbling the ball allows the player to save energy.
At present, in the Jambi area, and especially in Kerinci Regency, the development of football is very rapid. This is proven by the number of clubs and football schools (SSB) that have appeared, including IPPOS, PSTK, PSPT, PS Hiang Karya, Hiang Sakti FC, and others. From this it appears that in Kerinci Regency there are already many clubs and football schools run in an organized manner; as the end result of the formation of each club, they are expected to produce quality football players who can support good performance in their clubs and may eventually represent Indonesia in the international arena.
SSB Merpati Hiang is one of the soccer schools in Kerinci Regency under PSSI. SSB Merpati Hiang was founded in 1998. Currently, the Merpati Hiang football school is chaired by Reki Asdeni, with Kasmir as secretary, and is trained by Muhammad Amin and Heri Kiswanto, S.Pd. The SSB Merpati Hiang squad consists of a combination of junior and senior players aged 20-24 years, totaling 24 players. The presence of SSB Merpati Hiang is also expected to produce players who will later bring pride to Merpati Hiang's name, both in Kerinci Regency itself and on the national stage.
However, the fact is that in the last few years SSB Merpati Hiang has produced few achievements. The achievements of SSB Merpati Hiang soccer players today are very different from those produced a few years ago. From observations made by the authors in the field, the low achievement of SSB Merpati Hiang soccer players is caused by many factors, including mastery of technique, physical condition, tactics/strategy and mental factors, as well as the players' VO2max: many players are not able to play until the final whistle is sounded.
Considering the importance of dribbling agility in a soccer game, the factors that can affect it, including the player's VO2max, need to be investigated. For this reason, the researchers are very interested in conducting a study entitled "The Contribution of VO2Max to the Agility of Football Player Dribbling in Merpati Hiang, Sitinjau District, Kerinci Regency."
II. RESEARCH METHODOLOGY
Based on the problems to be discussed, "the research conducted is quantitative with a correlational approach" [3], which aims to assess the strength or closeness of the relationship between variables: the independent variable VO2max and the dependent variable dribbling agility of SSB Merpati Hiang soccer players. Thus, this study will reveal how much VO2max contributes to the dribbling agility of SSB soccer players.
The research was conducted on the soccer field of SSB Merpati Hiang, Sitinjau Laut Subdistrict, Kerinci Regency, over approximately 2 weeks in February and March 2016, starting on February 2, 2016 from 15:00 WIB until completion. The population consisted of the SSB Merpati Hiang soccer players who were still actively participating in training and registered as SSB Merpati Hiang soccer players in 2015, amounting to 24 people consisting of a combination of senior and junior players aged between 20 and 24 years. The sampling technique in this study was total sampling, in which the whole population was sampled, giving 24 samples.
To obtain data on VO2max endurance and the dribbling agility of SSB Merpati Hiang soccer players, the instruments used in this study were the Multistage Fitness Test (MFT, or Bleep Test) and a zig-zag dribbling test.
III. RESULTS
From the results of the VO2max (Bleep Test) measurements conducted on 24 SSB Merpati Hiang football players, the highest score was 43.6 and the lowest score was 27.2, while the range (measurement distance) was 16.4. Based on the grouped data, the mean is 33.21 and the standard deviation is 5.019. Of the 24 SSB Merpati Hiang football players, 9 (37.50%) fell in the VO2max class interval 27.2-30.48, 8 (33.33%) in the interval 30.49-33.77, and 2 (8.33%) in the interval 33.78-37.06, while 1 (4.17%) fell in the interval 37.07-40.35 and 4 (16.67%) in the interval 40.36-43.64.
From the results of the dribbling agility test conducted on the 24 SSB Merpati Hiang football players, the highest score was 30.8 and the lowest score was 23.89, while the range (measurement distance) was 6.91. Based on the grouped data, the mean is 26.66 and the standard deviation is 1.931. Of the 24 soccer players, 7 (29%) fell in the dribbling agility class interval 23.89-25.27, 7 (29%) in the interval 25.28-26.66, and 5 (21%) in the interval 26.67-28.06, while 2 (8%) fell in the interval 28.07-29.45 and 3 (13%) in the interval 29.46-30.84.
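The class intervals reported in both distributions follow the usual grouped-frequency construction: with five classes, the class width is the range divided by the number of classes (16.4/5 = 3.28 for VO2max; 6.91/5 ≈ 1.38 for dribbling). A minimal sketch of this construction is given below; the function and variable names are illustrative choices of our own, not the authors'.

```python
# Reconstruct the five grouped-frequency classes used above from the reported
# minimum, maximum, and class count; names are illustrative only.
def class_intervals(lo: float, hi: float, k: int):
    width = (hi - lo) / k
    return [(round(lo + i * width, 2), round(lo + (i + 1) * width, 2))
            for i in range(k)]

# Dribbling agility data: min 23.89, max 30.8, range 6.91, 5 classes.
for a, b in class_intervals(23.89, 30.80, 5):
    print(f"{a} - {b}")
# -> 23.89-25.27, 25.27-26.65, ... matching the paper's intervals to within
#    0.01-0.02, the difference coming from the paper's boundary rounding.
```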
The results of the correlation analysis between VO2max (X) and dribbling agility (Y) of the SSB Merpati Hiang soccer players, Sitinjau Laut Subdistrict, Kerinci Regency, give r count = 0.62642 > r table = 0.404, meaning that there is a significant (meaningful) relationship between VO2max and the dribbling agility of SSB Merpati Hiang soccer players.
IV. DISCUSSION
The hypothesis proposed in this study is that VO2max contributes to the dribbling agility of SSB Merpati Hiang football players. Based on the results of the data analysis, the claim that VO2max makes a significant contribution to dribbling agility is empirically supported. Furthermore, VO2max contributed 39.24% to dribbling agility in soccer. This means that the better the VO2max, the better the dribbling agility of SSB Merpati Hiang football players.
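The 39.24% figure is the coefficient of determination, obtained by squaring the correlation coefficient: 0.62642² ≈ 0.3924, i.e., about 39.24% of the variance in dribbling agility is accounted for by VO2max. A minimal sketch of the computation follows; the data arrays are hypothetical placeholders for the 24 measured pairs, and the critical value 0.404 is the r table figure quoted above.

```python
# Pearson correlation and the "contribution" (coefficient of determination)
# reported above. The data arrays are hypothetical placeholders; substitute
# the 24 measured VO2max / dribbling-time pairs.
from scipy.stats import pearsonr

vo2max = [43.6, 27.2, 33.2]      # placeholder values; n should be 24
dribble = [23.89, 30.8, 26.7]    # placeholder values

r, p_value = pearsonr(vo2max, dribble)

r_table = 0.404                  # 5% critical value for n = 24 (from paper)
significant = abs(r) > r_table

contribution = r ** 2            # coefficient of determination
print(f"r = {r:.5f}, significant: {significant}")
print(f"contribution = {contribution:.4f}")  # 0.62642**2 = 0.3924 -> 39.24%
```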
From the description above, it is clear that VO2max makes quite a large contribution to the dribbling agility of SSB Merpati Hiang soccer players. It is expected that a good VO2max can improve the quality of an athlete's performance. A good VO2max can also improve the physical fitness and physical condition of athletes so that they can last longer in a match. If the players' VO2max is poor, their physical fitness decreases, so the players cannot last long enough in the match. This can affect the tempo of agility in dribbling; according to Stalin, "the greater the aerobic capacity, the greater the ability of a person to carry a heavy workload, and the more quickly physical fitness will recover after the heavy work is completed, because the maximal oxygen volume is one of the important factors supporting athlete achievement" [4]. The importance of VO2max for dribbling agility is also seen when carrying or moving the ball from one place to another with the ball rolling on the field: without a good VO2max, a player could hardly pass an opponent with good dribbling agility, or change the direction of movement to approach the opponent's goal area with the ball still well controlled.
V. CONCLUSION
Based on the results of the research described in the previous chapter, it can be concluded that the sport of soccer really needs VO2max, because in this sport a lot of motion activities are carried out continuously in a long | 2020-08-20T10:01:52.046Z | 2020-08-06T00:00:00.000 | {
"year": 2020,
"sha1": "4fb77c40b19448a4b5970be143bc8e90f5d804f2",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125943025.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e98a4c6bbd6892b2ece2fe162af1ce9d9a762ee7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
4659106 | pes2o/s2orc | v3-fos-license | Near-drowning-associated pneumonia with bacteremia caused by coinfection with methicillin-susceptible Staphylococcus aureus and Edwardsiella tarda in a healthy white man: a case report
Background Edwardsiella tarda is a member of the Enterobacteriaceae family found in aquatic environments. Extraintestinal infections caused by Edwardsiella tarda in humans are rare and occur in the presence of certain risk factors. As far as we know, this is the first case of near-drowning-associated pneumonia with bacteremia caused by coinfection with methicillin-susceptible Staphylococcus aureus and Edwardsiella tarda in a healthy patient. Case presentation A 27-year-old previously healthy white man had an episode of fresh water drowning after acute alcohol consumption. Edwardsiella tarda and methicillin-sensitive Staphylococcus aureus were isolated in both tracheal aspirate cultures and blood cultures. Conclusion This case shows that Edwardsiella tarda is an important pathogen in near drowning even in healthy individuals, and not only in the presence of risk factors, as previously thought.
Background
The World Health Organization defines drowning as "the process of experiencing respiratory impairment from submersion/immersion in liquid" [1], emphasizing the importance of respiratory system damage in drowning pathophysiology, complications, and prognosis. More than 500,000 people die each year due to unintentional drowning [2]. According to the Centers for Disease Control and Prevention, drowning was the tenth leading cause of injury-related death in the USA from 1999 to 2010 [3]. Approximately 50 % of drowning victims are under 20 years old [4]. In developing countries this incidence is even greater [5].
Lung infections are one of the most serious complications occurring in victims of drowning [6]. They may represent a diagnostic challenge as the presence of water in the lungs hinders the interpretation of radiographic images [5]. Both fungi and bacteria have been reported as etiological agents of after-drowning pulmonary infections [6]. Aerobic Gram-negative bacteria are the most frequently implicated in these infections [6].
Edwardsiella tarda is a facultative anaerobic, flagellated, Gram-negative bacillus and a member of the Enterobacteriaceae family found in aquatic environments [7]. This bacterium predominantly causes gastroenteritis. The main risk factors for extraintestinal infections are hepatobiliary diseases, iron overload syndromes, cancer, immunosuppression, and diabetes mellitus [8,9].
As far as we know, the case about to be presented is the first documented episode of near-drowning-associated pneumonia with bacteremia caused by coinfection with methicillin-susceptible Staphylococcus aureus and E. tarda in a healthy patient. These data could motivate a different approach to antibiotic use for sepsis related to a neardrowning episode.
Case presentation
A 27-year-old previously healthy white man had an episode of fresh water drowning after acute alcohol consumption. Friends quickly removed him from the water. A rescue team was activated and identified cardiopulmonary arrest in a non-shockable rhythm. Oral intubation was quickly performed. Neither stool reflux/vomiting nor aspiration was reported by the team. After two cycles of cardiopulmonary resuscitation (for about 4 minutes) and orotracheal intubation, return of spontaneous circulation occurred. During transportation bradycardia was reported, which reverted after one dose of atropine.
He was admitted to the emergency room of a tertiary academic hospital. On examination he was hemodynamically stable and comatose, with a Glasgow Coma Scale (GCS) score of 3 and nonreactive pupils. There were no other relevant physical findings on arrival. He was placed on mechanical ventilation and transferred to the intensive care unit (ICU).
A few hours after admission to the ICU he presented decreased consciousness level (GCS 4), hypotension, and signs of poor peripheral perfusion. A blood gas analysis showed hypoxemia with respiratory acidosis. He underwent hypothermia for neuroprotection after cardiac arrest, received protective ventilation for acute respiratory distress syndrome (ARDS), and vasoactive drugs (norepinephrine plus epinephrine, which were maintained for 24 hours) through right subclavian central venous catheter (postpuncture pneumothorax was drained with a pigtail catheter uneventfully). He developed acute renal failure due to rhabdomyolysis, renal ischemia, and multiple organ failure, requiring hemodialysis for 15 days.
Gram staining of his tracheal aspirate taken 3 days after the accident showed Gram-positive cocci singly and in pairs, and frequent Gram-negative bacilli. Tracheal aspirate cultures isolated methicillin-sensitive S. aureus, Enterobacter aerogenes, Aeromonas species, and E. tarda. Blood cultures (first set obtained) isolated methicillin-sensitive S. aureus and E. tarda, which led to the introduction of oxacillin and ceftriaxone on the sixth day of hospitalization. Five more sets of blood cultures were performed after the introduction of the antibiotics; all were negative. Computed tomography performed on the 11th day of hospitalization showed bilateral pleural effusion, and multiple pulmonary consolidations and cavities with thickened walls and air-fluid levels, consistent with lung abscesses (Figs. 1 and 2).
Twenty days after the ICU admission, he was transferred to the regular infirmary ward where the ongoing clinical, laboratory, and radiological improvement continued. On the 45th day of hospitalization, he was discharged home for out-patient monitoring with prescription of ciprofloxacin and clindamycin to be taken orally. He returned for follow-up consultation 14 days after taking the antibiotics. He reported no symptoms since the hospital discharge.
Discussion
Although most victims of near drowning are previously healthy, the morbidity and mortality associated with these events are high, mainly due to pulmonary and neurological complications associated with tissue damage by hypoxia, acidosis, and hypoperfusion [6]. After submersion, the victim's conscious response leads to a period of voluntary apnea, which stimulates the respiratory drive, leading to involuntary aspiration [5]. Aspirated water, in contact with the alveoli, leads to surfactant dysfunction and an increase in alveolar-capillary membrane permeability, causing extensive pulmonary edema, atelectasis, and bronchospasm [10]. The combined effects of alveolar damage, inoculation of contaminated material into the airways, and the frequent need for mechanical ventilation respiratory support result in an up to 12 % risk of after-drowning pneumonia [11]. This risk may vary according to the volume aspirated, the degree of water contamination and its temperature, as well as the occurrence of aspiration of gastric content [6]. When admitted to an ICU, drowning victims should be managed following ARDS guidelines [5].
Lung infections are one of the most serious complications occurring in victims of drowning [6]. They may represent a diagnostic challenge, as the presence of water in the lungs hinders the interpretation of radiographic images [5]. However, prophylactic antimicrobial therapy is not recommended due to the potential selection of resistant bacteria [12]. Both fungi and bacteria have been reported as etiological agents of after-drowning pulmonary infections [6]. Aerobic Gram-negative bacteria are the most frequently implicated in these infections, among which stand out Aeromonas species (in particular, Aeromonas hydrophila), Burkholderia pseudomallei, and Chromobacterium violaceum [6]. Gram-positive cocci such as S. aureus and Streptococcus pneumoniae and some Enterobacteriaceae are also reported as etiological agents of pneumonia, although it is often difficult to distinguish whether the infection was due to drowning or was nosocomial [6].
E. tarda is a facultative anaerobic, flagellated, Gram-negative bacillus and a member of the Enterobacteriaceae family found in aquatic environments [7]. Pathogenicity in humans, although rare, has been demonstrated predominantly in gastroenteritis, which represents more than 80 % of the infections by this agent [7-13]. Nonetheless, there are also reports of extraintestinal infections such as cellulitis and cutaneous abscesses, meningitis, endocarditis, osteomyelitis, liver abscess, tubo-ovarian and peritoneal abscess, as well as bacteremia and sepsis [8, 9, 11-14]. There are no reports of pneumonia cases in immunocompetent patients so far. In the present case, only blood and tracheal aspirate cultures were performed; an endotoxin test was not available at the hospital. Nonetheless, the endotoxin is of secondary pathogenic importance compared with that in infections caused by Salmonella, Shigella and Yersinia [15].
The most important risk factor for E. tarda infection is exposure to aquatic environments [13], and the main risk factors for extraintestinal infections are hepatobiliary diseases, iron overload syndromes, cancer, immunosuppression, and diabetes mellitus [8,9].
Conclusions
This is the first report of near-drowning-associated pneumonia with bacteremia caused by coinfection with methicillin-susceptible S. aureus and E. tarda in a patient without comorbidities, documented by isolation of the bacteria from blood cultures and tracheal aspirate cultures. The only reported case of pneumonia caused by E. tarda (isolated only in sputum) occurred in a patient hospitalized for diabetic ketoacidosis, with no history of drowning [9]. There are no reports of pneumonia caused by E. tarda in a patient without previous medical history, nor reports of E. tarda bacteremia from pulmonary infection. The capacity of E. tarda to form abscesses in other parts of the body, such as skin, ovaries, and liver, has already been well documented [8,9,14]. This may suggest its involvement in the formation of the extensive lung abscesses in this case in association with S. aureus, although there are also no reports of such a clinical presentation.
This case widens the spectrum of extraintestinal presentations of E. tarda infection to include bacteremia from lung infection. Thus, the monitoring of drowning victims for pulmonary infection should be thorough and should always include sputum cultures to allow detection of waterborne bacteria which, although rarely isolated, can cause highly lethal infections.
Abbreviations ARDS, acute respiratory distress syndrome; GCS, Glasgow Coma Scale; ICU, intensive care unit | 2018-04-03T05:13:47.557Z | 2016-07-16T00:00:00.000 | {
"year": 2016,
"sha1": "19f8ddd1cd459335f6ac75b547eab08f1328bf46",
"oa_license": "CCBY",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/s13256-016-0975-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19f8ddd1cd459335f6ac75b547eab08f1328bf46",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
428700 | pes2o/s2orc | v3-fos-license | Indian Journal of Orthopaedics: Journey continues
The ideal strategy to effect relief to a suffering patient is the one which is predictive, cost effective, and based on scientific principles. Scientific journals play a big role in evolving treatment strategies for clinical problems. Peer reviewed opinions are without a clinician's bias and conflict of interest, and are also the basis of evidence based medicine (EBM) to provide the best available rationale to alleviate pain. The best evidence can be generated when methodically conducted studies are published in peer-reviewed journals, are available to orthopedic surgeons far and wide, and are a part of meta-analysis to produce validated evidence.1 The journals also have the responsibility to spread knowledge and evidence, generate curiosity among readers to upgrade their knowledge, as well as inculcate the art of critical analysis of the published data and the rational use of evidence.2 The editorial team is the backbone of any journal and association. The current editorial team has completed 6 years in office. It is time we analyze our achievements and make projections and plan for the next 3 years.
The ideal strategy to effect relief to a suffering patient is the one which is predictive, cost effective, and based on scientific principles. Scientific journals play a big role in evolving treatment strategies for clinical problems. Peer reviewed opinions are without a clinician's bias and conflict of interest, and are also the basis of evidence based medicine (EBM) to provide the best available rationale to alleviate pain. The best evidence can be generated when methodically conducted studies are published in peer-reviewed journals, are available to orthopedic surgeons far and wide, and are a part of meta-analysis to produce validated evidence. 1 The journals also have the responsibility to spread knowledge and evidence, generate curiosity among readers to upgrade their knowledge, as well as inculcate the art of critical analysis of the published data and the rational use of evidence. 2 The editorial team is the backbone of any journal and association. The current editorial team has completed 6 years in office. It is time we analyze our achievements and make projections and plan for the next 3 years.
The Indian Journal of Orthopaedics (IJO) has taken tremendous strides from 2007 to 2012. We are now indexed with PubMed and Science Citation Index Expanded [Figure 1]. Our website has fairly advanced features. IJO now has good visibility. Our impact factor is 0.503 this year, which is 76% more than that of last year. This is truly the first impact factor and is encouraging.
For any journal of repute, we need consistency and regularity in publication, manuscript quality, excellent visibility, and a constant flow of finances. IJO is now in a sound financial state. The funds are provided by the IOA, and IJO also generates revenue through advertisements and subscriptions. IJO is provided free of cost to IOA members (over 9000 members), with approximately 300 library subscriptions. India has over 350 medical colleges, around 150 institutions conducting Diplomate of National Board (DNB) courses, and a large number of state-of-the-art institutions. We only had 284 subscriptions last year, which is very low. The IOA members can play an active role in increasing subscriptions by ensuring that IJO is available in their institute library. The orthopedic surgeons of SAARC can be provided IJO at print cost only, provided the postage is borne by the members.
Regularity of Publication
The number of issues was increased from four per year in 2007 to six per year in 2011. Each issue is released on time, with the online version appearing 3 weeks before the print version. For the last 6 years, we have been able to release all issues on time. This year, we published 732 pages of academic content, which is 35% more than last year's [Figure 2], with consistent print quality.
Submission
Manuscript submissions have shown a persistently rising trend during the last 6 years [Figure 3]: a threefold increase from 2007 and approximately 16% over the last year. The ratio of overseas to Indian manuscripts is 37:63 [Figure 4]. We have received manuscripts from 37 countries, with China leading with 88 manuscripts in 2012 [Figure 4]. Our submission-to-decision and submission-to-publication timelines have improved substantially. The mean time from submission to publication is 74.4 days, with a mean of 51.7 days from submission to decision and 22.7 days from decision to publication (51.7 + 22.7 = 74.4) [Figure 5]. The decision depends on the timeliness of review: if one set of reviewers does not respond or submit adequate reviews and a new set of reviewers is allotted, the time increases by 4 weeks. This has been possible because of prompt review/editorial decisions and the efforts of the publishing team.
IJO has tremendous potential to be one of the leading journals in the world. India accounts for one-sixth of the world population, and almost two-thirds of the world's population has a similar clinical profile. India is unique in its diversity: in the metropolises we provide the best of infrastructure and treatment, while no treatment, or grossly inadequate treatment, is available to a huge population in remote rural areas. We have the intellectual know-how to fund and conduct research to evolve innovative, rational solutions to clinical problems that can be executed with limited resources. 3,4 This journal can be the most important knowledge bank for a major part of the world's population. We have to plan the progress, and with collective efforts, it can attain greater heights. Certain small steps that can collectively raise the standard are as follows: 1. Submission - We need to inculcate the scientific temper and conduct need-based research, and ensure that the best of our clinical research is submitted to IJO for publication. It is still observed that our members do not submit their best research to IJO. Publication in IJO has many advantages. a.
It is an open access journal; hence it allows free download to one and all in the world. Our downloads have increased steadily from 2007 to 2012 [Figure 6]. Free download allows articles from IJO to be widely read and cited globally. 5 c. It is also reviewed by members who might be facing similar clinical dilemmas, and the research question and the solution offered are therefore better understood.
d. IJO is available in PubMed and Science Citation Index, hence it gets the same credit rating as any other international journal. e. We have no restriction on printed pages, so no article will ever be rejected for want of pages, and every methodically conducted study will find space in IJO. 2. Peer review a. We have the best talent to review objectively the specific clinical problems of our subcontinent.
Reviewing for the journal also helps reviewers improve their personal writing skills. 6 IOA members should feel honored to review for IJO. It is commonly observed that our members work and review for international journals yet decline to work for IJO; they should devote some of their time to reviewing for IJO as well. 3. Impact factor - The impact factor is a widely used method to select the most valued journals 7 and an indirect reflection of a journal's scientific standard. The impact factor for a particular year is calculated from citations received in that year by articles published during the preceding 2 years (see the formula following this list). Members have to ensure that articles published in IJO are used as references in the articles they publish anywhere, anytime. 4. Thematic issues - We have to make efforts to publish thematic issues. The subspecialties of arthroscopy, arthroplasty, spine, hand, pediatric orthopedics, and trauma should actively contribute quality research to IJO.
The distant dream is that IJO remains the official journal of all subspecialties affiliated with the Indian Orthopaedic Association. Each issue can have one section of General Orthopedics (50%), while the remaining 50% can be devoted to subspecialties. If the need arises, IJO is prepared to increase the number of issues per year.
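As a concrete illustration of the impact factor calculation mentioned above, the standard Journal Citation Reports definition (a well-known formula, not taken from this editorial) for the 2012 figure is:

\mathrm{IF}_{2012} = \frac{\text{citations received in 2012 by articles published in 2010-2011}}{\text{number of citable items published in 2010-2011}}

Every citation of a recent IJO article in any indexed journal therefore raises the numerator directly, which is why citing IJO articles matters for the journal's standing.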
The January 2013 issue is devoted to articles on arthroplasty. This has been possible due to persistent efforts by members of the Indian Society of Hip and Knee Surgeons (ISHKS). The January issue has an annotation sensitizing readers to various issues related to animal experimentation. 8 The review article by Aggarwal, Rasouli, and Parvizi extensively discusses the diagnosis, management, and recent advances in periprosthetic joint infection. 9 Thirteen original articles on TKA are published. Helwig et al. compared various methods of tibial cleaning in cemented TKA in a cadaveric study and found pulsatile jet lavage to be the best method of tibial cleaning. 10 Maniar et al. compared groups of unilateral and bilateral sequential TKA and reported delayed early functional recovery, with late recovery being the same, in bilateral sequential TKA. 11 Dahiya et al. reported good outcomes of cruciate-retaining TKA at midterm follow-up in patients with prior patellectomy. 12 Rajgopal et al. analysed a series of TKA in extraarticular deformities. 13 Vaidya et al. analysed femoral component rotation by computed tomography evaluation. 14 Jain et al. compared series of TKA cases operated by the medial parapatellar approach and the subvastus approach and found the latter superior in terms of pain relief and postoperative mobilization. 15 Sancheti et al. found preoperative range of motion (ROM) and preoperative functional status to be the most important factors affecting ROM in TKA using a high-flexion prosthesis. 16 Maniar et al. reported 93.2% survival at 12.3 years for the low contact stress rotating platform knee. 17 Mohanlal et al. reported no significant difference in blood loss between computer-assisted and conventional TKA. 18 Mohanty et al. reported a positive correlation between the metaphysiodiaphyseal angle (MPA) and posterior tibial slope in Indian patients and believe that MPA is an independent factor affecting the accuracy of extramedullary jigs in TKA. 19 Vaidya et al. reported a series in which blood pressure was significantly reduced owing to increased physical activity following TKA. 20 Lad et al. reported significantly improved placement of the tibial component in the coronal and sagittal planes with computer-assisted TKA over jig-based TKA. 21 This issue also has 3 manuscripts on total hip replacement. Sanjay Agarwala et al. conducted a retrospective analysis of uncemented distally locked prostheses in revision hip arthroplasty with proximal femoral bone loss. 22 Mohanty et al. reported the results of total hip arthroplasty for failed infected internal fixation of hip fractures and found it a suitable alternative. 23 Che-Wei Liu et al. reported a series of acute presentations of late infected TKA and found arthroscopic debridement combined with antibiotic irrigation and suction to be an effective treatment. 24 We would like to thank Dr. SV Vaidya and Dr. Suryanarayan Pichai for their efforts. This endeavour will go a long way toward organizing thematic subspecialty issues.
The new editorial team led by Prof. Sudhir Kumar has started working for IJO. Only with collective efforts will IJO attain greater heights. | 2018-04-03T03:07:42.677Z | 2013-02-01T00:00:00.000 | {
"year": 2013,
"sha1": "43da71de89367b08b9d4d687316014bbba8b9bc8",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3601221",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e50d905c4a1e414b6bec9cfb935af4cd6422b2e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267636404 | pes2o/s2orc | v3-fos-license | Using Voice-to-Voice Machine Translation to Overcome Language Barriers in Clinical Communication: An Exploratory Study
Background Machine translation (MT) apps are used informally by healthcare professionals in many settings, especially where interpreters are not readily available. As MT becomes more accurate and accessible, it may be tempting to use MT more widely. Institutions and healthcare professionals need guidance on when and how these applications might be used safely and how to manage potential risks to communication. Objectives Explore factors that may hinder or facilitate communication when using voice-to-voice MT. Design Health professionals volunteered to use a voice-to-voice MT app in routine encounters with their patients. Both health professionals and patients provided brief feedback on the experience, and a subset of consultations were observed. Participants Doctors, nurses, and allied health professionals working in the Primary Care Division of the Geneva University Hospitals, Switzerland. Main Measures Achievement of consultation goals; understanding and satisfaction; willingness to use MT again; difficulties encountered; factors affecting communication when using MT. Key Results Fourteen health professionals conducted 60 consultations in 18 languages, using one of two voice-to-voice MT apps. Fifteen consultations were observed. Professionals achieved their consultation goals in 82.7% of consultations but were satisfied with MT communication in only 53.8%. Reasons for dissatisfaction included lack of practice with the app and difficulty understanding patients. Eighty-six percent of patients thought MT-facilitated communication was easy, and most participants were willing to use MT in the future (73% professionals, 84% patients). Experiences were more positive with European languages. Several conditions and speech practices were identified that appear to affect communication when using MT. Conclusion While professional interpreters remain the gold standard for overcoming language barriers, voice-to-voice MT may be acceptable in some clinical situations. Healthcare institutions and professionals must be attentive to potential sources of MT errors and ensure the conditions necessary for safe and effective communication. More research in natural settings is needed to inform guidelines and training on using MT in clinical communication.
BACKGROUND
A "bewildering diversity of apps" 24 have been developed to overcome language barriers, both fixed-phrase translators and general machine translation (MT) apps, which may be rules-based, statistical, or deep learning-based (neural). 25 Fixed-phrase translators propose pre-translated sentences that are then returned in the patient's language, in either text or audio. General MT apps such as Google Translate or Microsoft Translator, and MT devices such as Pocketalk 26,27 or Jarvisen 28 offer voice-to-voice machine translation, which involves speech recognition and transcription, translation of the transcript, and speech generation of the translation.
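The three-stage pipeline just described lends itself to a compact illustration. The sketch below is a hypothetical stand-in, not the API of Microsoft Translator, Pocketalk, or any real service; every function name and placeholder output is invented for illustration only.

# Minimal sketch of the three-stage voice-to-voice MT pipeline described
# above (speech recognition -> text translation -> speech generation).
# All component functions are hypothetical stand-ins, not a real app's API.
from dataclasses import dataclass

@dataclass
class TurnResult:
    transcript: str      # what the speech recognizer heard
    translation: str     # translated transcript shown/read to the listener

def recognize_speech(audio: bytes, lang: str) -> str:
    """Hypothetical ASR stand-in; a real app calls a speech-to-text service."""
    return "do you have chest pain"  # placeholder output

def translate_text(text: str, src: str, tgt: str) -> str:
    """Hypothetical MT stand-in; a real app calls an NMT service."""
    return "avez-vous des douleurs thoraciques"  # placeholder output

def synthesize_speech(text: str, lang: str) -> bytes:
    """Hypothetical TTS stand-in; a real app returns playable audio."""
    return text.encode("utf-8")  # placeholder "audio"

def translate_turn(audio: bytes, src: str, tgt: str) -> TurnResult:
    transcript = recognize_speech(audio, src)
    translation = translate_text(transcript, src, tgt)
    synthesize_speech(translation, tgt)  # audio played to the listener
    # Displaying the transcript lets a literate speaker verify recognition,
    # one of the error-catching practices noted later in this study.
    return TurnResult(transcript, translation)

if __name__ == "__main__":
    print(translate_turn(b"...", src="en", tgt="fr"))

Because errors can arise at each of the three stages, showing the intermediate transcript (as Microsoft Translator does) gives speakers a chance to catch recognition failures before a wrong translation is voiced.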
Both types of apps have their strengths and limitations. The translation quality of fixed-phrase apps is generally reliable, provided they have been produced by professional interpreters. However, because such apps contain a finite number of mostly declarative sentences and closed questions, communication tends to be limited and phrases cannot be reformulated if the listener has trouble understanding. While fixed-phrase translators may be useful when interpreters are unavailable and for low-stakes, everyday conversations, 29 some users have found them too time-consuming to use in relation to their expected benefits. 30 MT apps and devices have the potential to allow unlimited and more natural exchanges and tend to offer more languages than fixed-phrase apps, but can require considerable effort on the part of users to initiate and carry out multi-turn conversations. 31 In addition, numerous concerns have been raised about the accuracy of MT, [32][33][34][35][36] which can vary considerably depending on the languages involved, speakers' speech patterns, and conversation content. In one systematic review of MT in healthcare, most of the studies reviewed concluded that "MT error rates were currently unacceptable for actual deployment in health settings". 37 The move from statistical (SMT) to neural machine translation (NMT) models has significantly improved the quality of MT results. Whereas SMT looks for statistical patterns and uses probabilities in words and phrases to make translations, NMT examines translated phrases to identify linguistic patterns and structures, which are then used to predict translation outputs on new data. NMT tends to be more accurate than SMT due to its ability to learn more diverse and complex language patterns. [38][39][40][41][42] Very few studies have been conducted using voice-to-voice MT in natural, unscripted settings, 43,44 conditions that pose additional challenges such as accent and dialect recognition, fast or complex speech, and ambient noise. [45][46][47][48][49][50] However, we found no studies that specifically examined whether clinicians and patients in real-life situations are able to successfully adjust their speech behavior, and whether such adjustments allow for satisfactory communication when using MT. 51 MT apps are already being used informally and unofficially by healthcare professionals for languages and situations where interpreters are not easily available. 52,53 As MT becomes more accurate and accessible, it may become tempting to forego the costs and inconveniences of scheduling human interpreters and rely on MT more broadly. This underscores the need for more research on the use of MT in everyday clinical practice and guidance on when and how such apps might be used safely and efficiently. 54 Towards this aim, we explored the use of voice-to-voice MT in routine clinical encounters to identify conditions and practices that may affect communication with MT.
Study Context
The project was conducted in the Primary Care Division at the Geneva University Hospitals (HUG). The HUG is a 2000-bed, public hospital group, serving a socially, culturally, and linguistically diverse population of over 500,000. 55 At the HUG, about half of patients are of non-Swiss nationality and speak more than 70 different languages. About 12% of patients speak no French at all. 56 Community interpreters (in-person and over-the-phone) have been available to HUG staff since 1999, and a range of actions have been developed to facilitate timely and appropriate use of interpreter services. 57 Use of MT apps is currently neither officially encouraged nor prohibited, but anecdotal evidence suggests widespread use when interpreters are unavailable or impractical.
The Primary Care Division consists of several units providing outpatient consultations for problems of primary care medicine 58 and is the hospital Division with the greatest number of interpreter missions at the HUG. We chose this Division because we were interested in the opinions and experiences of health professionals and patients who are accustomed to using interpreters and could reflect on the comparative advantages and disadvantages of using a translation app to communicate.
Study Participants
All health professionals (doctors, nurses, allied health professionals) working in the Primary Care Division were eligible to participate in the study. Participants were recruited through several methods. Unit heads were asked to propose staff who might be interested and available to participate, who were then contacted directly. The study was also presented in a weekly training session for residents, who were invited to participate in the study. Social workers and dieticians were contacted individually to explain the study and invite them to participate. In all instances, the study objectives were explained, anticipated difficulties were discussed, and the translation app and device were demonstrated.
App Selection
Participants were requested to use either the Microsoft Translator app 59 on their (personal or professional) Android smartphone or the translation device Pocketalk W. 60 Microsoft Translator (MST) is a free app that provides voice-to-voice translation for a wide range of languages. While several such apps exist, we chose this app for its user-friendly interface that facilitates two-way conversations, and for the option to choose among different voices for audio translations (male/female; accent). Pocketalk W (PW) is a purchasable translation device providing voice-to-voice translation for a wide range of languages that can be used with Wi-Fi or cellular data. We proposed the Pocketalk as an alternative to Microsoft Translator for participants who were unable or preferred not to use their professional or personal cell phones for translation.
Both MST and PW are certified compliant with the US Health Insurance Portability and Accountability Act (HIPAA), which sets standards for the protection of health information, and the EU GDPR regulation, which sets standards for all sensitive personal data including race, religion, political affiliations, sexual preferences, biometric or genetic data, and any other information relating to health. 61,62 To further enhance data privacy, health professionals were instructed to decline voice clip contributions for review (in the app settings).
Study Procedures
Participants (health professionals) were asked to conduct at least 5 consultations using the selected MT app or device, so that they had a chance to become familiar with it. While most volunteers had used Google Translate, none were familiar with Microsoft Translator. Volunteers were provided with basic instructions on how to install and open the app, and how to select languages and tap the mic before speaking. They were advised to speak in complete sentences and to use plain language.
Health professionals were free to choose the consultations in which they would use the app or device but were asked to select languages for which both speech recognition and audio translation were available (voice-to-voice translation), and to avoid consultations where they anticipated emotionally charged or informed consent discussions. These minimal instructions were designed to mimic what might happen in real-world practice, while avoiding situations where communication is likely to be particularly difficult or high-stakes.
At the end of each consultation, participants completed a brief questionnaire that included 8 closed questions, plus space for open comments (see Box 1). In addition, HPs were requested to ask 3 closed questions of their patients (Box 2).
Box 1 Post MT-use questionnaire for health professionals.

Consultations could be planned or unplanned, with or without prebooked interpreters.
Interpreter services used by the hospital were informed of the project, and for consultations where an interpreter had already been booked, the interpreter was asked to wait outside the consultation in case the health professional was unable to communicate adequately with the patient using the translation app. For unplanned consultations, participants were instructed to call a telephone interpreter in the case of communication difficulties.
As a complement to the questionnaire responses, PH observed a small number of planned consultations where the apps were used (and where an interpreter was pre-booked). PH explained to patients that their health professional would be using the app to communicate, and that the interpreter would be available in the case of communication difficulties. PH obtained verbal consent to observe the consultation and to ask a few brief questions after the consultation. Patients were informed that no health-related or identifying information would be collected, only information pertaining to use of the translation app.
Observations focused on whether the health professional and patient seemed comfortable using the app, whether they made eye contact while speaking, and what, if any, strategies were used to ensure understanding (Box 3). Obvious translation errors and any difficulties encountered were also noted. After observing the consultation, PH asked patients a few brief questions, using either the app or an interpreter to translate (Box 2).
Data Analysis
Data analysis included descriptive statistics of patients' and health professionals' answers to questionnaire items and summaries of observed speech practices and difficulties.
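As a rough illustration of this descriptive analysis, the sketch below tallies closed-question responses of the kind reported under Results. The records and field names are invented placeholders, not the study data.

# Minimal sketch of descriptive statistics over per-consultation
# questionnaire records. All values and column names are illustrative.
from collections import Counter

consultations = [
    {"goal_achieved": True,  "hp_satisfied": True,  "language_group": "European"},
    {"goal_achieved": True,  "hp_satisfied": False, "language_group": "non-European"},
    {"goal_achieved": False, "hp_satisfied": False, "language_group": "non-European"},
]

def proportion(records, key):
    """Fraction of records answering True for a given closed question,
    skipping consultations where the question was not answered."""
    answered = [r[key] for r in records if r.get(key) is not None]
    return sum(answered) / len(answered) if answered else float("nan")

for key in ("goal_achieved", "hp_satisfied"):
    print(key, f"{proportion(consultations, key):.1%}")
print(Counter(r["language_group"] for r in consultations))

Skipping unanswered questions when computing denominators mirrors the varying totals reported for patients' responses below.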
Ethical Approval
While research ethics review is typically not required for quality improvement activities that are within professional practice, we submitted our project to the Geneva Cantonal Research Ethics Commission (CCER), which considered it exempt because the aim is outside the scope of the law.
RESULTS
Fourteen health professionals conducted 60 consultations (4 with PW, 56 with MST) in 18 languages. Fifteen consultations were observed (2 with PW, 13 with MST). No patient refused the observer's presence. The health professionals included 5 doctors, 6 nurses, and 3 allied health professionals.
All four consultations attempted with the PW device (Albanian, Tamil, Italian, English) were wholly unsatisfactory due to technical difficulties. Speech recognition tended to be poor, which led to nonsensical translations. In addition, audio translations were delayed and sometimes absent, probably due to unstable Wi-Fi or cellphone networks. Users also found the PW device and its interface awkward. Due to these difficulties, we decided to abandon the PW. Below, we present results from the 52 consultations using the voice-to-voice option in MST (in 4 cases, text translation was used because voice-to-voice translation was not available for the selected language).
Questionnaire Responses
Health professionals (HPs) used MST in 52 consultations and 13 languages. Thirty-four consultations involved European languages, including English, Bulgarian, Spanish, Portuguese, Romanian, Russian, and Ukrainian. Eighteen consultations involved non-European languages, including Arabic, Bengali, Chinese, Hindi, Tamil, and Turkish.
Overall, HPs successfully achieved their goals in 43/52 consultations (82.7%) but were satisfied with communication in only 28/52 (53.8%). Spontaneous reasons given for dissatisfaction were lack of practice with the app (their own and patients'), which could lead to poor translations and slow down communication.
Totals vary for patients' responses because health professionals did not always remember to ask patients to answer the questions. Thirty-six out of 41 patients (87.8%) thought MT-facilitated communication was easy, and most participants were willing to use MST again: 71.2% of professionals (37/52) and 88.0% of patients (37/42). Seventy-seven percent (23/30) of patients thought the app would be preferable or equal to an interpreter for discussing intimate or sensitive topics with their health professional.
Experiences were more negative for non-European languages (Table 1), mainly due to non-recognition and poor translation of patients' speech.
Open-Ended Comments on the Questionnaire. HPs wrote brief comments on 36 of the questionnaire forms. Sixteen noted that their patients' speech was poorly translated (Arabic, Turkish, Tamil, non-native speaker of Russian); 8 commented on circumstances where the app worked well (with practice it gets easier; works well for simple exams, when using simple phrases, with patients who speak standardized language); 6 noted that their patient had difficulty learning to use the app; 2 said they found it difficult to use the app for emotional discussions; and 4 commented that communication went well despite the occasional translation error.
In two consultations (a Romanian-speaking Roma patient and a Turkish-speaking patient, both illiterate), an interpreter was called to ensure successful communication.In both cases, patients were reluctant to try using the app, were visibly flustered and upset, and had difficulty remembering to tap the mic and to speak in short turns.
When using MST, speakers tended to look at the listener just before tapping the mic, then looked at the phone to verify that their speech was correctly recognized.Speakers often watched the listener while the text and audio translations were produced, which allowed them to monitor the listener's reaction and detect any comprehension problems.
Speech recognition problems and translation errors occurred when speech was disfluent (fillers, stutters, pauses), when speakers used only intonation to indicate a question that was then translated as an affirmation (e.g., "You don't have hypertension?"), with some numbers (e.g., "one, two, three" translated as 123), when using non-standard dialects (e.g., Maghrebi Arabic), or when mixing words from different languages (e.g., a Spanish speaker who used the French word "rendez-vous" instead of "cita" for an appointment). Speakers sometimes forgot to tap the mic or spoke before the mic was activated, which also contributed to poor or incomplete speech recognition.
Speech recognition and translation errors were quickly noticed and communicated through facial expressions (furrowed brow, laughter). When this occurred, both health professionals and patients generally either reformulated or asked for clarification. Occasionally, listeners would ignore a poor translation if overall understanding was good.
The smoothest exchanges occurred when health professionals took the time to explain and demonstrate the app to patients, created an unrushed atmosphere, spoke in short turns, and used simple language and visual or written supports to ensure understanding (e.g., writing down medication names or numbers). When speakers were stressed from lack of practice with the app or rushed due to time pressures, speech was more disfluent, which could lead to recognition and translation problems.
Technical issues were rare, but a few times the app had trouble detecting speech, possibly due to internet connection problems. Waiting, or closing and reopening the app, usually corrected the problem but caused stress and interrupted the flow of communication.
Potential Advantages of Using a Translation App: Remarks from Participants.
Several patients commented on their experience with MST after the consultation. One patient who is hard of hearing said she appreciated being able to read the translations and commented that it was the first time she had understood everything without having to ask health professionals or interpreters to repeat themselves. Another patient thought the app would help her sister be more autonomous and less dependent on her overly controlling husband for translation. A patient with prostate problems said he would be more at ease using the app to talk with his doctor about his symptoms. Several patients asked for help in downloading the app onto their phones so they could use it in other contexts.
Health professionals commented that MST would be most appropriate in consultations involving the exchange of factual information (acute problems, medicine checks, follow-up appointments, simple exams), consultations with literate patients (who could verify and correct speech recognition), in situations where there was only a partial language barrier (when one or the other spoke and understood some of the other's language, but not enough to forego an interpreter), and potentially with patients who were known to frequently miss appointments (to avoid unnecessary billing for interpreter services). Several nurses found MST to be a welcome and superior alternative to telephone interpreters, who were not always quickly available and were often in noisy environments. Both patients and health professionals commented that the app had potential to facilitate patients' communication autonomy and to ensure confidentiality.
Potential Disadvantages of Using a Translation App: Remarks from Participants.

Both patients and health professionals commented that their lack of familiarity and practice with the app made communication more difficult. A few health professionals commented that communication could take even longer than with an interpreter if they had to take time out of the consultation to explain the app to patients. They also commented that having to pay attention to how they spoke (rather than relying on interpreters to make sense of their or their patients' sometimes disordered or incomplete phrases) could be tedious at first, but that with practice it became easier. Finally, some health professionals thought that developing a relationship and eliciting patients' (sometimes emotional) social and illness narratives could be difficult and time-consuming because of the need to speak in relatively short (unnatural) turns.
DISCUSSION
Participants in our study were able to communicate in a majority of interactions using voice-to-voice MT, and most patients and healthcare professionals were moderately to very satisfied with the MST-translated interactions and willing to repeat the experience in the future. However, experiences and satisfaction varied depending on the language being translated, the type of interaction, and speakers' ability to adapt their speech patterns to accommodate the app.
To our knowledge, ours is the first study to explore the use of voice-to-voice MT in real-world clinical situations, for a wide range of languages, and with health professionals and patients who are accustomed to using interpreters to communicate. We identified only two previous studies that explored the use of voice-to-voice MT in natural settings. While reactions to MT were positive, both studies were limited to a single language (Spanish) and conducted in contexts with limited or no access to interpreters, conditions that may increase the likelihood of satisfaction. 63,64 Health professionals and patients in our study found that voice-to-voice MT was useful and acceptable, but only for some languages and in some clinical situations.
While more experience and feedback from a wider range of medical specialties and clinical situations is needed to inform the development of guidelines for safe and effective use of MT, our preliminary results suggest that voice-to-voice MT is likely to be more successful:
• With speakers of European languages, or speakers of non-European languages who can produce and understand "standardized" forms of their language 65
• With speakers who are comfortable with smartphone technology
• With speakers who are able to modulate their speech to accommodate MT, in particular to speak in full sentences using plain language
Health professionals who use voice-to-voice MT need to be aware of common sources of speech recognition problems and translation errors and know how to avoid or manage them. Compared to human interpreters, voice-to-voice MT has several disadvantages, including difficulty detecting contextual clues and translating non-standard language, cultural expressions, and disfluency (fillers, stutters, pauses). This underscores the importance of general communication skills for detecting and addressing potential communication problems, such as using plain language, pacing one's speech, being attentive to nonverbal cues, verifying understanding, and using visual and written supports.
Study Limitations
Our study has several limitations. First, we did not systematically examine the accuracy of translations produced by MST. We were not interested in specific translation errors, but rather in whether and how health professionals successfully managed communication when using MT. Although we observed that listeners signaled when strange or unclear translations were produced and that speakers responded by repeating, reformulating, or using visual aids, it is possible that undetected and potentially important misunderstandings occurred. It would be useful to examine more closely how different kinds of translation errors affect communication and understanding.
Second, we had limited feedback from patients. HPs often failed to ask patients the feedback questions, and responses may therefore not adequately reflect patient experiences. Most HPs said they simply forgot or did not have time to ask the questions, but it is possible that they chose (consciously or unconsciously) not to ask the questions in situations where communication was more difficult, and where patients may have had a more negative experience. Some patients may not have felt comfortable giving negative feedback to (or about an interaction with) their HP.
Finally, our findings are limited to a self-selected group of HPs working in a single, hospital-based primary care service, and may therefore not be relevant to other HPs or clinical contexts. More research is needed on whether and how HPs in other medical specialties and healthcare contexts can communicate effectively with patients using MT before more general guidelines and recommendations can be proposed. Nonetheless, our results suggest that under certain conditions voice-to-voice MT can be an acceptable and effective means to overcome language barriers.
CONCLUSION
Effective communication is essential for the delivery of quality healthcare, and trained, professional interpreters continue to be the gold standard for overcoming language barriers in healthcare. Nonetheless, time and cost pressures, limited access to interpreters, and easy access to mobile translation apps have led to increased interest in and use of MT apps to overcome language barriers with patients.
While voice-to-voice MT may be a potentially useful and cost-saving strategy for addressing language barriers in some clinical situations, its effective use requires an understanding of its limitations as well as significant speech adaptations. Healthcare institutions and professionals must be attentive to the potential sources of translation and communication errors and ensure the conditions necessary for effective communication.
Box 3 Observation checklist.
• Does the HP explain and demonstrate the app to the patient?
• Do the HP & patient maintain eye contact?
• Does the HP speak in simple phrases?
• Does the HP use simple language?
• Does the HP verify the patient's understanding?
• Does the HP verify his/her own understanding?
• Does the HP reformulate when necessary?
• Does the HP use pen and paper to complement or clarify the translation?
• What technical difficulties were encountered?
• What translation errors occurred?
• What other difficulties were encountered?
Box 2 Questions asked to the patient. | 2024-02-14T06:18:32.253Z | 2024-02-12T00:00:00.000 | {
"year": 2024,
"sha1": "1243d3fd8aec5de4875e7f484be3d94a27b9a7a7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11606-024-08641-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffc8c2f65d2278cf187492ceb9ecf4269bbc07ea",
"s2fieldsofstudy": [
"Medicine",
"Linguistics",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139930756 | pes2o/s2orc | v3-fos-license | Study on Microstructure and Properties of 105mm Thick 5083 Aluminum Alloy Hot-Rolled Plate
The microstructure and tensile properties of a 105 mm thick 5083 aluminum alloy hot-rolled plate were investigated at different thickness layers by optical microscopy, scanning electron microscopy, transmission electron microscopy, and tensile tests. The results show that the microstructure and tensile properties are inhomogeneous along the thickness direction, and a "double low" phenomenon of low strength and low plasticity occurs in the central area. Many sub-micron sized second phases are present in the center and near-center areas, giving rise to two completely different dislocation distribution states; these phases effectively impede dislocation movement during plastic deformation and improve the strength of the alloy.
Introduction
With the development of global industrialization, the requirements on the mechanical properties of 5083 aluminum alloy thick plate have become ever more demanding. To meet the need for large deformations, wider and thicker plates are being produced. In actual manufacturing, however, the larger the plate, the more difficult it is to produce, principally because the solidified structure is inhomogeneous owing to the large temperature gradient along the thickness and width directions. At present, research on 5083 aluminum alloy plate focuses primarily on the microstructure, heat treatment process, and the relationship between deformation and properties [1][2][3][4], while there are few reports on the inhomogeneity of microstructure and properties, and no domestic test standard exists for plates thicker than 20 mm. It is therefore necessary to carry out research in this field and to improve the testing framework as soon as possible. In this paper, the microstructure and tensile properties of a 105 mm thick 5083 aluminum alloy hot-rolled plate are studied to discuss the factors causing the inhomogeneity, providing a theoretical basis for further improving the quality of 5083 aluminum alloy plate.
Experiments
The material used in this experiment is a 2500 mm × 105 mm (width × thickness) 5083 aluminum alloy hot-rolled plate in the free-machining state; its chemical composition is shown in Table 1. First, samples were taken from one end of the plate to the center along the transverse direction and labeled A, B, C, and D, respectively. Then, each sample was equally divided into ten blanks along the thickness direction and numbered from top to bottom, as shown in figure 1. Finally, tensile specimens were prepared according to the GB/T 228-2002 standard. The tensile tests were carried out at room temperature with a constant displacement rate of 1 mm/min to determine the average tensile strength, yield strength, and elongation. Metallographic samples were cut from the clamped ends of the 1-5# tensile specimens and examined under an optical microscope and a scanning electron microscope equipped with EDS. Thin wafers were taken near the deformation zones of the 4# and 5# tensile specimens and, after mechanical thinning, observed with a Tecnai G2 F20 transmission electron microscope. Figure 2 shows the microstructure of the 5083 aluminum alloy. From figures 2a-2e, the second phases and impurities are mainly distributed along the grain boundaries and elongated along the rolling direction. The phases precipitated in the surface layer of the hot-rolled plate are small, dense, and continuous, while those in the center are relatively large, dispersed, and intermittent. The reason is that at the beginning of hot rolling, the large deformation in the surface layer and the cooling effect of the rolling oil emulsion break up the second phases and suppress their thermally driven growth, whereas in the later stage of rolling, the second phases in the center grow rapidly because of the high temperature there. Closer observation shows that the alloy consists mainly of lamellar grains, which are also elongated along the rolling direction. From figures 2f-2j, the grains in the center (position 5 in figure 1b) and near-center area (position 4) are nearly equiaxed, and the grain boundary edges are serrated, which indicates that partial recrystallization occurred during hot rolling; the closer to the center, the greater the degree of recrystallization. Figure 3 shows the SEM image of the 5083 aluminum alloy. Energy spectrum analysis indicates that the alloy is composed of α-Al, (Fe,Mn)Al6, Al12Fe3Si, and a small amount of Mg2Si [5]. The results of the EDS analysis are shown in Table 2. Figure 4 shows the dislocation distribution along the crystallographic direction <111>α in the center and near-center areas of the alloy. A number of sub-micron sized second phases and severe dislocation pile-up can be observed in these areas. Energy spectrum analysis shows that these sub-micron sized second phases are mainly AlMg(Mn, Cr). The dislocation distribution in the central area is very uneven and approximately presents a cell-structure characteristic, while the dislocations in the near-center area are distributed uniformly in a Taylor lattice arrangement [6]. Comparing figures 4a and 4b, the number of sub-micron sized second phases in the center of the plate is smaller than in the near-center area, and the dislocation clusters are scattered, which may explain the different dislocation distributions in the two adjacent areas.
According to plate rolling theory [7], the thicker the aluminum alloy ingot, the larger the difference in deformation between the different stages of hot rolling. Generally, in the initial stage of rolling, deformation occurs mainly in the surface layer of the plate. In this case, dislocation motion throughout the central area is mainly planar slip, and the dislocation state presents a Taylor lattice distribution, as shown in figure 4b. In the later stage of rolling, deformation penetrates into the central area; dislocation motion is then dominated by dislocation climb and cross-slip, and cell structures start to appear. However, owing to partial recrystallization, opposite-sign dislocations annihilate in the cell walls, which weakens the cell-structure character, as shown in figure 4a. From the surface to the center, the strength of the plate shows a declining-rising-declining trend, while the elongation first increases and then decreases. In particular, a "double low" phenomenon of low strength and low plasticity occurs in the central area. Compared with the high strength of the near-center area, there are two important reasons for the decrease in the center. On the one hand, the second phases in the central area are coarser and prone to causing stress concentration, while the many inclusions and voids reduce the effective load-bearing area [8,9]. On the other hand, recrystallization during hot rolling softens the microstructure. Considering the interaction between dislocations and particles [10], when dislocations meet a pair of particles that are close together, they cannot cut through both particles simultaneously and cannot retreat; they bend locally and are eventually trapped by the particles, forming a relatively independent "inner packing" dislocation group, as shown in figure 4a. The experimental results show that recrystallization is only weakly effective at eliminating such dislocation groups. Because the dislocation group is much larger than a single particle, it blocks dislocation motion only weakly, and it is more likely to cause stress concentration, resulting in a decrease in the strength of the plate.
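For a quantitative handle on the strengthening contribution of non-shearable sub-micron particles discussed above, the Orowan bypass relation is commonly used (a standard textbook expression; the paper itself reports no such calculation):

\Delta\tau \approx \frac{G b}{\lambda}

where G is the shear modulus, b the Burgers vector, and λ the mean interparticle spacing. The coarser, more widely spaced second phases observed in the plate center imply a larger λ and hence a smaller strengthening increment, consistent with the "double low" trend reported here.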
Conclusions
(1) The microstructure and tensile properties of the 5083 aluminum alloy hot-rolled plate are inhomogeneous along the thickness direction. A "double low" phenomenon of low strength and low plasticity appears in the center of the plate.
(2) There are two important reasons for the appearance of the "double low" phenomenon: on the one hand, a large number of coarse phases and impurities are gathered in the central area; on the other hand, recrystallization in this area is more severe.
(3) A number of sub-micron sized second phases are present throughout the central area; they have a precipitation strengthening effect on the 5083 aluminum alloy hot-rolled plate. In the very center of the plate, however, dislocations are trapped by these particles and form a relatively independent "inner packing" dislocation group, resulting in a decrease in strength. | 2019-04-30T13:04:32.212Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "626b370382745fde150ae613c000925bb9d7c2dc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/230/1/012037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0d4c33e67c52dc06e225ac5f91d4df9639da27c7",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
39389426 | pes2o/s2orc | v3-fos-license | trans-Acting Arginine Residues in the AAA+ Chaperone ClpB Allosterically Regulate the Activity through Inter- and Intradomain Communication*
Background: Two neighboring, trans-acting arginines in the N-terminal AAA+ domain are essential for oligomerization and activity of ClpB/Hsp104. Results: Both arginines couple nucleotide binding to oligomerization and allosterically regulate the ATPase activity. Conclusion: Site-specifically engineered, cross-linked dimers of AAA+ subunits can be utilized to study allosteric regulation. Significance: This study elucidates the mechanistic role of an essential arginine pair conserved in different AAA+ proteins.
The molecular disaggregation machine ClpB/Hsp104 (caseinolytic peptidase B/heat shock protein 104) is crucial for maintaining protein homeostasis because it reactivates aggregated proteins under cellular stress conditions in concert with the DnaK/Hsp70 chaperone system (1)(2)(3)(4). Belonging to the superfamily of AAA+ proteins (ATPases associated with various cellular activities), ClpB/Hsp104 functions as a hexameric complex that converts the chemical energy from ATP hydrolysis into mechanical force (5). Protein disaggregation by ClpB/Hsp104 involves the threading of single polypeptide chains out of the aggregate through the central pore of the hexameric ring (6). High-resolution structural information is available for ClpB from Thermus thermophilus, showing a domain architecture that consists of a small N-terminal domain and two highly conserved AAA+ domains, also called nucleotide binding domains (NBD1 and NBD2), per monomer (7). There is a long helical insertion into NBD1, named the M-domain, which was recently identified as a major regulatory element as well as the interaction site for the co-chaperone DnaK/Hsp70 (8-13).
The ATPase modules NBD1 and NBD2 are the motors that drive the molecular machine in a cooperative fashion. The catalytic sites, which are located at the interface between two subunits in the hexameric complex, are built up by highly conserved motifs, namely the Walker A and B motifs that are crucial for nucleotide binding and ATP hydrolysis, respectively (14,15). Furthermore, there are essential arginine residues, often termed arginine fingers, that contribute to the active sites in trans because they are in close proximity to the nucleotide bound to the adjacent subunit. The role of such conserved arginines in AAA+ proteins has been investigated extensively (16-19). However, it is not trivial to distinguish a truly catalytic arginine finger, as initially identified in GTPase-activating proteins (20), from conserved arginines that either stabilize the hexameric state or are crucial for allosteric regulation. The complexity is increased further by the fact that several AAA+ proteins, such as ClpB/Hsp104, ClpA, ClpC, and p97/VCP/Cdc48, possess two highly conserved, neighboring arginines in their NBD1 subunit interface. With this study, we aimed at understanding the mechanistic role of this essential, trans-acting pair of arginines in allosteric communication between AAA+ subunits.
We showed previously that NBD1-M and NBD2 of ClpB from T. thermophilus can be expressed and purified separately (21), which allowed a detailed and quantitative characterization of both AAA+ motor domains with regard to nucleotide binding, oligomerization, and activity (22)(23)(24)(25). Here, we used the construct NBD1-M. Inspired by successful work on the mechanism of ClpX, another AAA+ protein (26-28), we applied a combined approach of covalently linking NBD1-M subunits and introducing Walker A/B and arginine finger mutations. Using these fixed and well determined arrangements of wild-type and mutated subunits in a direct neighborhood, it was possible to dissect the mechanisms of allosteric regulation and intersubunit communication in the AAA+ chaperone ClpB/Hsp104.

* This work was supported by the Max Planck Society and by a Ph.D. scholarship.
Intermolecular Disulfide Bond Formation to Generate Covalently Linked ClpB Dimers-Cysteine residues were introduced to facilitate the formation of intermolecular disulfide bonds in the ClpB NBD1 subunit interface. The design was based on an available planar hexameric model of ClpB (30). Prior to the reaction, the reducing agent β-mercaptoethanol was removed by buffer exchange. The formation of covalently linked dimers of full-length ClpB or NBD1-M variants using the single cysteine mutant pair P221C/M394C was performed in 50 mM Tris/HCl, pH 7.5, 50 mM KCl, 5 mM MgCl2. The buffer was EDTA-free because 50 μM copper phenanthroline was used as the oxidizing agent. Equimolar amounts of the respective cysteine variants (50 μM each) were used in a 5-ml reaction volume. 2 mM ADP was added to trigger oligomerization of the ClpB variants. The mixture was incubated for 1 h at 37°C. Subsequently, the reaction mixture was applied to a Superdex 200 26/60 size exclusion column equilibrated with buffer A to separate the formed dimer from unreacted monomer. The purity of the dimer products was evaluated by non-reducing SDS-PAGE. Test reactions were performed to ensure that homodimer formation was negligible. Another pair of cysteines (Q184C/A390C) was also used to form a covalently linked dimer as described above. However, because this cross-linked variant showed severely impaired ATPase activity, it was not considered for further experiments.
Steady-state ATPase Measurements-Steady-state ATPase activity was measured in a coupled colorimetric assay at 25°C using a JASCO V-650 spectrophotometer (JASCO Germany GmbH, Gross-Umstadt, Germany). ClpB NBD1-M variants were incubated at 25°C in assay buffer (50 mM Tris/HCl, pH 7.5, 100 mM KCl, 2 mM EDTA, 0.4 mM phosphoenolpyruvate, 0.4 mM NADH, 0.1 g/liter BSA, 4 units/ml pyruvate kinase, 6 units/ml lactate dehydrogenase, and 10 mM MgCl2). Importantly, reducing agents were strictly excluded from the assay buffer to maintain the intermolecular disulfide bonds of covalently linked dimers. The reaction was started by adding ATP (0.01-8 mM). Measurements were performed at protein concentrations of 1-30 μM (with respect to monomeric units). The decreasing absorption at 340 nm was monitored over time, and the maximal slope was used to determine the ATPase turnover rate per monomer (ε(NADH) = 6220 M⁻¹ cm⁻¹). The data were analyzed with the Hill equation (Equation 1) using the program GraphPad Prism version 5.0.
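Equation 1 is not reproduced in this excerpt; the sketch below assumes the standard Hill form v = vmax*[S]^h/(K^h + [S]^h), a 1 cm path length, and synthetic data in place of the measured traces (the authors used GraphPad Prism, not this code).

# Convert the A340 slope from the NADH-coupled assay into an ATPase
# turnover, then fit the Hill equation. All data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

EPS_NADH = 6220.0   # M^-1 cm^-1, as given in the text
PATH_CM = 1.0       # cuvette path length (assumption)

def turnover(slope_A340_per_s, enzyme_M):
    """ATP hydrolyzed per monomer per second; one NADH oxidized per ATP."""
    return slope_A340_per_s / (EPS_NADH * PATH_CM * enzyme_M)

def hill(S, vmax, K, h):
    # Standard Hill form; presumably the shape behind "Equation 1".
    return vmax * S**h / (K**h + S**h)

atp_mM = np.array([0.01, 0.05, 0.1, 0.5, 1, 2, 4, 8])
v = hill(atp_mM, 2.0, 0.8, 2.6) * (1 + 0.03 * np.random.randn(atp_mM.size))

(vmax, K, h), _ = curve_fit(hill, atp_mM, v, p0=[2, 1, 2])
print(f"vmax={vmax:.2f} s^-1, K_0.5={K:.2f} mM, Hill n={h:.2f}")

A fitted Hill coefficient well above 1 (here around 2.6) is the kind of result that, in the paper, signals cooperative hydrolysis by an oligomeric species.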
Stopped Flow Experiments (Determination of MANT-dADP Binding Parameters)-Nucleotide binding experiments were performed with a BioLogic SFM-400 stopped flow instrument in single mixing configuration (BioLogic Science Instruments, Claix, France) in buffer A at 25°C essentially as described previously (25). The fluorescently labeled nucleotide MANT-dADP was purchased from BIOLOG (Bremen, Germany). The excitation wavelength was set to 296 nm, and the fluorescence signal was observed using a 400-nm long pass filter (400FG03-25, LOT Oriel Group). This setup was used to selectively excite protein-bound MANT-dADP via fluorescence resonance energy transfer (FRET) from the initially excited tryptophan residues of the protein. Kinetic traces were recorded as triplicates and averaged. Data analysis was performed using the program GraphPad Prism version 5.0.
Kinetic traces from direct binding experiments (2 μM ClpB NBD1-M mixed 1:1 with 10-50 μM MANT-dADP) were fitted to exponential functions. The extracted rate constants were plotted against the nucleotide concentration. The association rate constants k_on for MANT-dADP binding were obtained from the slope of the resulting linear functions. Kinetic traces from dissociation experiments (2 μM ClpB NBD1-M incubated with 15 μM MANT-dADP and subsequently mixed 1:1 with 5 mM Mg-ADP) were fitted to exponential functions; the extracted rate constants correspond to the dissociation rate constants k_off of MANT-dADP binding. The K_D was calculated from the ratio k_off/k_on. Given protein concentrations refer to monomeric units.
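A minimal sketch of this kinetic workflow, with synthetic traces in place of the stopped-flow data (rate constants and concentrations are illustrative, not the measured values):

# Fit each trace to a single exponential, regress k_obs against
# [MANT-dADP] to get k_on (slope), and combine with a separately
# measured k_off to get K_D = k_off / k_on.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def single_exp(t, amp, k_obs, offset):
    return amp * (1 - np.exp(-k_obs * t)) + offset

t = np.linspace(0, 2, 200)                 # time, s
ligand_uM = np.array([10, 20, 30, 40, 50]) # pseudo-first-order excess
k_on_true, k_off_true = 0.05, 0.9          # uM^-1 s^-1 and s^-1 (synthetic)

k_obs = []
for L in ligand_uM:
    trace = single_exp(t, 1.0, k_on_true * L + k_off_true, 0.1)
    trace += 0.01 * np.random.randn(t.size)        # measurement noise
    (_, k, _), _ = curve_fit(single_exp, t, trace, p0=[1, 1, 0])
    k_obs.append(k)

fit = linregress(ligand_uM, k_obs)
k_on = fit.slope                            # uM^-1 s^-1
k_off = k_off_true                          # from the displacement experiment
print(f"k_on={k_on:.3f} uM^-1 s^-1, K_D={k_off / k_on:.1f} uM")

The intercept of the k_obs regression also estimates k_off, but, as the text notes, determining k_off in a separate chase experiment is more reliable.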
Fluorescence Equilibrium Titrations (Determination of ADP/ATP Binding Parameters)-Fluorescence titrations were performed at 25°C in buffer A using a JASCO FP-8500 fluorescence spectrometer (JASCO Germany GmbH) as described previously (25). The excitation wavelength was set to 296 nm to facilitate selective excitation of protein-bound MANT-dADP via FRET from nearby tryptophan residues. The MANT fluorescence signal was monitored at 441 nm. Direct titrations of ClpB NBD1-M variants (at 2 or 20 μM) with MANT-dADP (2-50 μM) were used to determine the binding affinity of MANT-dADP, which was subsequently applied as the reference K_D in displacement titrations to determine K_D(ADP) or K_D(ATP). Here, ClpB NBD1-M variants (at 2 or 20 μM) were incubated with MANT-dADP (15-40 μM) and subsequently titrated with ADP (2.5-300 μM) or ATP (125-20,000 μM). ATP titrations were performed in the presence of 2 mM phosphoenolpyruvate and 0.01 mg/ml pyruvate kinase (Roche Applied Science) as an ATP-regenerating system. The data were corrected for dilution effects and analyzed with a cubic equation for competing ligands using the initial concentrations of protein and MANT-dADP as well as the K_D(MANT-dADP) from the direct titration as input values (31). The program GraFit version 5.0 was used for data fitting.
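The cited cubic equation for competing ligands (ref. 31) is not reproduced here; the sketch below solves the same two-ligand, 1:1 binding equilibrium numerically instead, which yields the same bound-probe concentrations. All concentrations and K_D values are illustrative assumptions, not the measured parameters.

# Predict the protein:probe complex concentration during a displacement
# titration by solving the coupled mass-balance equation for free protein.
import numpy as np
from scipy.optimize import brentq

def free_protein(P_t, A_t, K_A, B_t, K_B):
    """Solve P_t = P + P*A_t/(K_A+P) + P*B_t/(K_B+P) for free protein P,
    assuming independent 1:1 binding of probe A and competitor B."""
    f = lambda P: P + P * A_t / (K_A + P) + P * B_t / (K_B + P) - P_t
    return brentq(f, 0.0, P_t)

def bound_probe(P_t, A_t, K_A, B_t, K_B):
    P = free_protein(P_t, A_t, K_A, B_t, K_B)
    return P * A_t / (K_A + P)   # concentration of protein:probe complex

P_t, A_t, K_A = 2.0, 15.0, 5.0   # uM; probe = MANT-dADP, K_A assumed
K_B = 10.0                        # uM; competitor (e.g., ADP), assumed
for B_t in [0, 10, 50, 150, 300]:
    print(f"[competitor]={B_t:>4} uM -> bound probe "
          f"{bound_probe(P_t, A_t, K_A, B_t, K_B):.2f} uM")

Fitting the experimental fluorescence to this model with K_B as the free parameter recovers K_D(ADP) or K_D(ATP), which is what the closed-form cubic accomplishes analytically.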
Gel Filtration Experiments with Static Light Scattering (SLS) Analysis-Gel filtration experiments were performed on a Superdex 200 10/300 GL column connected to a refractive index detector (2414 from Waters (Milford, MA)), a photodiode array detector (2996 from Waters), and a multiangle light scattering detector (Dawn Heleos, Wyatt (Santa Barbara, CA)) in buffer A as described previously (25). The running buffer was either nucleotide-free or supplemented with 2 mM ADP or 2 mM ATP. 40 μl of 100 μM NBD1-M (with respect to monomeric units) were injected, resulting in a final concentration of about 2 μM at the detector due to a 1:50 dilution by the gel filtration column. Molecular mass values were extracted from the multiangle light scattering data using the ASTRA software (Wyatt).
Dynamic Light Scattering (DLS) Experiments-DLS experiments were performed at protein concentrations of 1-30 μM (with respect to monomeric units) in buffer A, which was either nucleotide-free or supplemented with 2 mM ADP or 2 mM ATP, respectively. In the case of ATP, 2 mM phosphoenolpyruvate and 0.01 mg/ml pyruvate kinase (Roche Applied Science) were present as an ATP-regenerating system. The measurements were performed with a Viscotek 802DAT DLS instrument (Viscotek, Waghäusel, Germany). 40 scans with a measuring time of 5 s/scan were recorded. Hydrodynamic radii and molecular mass values were extracted from the DLS data using the OmniSIZE version 3.0 software package.
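For context, DLS analysis packages derive the hydrodynamic radius from the measured translational diffusion coefficient via the Stokes-Einstein relation (standard physics, not a detail given in the text):

R_h = \frac{k_B T}{6 \pi \eta D}

where k_B is the Boltzmann constant, T the temperature, η the solvent viscosity, and D the diffusion coefficient extracted from the autocorrelation of the scattered light. Molecular mass is then estimated from R_h using an empirical mass-radius calibration for globular proteins, which is why DLS masses are model-dependent and best read as indicators of oligomeric state rather than absolute values.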
Disaggregation Assay (Chaperone-assisted Reactivation of Heat-aggregated α-Glucosidase)-The assay was performed to test whether disulfide cross-linking using the cysteine pair P221C/M394C affects the chaperone activity of full-length ClpB. 0.2 μM α-glucosidase from Bacillus stearothermophilus was denatured for 8 min at 75°C in reaction buffer containing 50 mM MOPS, pH 7.5, 150 mM KCl, 10 mM MgCl2, and 5 mM ATP. Chaperones were added prior to refolding at 55°C. The total chaperone concentrations were c(ClpB) … The co-chaperones DnaK, DnaJ, and GrpE from T. thermophilus were expressed and purified as described previously (32). Samples were taken after 30, 60, and 120 min and diluted 1:10 into assay buffer containing 50 mM KPi, pH 6.8, 2 mM para-nitrophenyl-α-D-glucopyranoside, 0.1 mg/ml BSA. The α-glucosidase activity was measured at 40°C using a microplate spectrophotometer (Varioskan, Thermo Electron, Vantaa, Finland). The average rate of absorption increase at 405 nm was monitored and normalized against a positive control containing α-glucosidase that was not heat-aggregated. Importantly, reducing agents were strictly excluded from all buffers to maintain the intermolecular disulfide bonds of covalently linked ClpB dimers.
Conserved Arginines in ClpB NBD1 Mediate the Coupling between Nucleotide Binding and Oligomerization-A pair of conserved, neighboring arginines in ClpB NBD1 (Arg-322 and
Arg-323) is located at the interface between two subunits in the oligomeric ClpB complex (7). Both residues are in close proximity to the ATP molecule bound to the neighboring active site and were shown to be essential for the catalytic activity (17). In order to gain insights about the mechanistic role of these arginines, we decided to work with the truncated ClpB construct NBD1-M, which comprises only the N-terminal nucleotide binding domain (NBD1) and the helical M-domain of ClpB (Fig. 1A). We showed recently that this separately expressed construct is a fully active ATPase displaying a strong coupling between nucleotide binding and oligomerization as well as a highly cooperative ATPase activity, thereby reflecting important properties of full-length ClpB (25). We replaced both conserved arginine residues by alanine in ClpB NBD1-M. In agreement with previous experiments on full-length ClpB by Yamasaki et al. (17), the single and double mutants R322A, R323A, and R322A/R323A, respectively, showed about 1000-fold reduced ATPase activity compared with the wild type, indicating that both conserved arginines are crucial for ATP hydrolysis in ClpB NBD1. Next, we tested whether the mutations R322A and R323A influence the nucleotide binding behavior of ClpB NBD1-M. First, we performed stopped flow experiments to extract the kinetic nucleotide binding parameters for fluorescently labeled MANT-dADP (Fig. 1B and Table 1). Direct mixing and dissociation experiments showed that both the single and double mutants were able to bind MANT-dADP with similar affinities compared with the wild type. This result is in agreement with previous studies on full-length ClpB (17). Furthermore, we determined the binding affinities for the unlabeled nucleotides ADP and ATP by displacement titrations at low and high protein concentration (2 and 20 µM, respectively) (Fig. 1C and Table 1). Notably, the nucleotide binding affinities of the single mutants R322A and R323A and the double mutant R322A/R323A did not increase at higher protein concentrations as observed for the wild type, indicating that the conserved arginines are involved in coupling nucleotide binding and oligomerization. To further substantiate this hypothesis, we characterized the oligomerization behavior upon nucleotide binding for the single and double mutants using both DLS and SLS experiments (Fig. 1D and Table 1). In the presence of ATP, the wild-type protein NBD1-M oligomerizes. With increasing protein concentration, a shift toward trimeric species is observed, which correlates tightly with the increase in ATP hydrolysis rates (Fig. 1E). This, together with Hill coefficients higher than 2.5, indicates that the trimer represents the smallest hydrolysis-competent unit (Fig. 1F). The nucleotide-induced oligomerization of NBD1-M is severely impaired by the R322A and R323A mutations. In contrast to the observed trimers for the wild-type protein, the molecular masses obtained for the single and double mutants indicate only a monomer/dimer equilibrium (Fig. 1D and Table 1). Due to substantial dilution during gel filtration, the molecular masses obtained from SLS data are not as high as in the DLS measurements performed at about 10-fold higher protein concentration. Still, the nucleotide-induced shift of the elution peak is suppressed for the mutated variants, and the obtained masses are significantly lower. It can be concluded that, although the conserved arginines Arg-322 and Arg-323 are not essential for nucleotide binding competence, they mediate the coupling between nucleotide binding and oligomerization. They are key structural elements required for the nucleotide-induced oligomerization of ClpB, a prerequisite for activity.
[FIGURE 1 legend: B, stopped-flow kinetics; single exponential fits are shown as colored lines, and similar traces were obtained for the single mutants R322A and R323A; the rate constants k obs obtained from fitting the kinetic traces are plotted against the MANT-dADP concentration (right), the association rate constants k on were obtained from the slope of the linear functions, and the dissociation rate constants k off, which can be estimated from the y axis intercepts, were determined separately in dissociation experiments as described under "Experimental Procedures"; nucleotide binding parameters extracted from these data are listed in Table 1. C, fluorescence equilibrium titrations; NBD1-M was incubated with MANT-dADP and subsequently titrated with ADP or ATP (for the ATP titration, phosphoenolpyruvate and pyruvate kinase were present as an ATP-regenerating system); the volume-corrected data were fitted with the cubic equation for competing ligands (31), using K D (MANT-dADP) as an input value; the titrations shown are for NBD1-M R322A, and similar curves were obtained for all mutants at different protein concentrations. D, analytical gel filtration with SLS analysis; elution profiles of NBD1-M wild type (top) and NBD1-M R322A/R323A (bottom) in nucleotide-free buffer (red) and with 2 mM ADP (blue) or 2 mM ATP (green) in the running buffer (the ATP-containing buffer was supplemented with the regenerating system); solid line, refractive index signal; dotted line, calculated molecular mass of the eluted species; the actual molecular mass of the NBD1-M monomer is 45 kDa. E, correlation between oligomeric state and ATPase activity; the steady-state ATPase turnover (orange circles) and the molecular mass of oligomeric NBD1-M species measured by DLS (green bars) are plotted for different protein concentrations (DLS in the presence of 2 mM ATP and the regenerating system); the increase in ATPase activity correlates with the formation of NBD1-M trimers (F), which represent the smallest ATP hydrolysis-competent unit. a.u., arbitrary units.]
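The linear relation used in the stopped-flow analysis above is the standard pseudo-first-order binding result, k obs = k on·[ligand] + k off, with K D = k off/k on; as the legend notes, intercepts constrain k off poorly, which is why the dissociation rate was measured separately. A minimal sketch of such a fit, with invented placeholder numbers rather than data from this study:

import numpy as np

# Pseudo-first-order binding: k_obs = k_on * [L] + k_off.
# Concentrations and rates below are illustrative only, not measured values.
conc_uM = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # MANT-dADP concentration, µM
k_obs   = np.array([0.9, 1.5, 2.8, 5.3, 10.2])   # single-exponential rates, 1/s

k_on, k_off = np.polyfit(conc_uM, k_obs, 1)      # slope = k_on, intercept = k_off
K_D = k_off / k_on                               # equilibrium dissociation constant, µM
print(f"k_on = {k_on:.3f} /µM/s, k_off = {k_off:.3f} /s, K_D = {K_D:.2f} µM")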
Covalently Linked ClpB Dimers Facilitate Mechanistic Studies on Allosteric Regulation-With the intention to generate a well defined and fixed ClpB subunit interface, we designed covalently linked NBD1-M dimers with an intermolecular disulfide cross-link (Fig. 2A). Two pairs of cysteines were tested, namely P221C/M394C and Q184C/A390C. The positions were chosen on the basis of an available planar, hexameric ClpB model (30) using the program SSBOND, which suggests positions for cysteine pairs according to the optimal distances and dihedral angles for disulfide bonds (33). Whereas the Q184C/A390C dimer was severely impaired in ATPase activity (100-fold lower than the unlinked wild type, data not shown), the P221C/M394C dimer showed ATP hydrolysis rates comparable with the wild type. Furthermore, we tested whether the intermolecular disulfide bond between P221C and M394C affects the chaperone activity of ClpB. To this end, we generated an identically cross-linked variant of the full-length protein, which was active in an α-glucosidase disaggregation assay with about 70% activity of the unlinked wild-type protein (Fig. 2B). Thus, the P221C/M394C disulfide cross-link was considered "minimally invasive" and was used for all further experiments.
[Table 1 footnotes: a, characterized previously (25), parameters given here for comparison; b, from stopped flow experiments; c, from fluorescence equilibrium titrations; d, measurements were performed in the absence and presence of nucleotide, the molecular mass of monomeric NBD1-M being 45 kDa; e, ATP-containing solutions were supplemented with phosphoenolpyruvate and pyruvate kinase as an ATP-regenerating system.]
First, we determined the nucleotide binding parameters of the cross-linked NBD1-M dimer. Stopped flow experiments showed a biphasic fluorescence signal change upon direct mixing with MANT-dADP (Fig. 3A). Both kinetic phases were nucleotide concentration-dependent, indicating the presence of an asymmetric dimer with two unequal nucleotide binding sites. In order to assign the observed phases, we utilized NBD1-M dimers carrying the Walker A mutation K204Q in one of the two subunits (Fig. 2C). These dimers are deficient in nucleotide binding either in the cross-linked or the free active site. Indeed, MANT-dADP binding was monophasic for both variants but with significantly different kinetic parameters (k on , k off , and K D ), which allowed an unambiguous assignment (Table 2). Notably, K D (MANT-dADP) is significantly lower compared with the unlinked NBD1-M wild type for both active sites of the cross-linked dimer, mainly due to a decrease in the dissociation rate constant k off . This finding again confirms the strong coupling between nucleotide binding and oligomerization in ClpB NBD1. Both association (k on ) and dissociation (k off ) of nucleotide are slower for the cross-linked active site than for the free one. We furthermore determined the binding affinities for unlabeled ADP and ATP by fluorescence displacement titrations and observed 10-fold stronger ADP and 18-fold stronger ATP binding compared with the unlinked NBD1-M ( Fig. 3B and Table 2).
Next, we characterized the steady-state ATPase activity of the cross-linked NBD1-M dimer (Fig. 4 (A and B) and Table 3). In agreement with the observed improvement in ATP binding, the dimer showed a significantly lower K m and a less pronounced dependence of the activity on protein concentration compared with the unlinked NBD1-M. However, the maximum k cat observed for high protein concentrations of unlinked wild-type protein was not reached by the cross-linked dimer. It can be speculated that this is due to ADP release becoming rate-limiting, considering that the measured k off (MANT-dADP) is lower than 0.1 s⁻¹ for the cross-linked active site. We next generated NBD1-M dimers carrying the Walker B mutation E271Q in one of the two subunits (Fig. 2D). Using these dimers, which are fully nucleotide binding-competent but deficient in ATP hydrolysis either in the cross-linked or the free active site, we showed that indeed both of the two different active sites contribute to the overall activity of the dimer (Fig. 4 (A and B) and Table 3). When having the Walker B mutation in the free active site, the remaining activity originating from the cross-linked site is 25% of the cross-linked wild type, whereas when mutating the cross-linked active site, the free active site is even 30% more active than the wild-type dimer carrying two intact subunits. This somewhat unexpected result emphasizes the importance of allosteric regulation. It seems that a tightly bound ATP molecule in the cross-linked active site activates the neighboring, free active site. To further study this phenomenon, we next measured the ATPase activity of the NBD1-M dimers carrying the Walker A mutation K204Q in one of the two subunits (Fig. 2C) that were already used to assign the nucleotide binding phases. Notably, both variants were inactive (Fig. 4, A and B). Independent of whether the cross-linked or the free active site was mutated, it was sufficient to provide a nucleotide binding-deficient nearest neighbor to totally abolish the activity of the intact active site.
Furthermore, it was important to characterize the nucleotide-induced oligomerization of the cross-linked NBD1-M dimer variants. We performed analytical gel filtration runs with SLS analysis (Fig. 3C). The cross-linked wild-type dimer as well as both variants with the Walker B mutation showed pronounced ATP-induced oligomerization. A clear shift toward the size of tetramers was observed, indicating that the cross-linked dimers also associate with each other to form higher oligomers in the presence of ATP. In contrast, for both dimer variants carrying the Walker A mutation no nucleotide-induced oligomerization was observed, which is especially interesting for the variant with the cross-linked site being mutated. Here, the absence of nucleotide must be somehow communicated such that the intact and nucleotide binding-competent neighboring subunit cannot associate with another molecule. In summary, the cross-linked NBD1-M dimers allowed a dissection of allosteric effects in ClpB by site-specifically introducing Walker A and B mutations.
[FIGURE 3 legend: A, the rate constants k obs plotted against the MANT-dADP concentration result in two linear functions (circles and triangles); to assign the respective association rate constants k on and dissociation rate constants k off to the two different binding sites, cross-linked NBD1-M dimers with Walker A mutations (see Fig. 2C) were used, which showed only monophasic kinetic traces upon direct mixing with MANT-dADP; nucleotide binding parameters extracted from these data are listed in Table 2. B, fluorescence equilibrium titrations; cross-linked NBD1-M dimer variants were incubated with MANT-dADP and subsequently titrated with ADP or ATP (wild type, circles; R322A, squares; R323A, triangles); for the ATP titrations, phosphoenolpyruvate and pyruvate kinase were present as an ATP-regenerating system; the volume-corrected data were fitted with the cubic equation for competing ligands (31), using K D (MANT-dADP) as an input value; nucleotide binding parameters are listed in Table 2. C, analytical gel filtration with SLS analysis; elution profiles of cross-linked NBD1-M dimers in nucleotide-free buffer (red) and with 2 mM ADP (blue) or 2 mM ATP (green) in the running buffer; top, nucleotide-induced association of cross-linked, dimeric species was observed for wild type, Walker B mutants, and mutants of the conserved arginines (one representative data set shown); bottom, mutants carrying a Walker A mutation do not form higher oligomers (one representative data set shown); the different cross-linked dimer variants are illustrated schematically as introduced in Fig. 2; the ATP-containing buffer was supplemented with the regenerating system; solid line, refractive index signal; dotted line, calculated molecular mass of the eluted species; the actual molecular mass of the NBD1-M monomer is 45 kDa. a.u., arbitrary units.]
Allosteric Regulation between ClpB Subunits Is Communicated by Conserved Arginines-As a next step, it would be desirable to understand the molecular basis of how the observed allosteric regulation is communicated throughout the ClpB oligomer and whether the conserved arginines, Arg-322 and Arg-323, located in the subunit interface, are involved in this task. To this end, we studied covalently linked NBD1-M dimer variants carrying either the R322A or R323A mutation in the cross-linked interface (Fig. 2E). This approach was chosen to distinguish a truly regulatory function of the arginines from effects associated with oligomeric stability, the latter of which were assumed to be blanked by covalently fixing the subunit interface. Indeed, SLS measurements confirmed that nucleotide-induced oligomerization of the cross-linked dimers was not impaired by the arginine to alanine mutations (Fig. 3C). In line with this result, both cross-linked dimer variants (R322A and R323A) were nucleotide binding-competent. The biphasic kinetic traces observed upon mixing with the fluorescently labeled MANT-dADP indicated the presence of two intact nucleotide binding sites per dimer. However, slightly higher K D values for MANT-dADP were obtained for the mutants compared with the cross-linked wild-type dimer (Fig. 3A and Table 2). In fluorescence displacement titrations, ADP binding was essentially not affected by the arginine to alanine mutations in the cross-linked interface, whereas ATP binding was significantly impaired (Fig. 3B and Table 2). This result suggests that Arg-322 and Arg-323 interact primarily with the γ-phosphate group of the ATP molecule bound to the neighboring subunit, presumably sensing the nucleotide binding state that way. Next, we measured the steady-state ATPase activity of the NBD1-M dimers carrying the R322A or R323A mutation in the cross-linked interface (Fig. 4 (C and D) and Table 3). For both variants, the obtained K m values were significantly increased compared with the cross-linked wild-type dimer, indicating again that the conserved arginines contribute essential interactions that generate cooperativity. Notably, the R323A mutation caused a more severe loss in activity than the R322A mutation (90 and 35% decreased k cat compared with the wild-type dimer, respectively), which might indicate that Arg-322 is mainly involved in stabilizing the subunit interface, which was provided here by the disulfide cross-link. We next asked whether the conserved arginines indeed regulate the activity of a ClpB oligomer by allosteric communication, thus affecting a neighboring catalytic site that they do not directly interact with. To this end, we combined the R322A and R323A mutation with the Walker B mutation E271Q in the cross-linked interface, which allowed studying the direct effect of the arginine mutation on the activity of the catalytic site located in the neighboring, free interface (Fig. 2F). Although fully competent in ATP-induced oligomerization, these dimers showed a severely impaired steady-state ATPase activity compared with the dimer carrying only the Walker B mutation (Fig. 4 (E and F) and Table 3). The cooperativity seemed to be totally blocked (Hill coefficient n = 1), and very high K m, low k cat, and a strong dependence on protein concentration were observed. These results clearly confirm that the conserved arginines Arg-322 and Arg-323 regulate the highly cooperative ATPase activity of ClpB by allosterically communicating between neighboring subunits, which exceeds the simple mediation of ATP-induced oligomerization.
[Table 2 footnote: the terms "free" and "cross-linked" refer to the two different nucleotide binding sites, one located in the free (outside) interface and the other in the cross-linked (inside) interface.]
[FIGURE 4 legend: the corresponding kinetic parameters K m, k cat, and Hill coefficient n are listed in Table 3; the key at the bottom uses the schematic representations of the cross-linked dimer variants introduced in Fig. 2: gray circles, unlinked NBD1-M wild type; green circles, cross-linked NBD1-M dimer wild type; red circles, Walker A mutation K204Q in the cross-linked interface; red triangles, Walker A mutation K204Q in the free interface; orange circles, Walker B mutation E271Q in the cross-linked interface; orange triangles, Walker B mutation E271Q in the free interface; blue circles, R322A mutation in the cross-linked interface; pink circles, R323A mutation in the cross-linked interface; blue squares, R322A plus Walker B mutation E271Q in the cross-linked interface; pink squares, R323A plus Walker B mutation E271Q in the cross-linked interface.]
DISCUSSION
In this study, we investigated the role of the conserved, trans-acting arginines Arg-322 and Arg-323 in allosteric regulation and intersubunit communication in the molecular disaggregation machine ClpB (Fig. 5). Using a simplified system, namely the separate N-terminal ATPase subunit NBD1-M, it was possible to study the interplay between nucleotide binding, oligomerization, and activity in a quantitative manner. We utilized a set of well defined NBD1-M dimers with intermolecular disulfide cross-links and site-specifically introduced Walker A/B mutations to draw conclusions about allosteric effects mediated by the conserved arginine pair.
First, we showed that both arginines are involved in coupling nucleotide binding to oligomerization of ClpB NBD1-M. We identified the NBD1-M trimer as the smallest ATP hydrolysis-competent unit, which is formed upon ATP binding, but only if both arginines are present. The finding that trimer formation is essential is in agreement with previously performed mixing experiments showing that the random incorporation of two mutant ClpB subunits into the hexamer is sufficient to abolish activity (34). A previous study using full-length ClpB came to the conclusion that the trans-acting arginines are not involved in nucleotide binding (17). However, using the cross-linked dimers, we showed that the arginines are indeed crucial for strong and cooperative ATP binding.
The next goal was to obtain a better understanding of allosteric regulation mechanisms implemented in ClpB NBD1-M. A comprehensive mechanistic interpretation of the nucleotide binding, oligomerization, and activity data that we obtained for the different cross-linked dimer variants would greatly benefit from additional structural information about the ClpB subunit interface. The available crystal structure of ClpB from T. thermophilus (Protein Data Bank code 1QVR) exhibits a helical arrangement of subunits, thereby displaying a shifted subunit interface (7). The two conserved arginines Arg-322 and Arg-323 are located 5 and 11 Å away from the γ-phosphate of ATP bound to the neighboring subunit, respectively (see Fig. 2A), which may not reflect the active conformation. Several cryo-EM studies on ClpB and its yeast homolog Hsp104 generated models of a planar hexameric ring, which is believed to be the active form (7, 35-37). However, structural details, such as the conformation of the conserved arginines, could not be resolved. When using the hexameric crystal structure of the highly homologous AAA+ protein ClpC together with its adaptor protein MecA (Protein Data Bank code 3PXG) as a template for a planar ClpB model, both conserved arginines are at a 4-6-Å distance from a modeled ATP molecule (38). Still, at this point, there is no reliable knowledge about the exact positioning of the conserved arginine pair in the ClpB subunit interface. Thus, we put great emphasis on control experiments using different Walker A/B mutants to verify our results.
[Table 3 legend: see Fig. 4, A, C, and E, for the corresponding experimental data; the ATPase activity was calculated per monomer, and the Hill equation was used for data fitting (Equation 1). Footnotes: a, the characterization of ClpB NBD1-M wild type has been published previously (25), and the parameters are given here for comparison; b, the data showed no sigmoidal shape and were therefore fitted using the Michaelis-Menten equation, which is represented by a simple hyperbolic function (Hill coefficient n = 1).]
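Equation 1 is not reproduced in this excerpt; the Hill form it refers to is conventionally written v = k cat·[ATP]^n / (K m^n + [ATP]^n). A minimal fitting sketch, with invented example values rather than this study's measurements:

import numpy as np
from scipy.optimize import curve_fit

def hill(s, kcat, km, n):
    # Hill equation: v = kcat * s^n / (km^n + s^n)
    return kcat * s**n / (km**n + s**n)

# Illustrative ATP titration (mM) and per-monomer turnover (1/s); not real data.
atp = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
v   = np.array([0.02, 0.08, 0.30, 1.10, 1.80, 2.10, 2.20])

(kcat, km, n), _ = curve_fit(hill, atp, v, p0=(2.0, 0.5, 2.0))
print(f"kcat = {kcat:.2f} /s, Km = {km:.2f} mM, Hill n = {n:.1f}")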
FIGURE 5. Allosteric regulation and intersubunit communication in ClpB NBD1. Putative pathways of allosteric regulation (orange arrows) are illustrated schematically. The N-terminal ATPase subunit of ClpB, NBD1, is shown as a gray sphere. ATP is depicted in green, and the trans-acting arginines Arg-322 and Arg-323 are shown in blue and pink, respectively. The active site is located at the interface between two subunits in the oligomeric ClpB complex. The conserved arginines interact with the γ-phosphate of the ATP molecule bound to the neighboring subunit, which presumably is the structural basis for intersubunit communication. The nucleotide state of the adjacent active site is sensed and communicated throughout the AAA+ domain, which allosterically regulates the activity of the whole oligomeric complex. This process may involve other highly conserved residues that are part of nucleotide sensor motifs in cis and induce conformational changes throughout the ATPase cycle. | 2018-04-03T04:33:04.241Z | 2014-09-24T00:00:00.000 | {
"year": 2014,
"sha1": "5f559e0147223464e181d8df1156d1d9871c3201",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/289/47/32965.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ed614e0e67c19695cf0a122a37d6d50447d8809",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
237997725 | pes2o/s2orc | v3-fos-license | Graph Theory for Primary School Students with High Skills in Mathematics
Graph theory is a powerful representation and problem-solving tool, but it is not included in the current school curriculum. In this study we implement a didactic proposal based on graph theory, aimed at providing students with useful and motivating tools for problem solving. The participants, who were highly skilled in mathematics, worked on map coloring, Eulerian cycles, star polygons and other related topics. The program included six sessions in a workshop format and four creative sessions where participants invented their own mathematical challenges. Throughout the experience they applied a wide range of strategies to solve problems, such as looking for a pattern, counting strategies or drawing the associated graph, among others. In addition, they created as challenges the same types of problems posed in the workshops. We conclude that graph theory successfully increases the participants' motivation towards mathematics and supports the emergence and reinforcement of problem-solving strategies.
Introduction
In this paper we show the results of the implementation of the didactic proposal developed in [1] with minor changes. We state main previous results and briefly sketch the structure of the proposal. All details can be found in [1]. In this proposal we introduce graph theory [2] for gifted primary school students as a motivational tool of representation and problem solving. This could be a first step to finally include graph theory at school levels.
A graph is a set of vertices (points) with some of them connected by edges (lines). Graph theory is not included in the current school curriculum in Spain, neither in primary nor in secondary education [3,4]. In other countries, discrete mathematics has been proposed for inclusion at school levels [5] and is highly recommended for students aged between 5 and 18 years old [6,7].
Several authors suggest the inclusion of graph theory at school and highlight its importance as a tool to model real-life situations, learn mathematical styles of thinking and solve problems [8][9][10]. In this line, some experimental studies have been carried out, especially with secondary education students, but there are hardly any experiences with primary education or gifted students. We mention here previous work with primary education students.
Starting from map coloring [8], an experience with students aged between 5 and 14 years old is presented in [11]. They propose that map coloring activities for this age range should be ordered by level of difficulty but neither their methodology nor results are clearly presented. They conclude students can translate a map into a graph from 7 years old, and when over 9 years old they can construct the graph corresponding to a map and color it.
In Italy, some teaching experiments have been performed. About fifty 8-9-year-old students participated in 12 sessions, one weekly meeting over 3 months, involving Eulerian and semi-Eulerian graphs, Hamiltonian graphs, planar graphs and graph coloring [12]. Using a laboratory methodology, the authors propose activities such as problems to be solved with the help of paper and pencil, and online games such as YED Graph Editor. They claim that most students were able to solve the activities and increased their logical skills and active participation in class; moreover, after this experience, they understood mathematics as a game. Following this line of work, in [13] they present an approach to mathematics and its connections with real life by means of Eulerian graphs. Contents treated with primary education students include the definition of graph, planar graph, vertex coloring, Eulerian graph, and Hamiltonian graph. A planar graph is a graph that can be drawn with no crossing edges. A Eulerian graph is a graph having a Eulerian cycle, that is, a closed path (starting and finishing at the same vertex) passing only once through every edge, although it is allowed to pass more than once through the vertices. A Hamiltonian graph is a graph having a Hamiltonian cycle, that is, a closed path passing only once through every vertex. Figure 1 shows several examples of these types of graphs. At the end of the experience, students were able to recognize the main parts of a graph, draw and represent a graph, and solve problems using graphs, and they found graphs fun and motivating. A learning trajectory to teach Eulerian paths is developed in [14]. An outdoor didactical approach is used to give the participants a realistic point of view of the mathematics involved in the classical Köningsberg bridges problem [8], which is used to motivate and introduce graphs. They use a combination of maps and paper and pencil activities to lead the students to a better understanding of the existence or not of a Eulerian path (a path passing only once through every edge) inside a graph. Along the experience they noticed an improvement in the students' collaborative problem-solving skills.
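The graph classes defined above translate into a checkable criterion: by Euler's theorem, a connected graph has a Eulerian cycle exactly when every vertex has even degree. A minimal sketch of that test on a small, made-up adjacency list:

from collections import deque

def has_eulerian_cycle(adj):
    # adj: dict mapping each vertex to its list of neighbors (undirected).
    # Euler's theorem: connected and all degrees even <=> Eulerian cycle exists.
    if any(len(neighbors) % 2 for neighbors in adj.values()):
        return False
    start = next(iter(adj))                 # connectivity check by BFS
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)

# A 4-cycle (a square) is Eulerian; adding one diagonal would create two odd vertices.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(has_eulerian_cycle(square))  # True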
Graph theory [2] can be introduced in different ways, such as map coloring or Eulerian paths, among others. Our proposal aligns with this classical approach: it begins with map coloring, posed as a challenge to solve, and carries on with Eulerian cycles; we further relate graphs to the classical handshake problem [15] and to other applications of graphs.
The original proposal developed in [1] is structured in four stages according to the teaching-learning methodology of Dienes [16]. The first activity is presented as a challenge (Adaptation), followed by some activities to practice the new concept with concrete examples (Structuring); then the student should be able to abstract the underlying mathematical concepts (Abstraction), and finally he/she reasons about what has been learned, discovering connections to other mathematical concepts (Reasoning). Table 1 summarizes activities and sessions proposed at each stage.
This proposal aims to work individually with each participant, using a methodology based on the Pólya problem-solving model [17], posing questions that guide the participant in solving the problem. All the detailed questions that can be used throughout the experience can be found in [1]. This proposal [1] is designed for students 8 years and older.
Here, we briefly describe the proposed activities. We started with map coloring, asking the participant if he/she could color the map of Spain with as few colors as possible, such that neighboring regions do not have the same color. This motivates the appearance of the graph associated to the map; we suggested setting a point in every region and joining points in neighboring regions with a line. In the second session, we presented the Köningsberg bridges problem and suggested the use of graphs after a few unsuccessful attempts. In the third session, we presented Eulerian cycles by giving the participant some examples to check and then asking him/her to create more Eulerian and non-Eulerian cycles. In the fourth session, we posed the handshake problem (see Appendix A), and in the last session we worked on drawing star polygons and related this topic to the previous one by counting the edges of the star polygons. Two gifted students of 8 and 9 years old participated in the exploratory phase of the study. They correctly carried out the proposed tasks, showing great interest in them, but had some difficulties with some parts. This led us to make some changes to the original didactic proposal. The results of this exploratory study were presented in [18].
Our main goal is to encourage the use of graphs as a tool for representation and problem solving in primary education classrooms from a playful perspective. As a first step, we implemented this proposal with primary education students having high skills in mathematics, as part of an enrichment program. This experience provided students with suitable mathematical contents to improve the development of their problem-solving skills.
Description of the Settings
The experience was developed inside an enrichment program during the 2018/2019 academic year. Weekly 45-min sessions in workshop format were held with 7 participants, within school hours, outside the normal classroom. The program included graph theory and classical problems of combinatorics, logic, probability and geometry, using a wide range of manipulative resources to motivate participants. Results obtained with manipulatives can be found in [19].
The original proposal [1] was intended to be developed individually with each participant, so we adapted the working situation to make it suitable for 7 participants working at the same time. When a question is posed, participants start attacking the problem individually, and after a while they work in collaborative groups. We distributed participants into 3 groups using age criteria. Participants with the same age (or very close) were assigned to the same group. The youngest participants worked mainly as one group (group G1). Workshops alternated with sessions where participants invented their own mathematical challenges. Working in groups, they had to propose a challenge and a suitable solution two weeks later, uploading the problem and its solution to a web page available to all students at their school.
Our proposal is structured following the teaching stages of the Dienes teaching-learning methodology [16]:
• Initial challenge (Adaptation): Activity posed as a challenge aimed to create curiosity in the students so that they want to learn about the subject.
• Practice of the new concept (Structuring): We use graphs to represent routes or pathways under certain conditions.
• Abstraction (Abstraction): Graphs are used to solve a problem of greater difficulty than previous ones, where more abstract mathematical processes appear, in this case induction and generalization.
• Closing activities (Reasoning): The objective is to relate the graphs that were studied with other mathematical concepts.
The methodology employed in every workshop is inspired by the Pólya problem-solving model [17]. We pose a question as a challenge and let the participants try to solve it freely; then we go on by giving some hints, and we continue asking them new questions. We avoid theoretical explanations and formal definitions; the students build the concepts involved and their own knowledge in a guided learning process.
The sessions and topics treated are summarized in Table 2, together with activities proposed to participants. We have modified the order of some activities with respect to the original proposal explained in [1] (suitable for students aged 8 and over) to make it accessible for younger participants, simplifying the structuring stage and extending the reasoning phase. We decided to vary the order of the activities according to their degree of difficulty, moving the Köningsberg bridges problem to the last stage. We also lowered the degree of difficulty of some activities. A description of activities proposed and basic definitions can be found in Appendix A.
Description of the Participants
The participants belong to a public school located in Toledo (Castilla-La Mancha, Spain) and were selected by faculty because of their high academic performance in mathematics. Participants do not have a formal diagnosis of giftedness, but both their teachers and their families agree that they have great creativity, learn faster than their classmates, like to investigate tasks that interest them, and go beyond what is proposed in the classroom. Therefore, they show characteristics typical of gifted people [20,21]. Table 3 shows data for the participants. Following age criteria, the participants were organized into 3 groups (Table 3).
Table 3. Data for the participants (participant: age, gender, group): A1, A2: 6, female, G1; A3: 9, male, G2; A4, A5: 10, female, G2; A6, A7: 11, female, G3.
Results
In the following sections, we describe the results obtained during the experience.
Map Coloring
The first session was devoted to map coloring. Figures 2 and 3 show the first attempts at coloring the map of Spain [22] with as few colors as possible, such that neighboring regions do not have the same color. Participants A3 and A6 use five colors, and participant A4 uses four colors but gives two neighboring regions the same color. Students A3 and A4 make several efforts to complete the map with four colors. The other participants color the map with four colors, starting from left to right, from top to bottom, or using the same color and jumping from one region to another.
Then, they are asked to draw a continuous line joining all regions such that regions with the same color are not directly joined. They use the previous map and try to join the points without lifting the pencil from the paper. Participants A1, A4 and A6 do not get a cycle, as seen in Figure 4; participant A4 even forgets some regions. However, all participants understood the problem: they do not draw a cycle because the first and last regions of the path have the same color (pink in A1's drawing, red in A4's drawing, blue in A6's drawing). The other participants draw a Eulerian cycle, as seen in Figure 5.
We can observe several strategies to draw the cycle. Some participants labeled the vertices (A3, A4, A5 and A7); some go from left to right, moving up and down; others go from the exterior regions to the interior regions (A4 and A5). The cycle made by participant A2 is the only one that follows an ordered pattern with no crossing edges (a planar graph). All participants looked for patterns as a problem-solving strategy.
Star Polygons
Using a template with regular polygons of 4, 5, 9, 17 and 37 edges, we ask participants to draw star polygons by jumping a certain number of vertices without lifting the pencil from the paper. They choose jumps randomly, even larger than the number of vertices of the polygon. Figure 6 shows several examples. Participants A1, A2 and A3 only work with the pentagon and the enneagon. All participants draw the star polygon 5/2 inside the pentagon, but only A1, A2, A3 and A7 find 9/2 and 9/4 inside the enneagon. In the case of 17 and 37 edges, participants A4, A5, A6 and A7 obtain some star polygons, the findings of A6 and A7 being more elaborate, as shown in Figure 7.
Now we ask the participants (except A1 and A2) why they think we get the same star polygons with different jump numbers. They try several wrong arguments: because the jump is a prime number, because the jump is a multiple of a certain number, because some jumps are even numbers and others are odd, etc. Only A3 reaches the correct answer, arguing over the enneagon and concluding that a jump of 2 vertices gives the same star polygon as a jump of 7 vertices because 7 + 2 = 9 (he repeats the same argument for 4 + 5 = 9 and 3 + 6 = 9).
Participants used the trial and error method as a problem-solving strategy; however, they fail to generalize their conclusions. Only one participant, A3, found the correct answer in the particular case of the enneagon, although he does not verify this result with the other regular polygons.
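A3's arithmetic observation is the general rule: on a regular n-gon, jumps of k and n - k trace the same star (one is the other traversed backwards), and the pencil returns to the start after visiting all n vertices in one stroke exactly when gcd(n, k) = 1. A short sketch that enumerates the distinct star polygons {n/k} for the polygons used in the session:

from math import gcd

def star_jumps(n):
    # Distinct star polygons {n/k}: it suffices to take 2 <= k < n/2
    # (k and n - k give the same star); one-stroke drawing iff gcd(n, k) == 1.
    return [k for k in range(2, (n + 1) // 2) if gcd(n, k) == 1]

for n in (4, 5, 9, 17, 37):
    print(f"n = {n}: jumps {star_jumps(n)}")
# n = 9 yields jumps [2, 4], i.e., the stars 9/2 and 9/4 that the participants
# found; a jump of 7 redraws 9/2 because 7 + 2 = 9, which is exactly A3's argument.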
Give Me Five
We reformulate the handshake problem, giving the students a list of questions to solve (see Appendix A), including how many high fives are exchanged when 2, 3, 4, 5, 9, 17 and 27 people meet. In the last question, they were asked to explain the process followed to get the results.
All participants use a drawing to support a counting strategy, but not all of them discover a correct counting pattern. Figure 8 shows the drawings made by the participants that reach the best performance.
As shown in Figure 8, participants A6 and A1 use graphs to count the number of high fives, counting the edges between points in the cases where the number of people is small. When they realize that every person greets the number of people minus one (participant A1 even writes it down), they employ an inductive counting strategy. Participant A1 needs some help to perform the computations and finally uses a calculator.
Participant A5 uses induction but finds a wrong pattern, adding all the numbers from the given number of people down to 1. The other participants give some correct answers, but they do not reach the use of induction. Figure 9 shows their performance, from highest (Figure 9a) to lowest.
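The counting pattern behind the activity is the number of edges of the complete graph K_n: each of n people gives n - 1 high fives and every high five is shared by two people, so the total is n(n - 1)/2. A quick sketch checking the closed form against brute-force enumeration for the group sizes used in the session:

from itertools import combinations

def high_fives(n):
    # Number of edges of the complete graph K_n.
    return n * (n - 1) // 2

for n in (2, 3, 4, 5, 9, 17, 27):
    assert high_fives(n) == len(list(combinations(range(n), 2)))
    print(f"{n} people -> {high_fives(n)} high fives")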
Path Tracing/Eulerian Cycles
We explain to them the meaning of a Eulerian cycle (see Appendix A). Next, we ask them to draw, on a set of six vertices, a graph with a Eulerian cycle, stating the number of edges leaving or arriving at each vertex. Figure 10 summarizes the answers given by the participants. They surround the starting vertex with a circle, except participant A7.
Afterward, we gather their work in common, and every participant shows and explains his/her graphs to the others. We then encourage them to create more graphs with Eulerian cycles and other graphs with no Eulerian cycles. We show some of the created graphs in Figure 10.
We give the participants a template with nine points (see Appendix A) and ask them to join all the points using only four straight lines, without lifting the pencil from the paper and passing only once through each point. All participants solve the puzzle correctly, except participants A1 and A2, who do it with the help of the researcher.
Participants use the trial and error method again, together with the use of symmetry. In this session and the following ones, participants A1 and A2 solve all the activities with the help of the researcher; therefore, we do not include the rest of their performance.
Seven Bridges of Köningsberg
After telling the story of the Seven Bridges of Köningsberg, we give the participants an image [23] with a representation of the city and pose some questions (see Appendix A). Figure 11 shows the participants' attempts to find a closed walking path through the four areas of the city, crossing each bridge only once (a Eulerian cycle).
Only participants A7 and A6 set a dot in every region and draw the graph associated to the city map. The other participants forget to connect some points, or they focus directly on the bridges and do not consider points to mark each region. However, all participants affirm that it is not possible to cross the city in that way. They apply the associated graph to the problem, and they use the guess-and-check method and direct reasoning as problem-solving strategies.
When we ask if they observe any differences between their chosen first/last vertex and the other vertices, all of them say yes. We ask them to count the number of edges that leave or reach each vertex (the degree of a vertex) and what the number of edges passing through each vertex should be for such a path to be possible; that is, how many times it would be necessary to cross every bridge to complete a closed path across all the bridges (each at least once). Figure 12 summarizes the answers given by the students, who drew the associated graph indicating the paths. All of them conclude that the number of edges must be an even number.
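The students' conclusion is Euler's criterion in action. Modeling the four land areas as vertices and the seven bridges as parallel edges of a multigraph, every vertex of the Köningsberg graph has odd degree (5, 3, 3 and 3), so no closed walk can cross each bridge exactly once. A minimal sketch of the degree count (the vertex labels are chosen here for illustration):

from collections import Counter

# Seven bridges as multigraph edges; N and S are the river banks,
# A and B the two islands (labels are illustrative).
bridges = [("N", "A"), ("N", "A"), ("S", "A"), ("S", "A"),
           ("N", "B"), ("S", "B"), ("A", "B")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

print(dict(degree))                              # {'N': 3, 'A': 5, 'S': 3, 'B': 3}
print(all(d % 2 == 0 for d in degree.values()))  # False: no Eulerian cycle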
Chess Routes
We motivate the activity by showing a chess set, reminding the rules for moving each chess piece. Then, we let the participants practice the different movements on the chessboard. Later, we give a drawn chessboard [24] to the participants and ask if the knight can go through all the squares, passing only once through each one. That is, we are asking the participants to find a Hamiltonian graph. Answers given by participants are collected in Figure 13 and ordered by achievement; none of them manages to complete the tour.
Chess Routes
We motivate the activity by showing a chess set, reminding the rules for moving each chess piece. Then, we let the participants practice the different movements on the chessboard. Later, we give a drawn chessboard [24] to the participants and ask if the knight can go through all the squares, passing only once through each one. That is, we are asking the participants to find a Hamiltonian graph. Answers given by participants are collected in Figure 13 and ordered by achievement; none of them manages to complete the tour. In Figure 13, we can observe that the best performance is reached by participant A5. Repeating some patterns, A5 is able to pass through almost all squares. Participant A7 is also able to pass through all squares around a chosen one, but forgets to go through that one, although at the end gets a connected graph. A5 and A6 get a graph with two nonconnected components, and A3 and A4 get more than two components. All of them except A6 follow some patterns and repeat them along the chessboard. A6 makes some mistakes joining the vertices, following a wrong knight movement.
After the knight, we let them choose another chess piece and try again, and in this case, some of them are able to complete the path. Results can be seen in Figure 14, ordered by achievement. We observe that participants A5, A3 and A7 choose a pattern to complete the path along the chessboard. Participants A4 and A6, however, do not choose a pattern and therefore cannot complete the path, missing some squares or passing twice through some of them. In Figure 13, we can observe that the best performance is reached by participant A5. Repeating some patterns, A5 is able to pass through almost all squares. Participant A7 is also able to pass through all squares around a chosen one, but forgets to go through that one, although at the end gets a connected graph. A5 and A6 get a graph with two nonconnected components, and A3 and A4 get more than two components. All of them except A6 follow some patterns and repeat them along the chessboard. A6 makes some mistakes joining the vertices, following a wrong knight movement.
After the knight, we let them choose another chess piece and try again, and in this case, some of them are able to complete the path. Results can be seen in Figure 14, ordered by achievement. We observe that participants A5, A3 and A7 choose a pattern to complete the path along the chessboard. Participants A4 and A6, however, do not choose a pattern and therefore cannot complete the path, missing some squares or passing twice through some of them.
Participants show good performance looking for patterns and apply them to solve the problem. Some of them also use symmetry as a problem-solving strategy. joining the vertices, following a wrong knight movement.
After the knight, we let them choose another chess piece and try again, and in this case, some of them are able to complete the path. Results can be seen in Figure 14, ordered by achievement. We observe that participants A5, A3 and A7 choose a pattern to complete the path along the chessboard. Participants A4 and A6, however, do not choose a pattern and therefore cannot complete the path, missing some squares or passing twice through some of them. Figure 14. Chess routes. Drawing made by participant: (a) A5 using a rook; (b) A3 using a rook; (c) A7 using the queen; (d) A4 using a rook; (e) A6 using the king.
Participants show good performance in looking for patterns and applying them to solve the problem. Some of them also use symmetry as a problem-solving strategy.
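The route requested in this activity is the classical knight's tour, i.e., a Hamiltonian path in the knight's move graph. As a minimal illustration of why the pattern-based strategies observed above are effective, the following sketch implements Warnsdorff's heuristic, which always moves the knight to the reachable square with the fewest onward moves. The board size, start square and function names are our own illustrative choices, not part of the workshop materials.

```python
# Warnsdorff's heuristic for the knight's tour: a sketch, assuming an
# n x n board and a start square of our choosing.

KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knights_tour(n=8, start=(0, 0)):
    """Try to visit every square of an n x n board exactly once.

    Returns the list of visited squares, or None if the greedy
    heuristic reaches a dead end."""
    visited = {start}
    tour = [start]

    def onward_moves(square):
        # Number of unvisited squares reachable from `square`.
        r, c = square
        return sum(1 for dr, dc in KNIGHT_MOVES
                   if 0 <= r + dr < n and 0 <= c + dc < n
                   and (r + dr, c + dc) not in visited)

    while len(tour) < n * n:
        r, c = tour[-1]
        candidates = [(r + dr, c + dc) for dr, dc in KNIGHT_MOVES
                      if 0 <= r + dr < n and 0 <= c + dc < n
                      and (r + dr, c + dc) not in visited]
        if not candidates:
            return None  # dead end, as most participants experienced
        # Warnsdorff's rule: prefer the square with the fewest onward moves.
        nxt = min(candidates, key=onward_moves)
        visited.add(nxt)
        tour.append(nxt)
    return tour

tour = knights_tour()
print("complete tour" if tour else "stuck", "-", len(tour or []), "squares")
```

On the standard 8 × 8 board this heuristic usually completes a tour, which echoes the participants' intuition that repeating a local movement pattern helps cover the board.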
Mathematical Challenges
Challenges were created by participants working in groups. They start by brainstorming and then choose an idea as a challenge; they write it, solve it and finally they record a video with the question and another one with the answer. Proposals involving graph theory are collected in Table 4.
As observed in Table 4, participants create activities similar to those already solved during the experience, using analogy, simplification and variation of the original problems.
All groups pose challenges consisting of drawing star polygons; G2 and G3 do it in two of the challenges. All groups use routes and cycles or Eulerian cycles, or problems joining points, but only G1 creates a challenge similar to the handshake problem (challenge number 3).
These activities created by participants prove that they are able to apply the learned knowledge to situations similar to the ones already solved.
Discussion
In the first activity, all participants except A6 color the map with four colors. This matches the results of [11], where all participants colored correctly between three and six different maps after several attempts. To perform the activities, they start looking for a pattern to successfully color the map with four colors. To highlight the patterns followed, we ask them to draw an Eulerian cycle. All of them understand the problem and use patterns to carry out the activity.
The next two sessions offer an original approach which differs from those found in previous work [11][12][13][14]. In the second session, we use star polygons to practice Eulerian cycles. Only one participant generalizes his results, but all participants understand the notion of star polygon. In the third session, we apply graphs to solve the handshake problem. Participants develop graph-supported counting strategies and some of them successfully use induction. The results of both sessions show that graphs can be used to develop and support other thinking strategies such as generalization and induction.
The reasoning stage is mainly devoted to deepening the understanding of Eulerian paths and cycles; Hamiltonian graphs appear briefly in the last session. We finally explain the definition of an Eulerian cycle and propose some activities to the participants to distinguish Eulerian and non-Eulerian cycles before presenting the Königsberg bridges problem, which is frequently used in the literature to introduce graphs [12][13][14].
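For reference, the degree criterion behind these sessions can be checked mechanically: a connected graph admits an Eulerian cycle exactly when every vertex has even degree, and an Eulerian path when at most two vertices have odd degree. The sketch below applies this check to the multigraph of the seven Königsberg bridges; the vertex labels are our own names for the four land masses.

```python
# Degree check for Eulerian cycles/paths: a sketch applied to the
# Königsberg bridges multigraph (the graph is connected, which the
# criterion requires).

from collections import Counter

bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print("degrees:", dict(degree))                   # A:5, B:3, C:3, D:3
print("Eulerian cycle possible:", len(odd) == 0)  # False
print("Eulerian path possible:", len(odd) <= 2)   # False: all four are odd
```

Since all four vertices have odd degree, neither an Eulerian path nor an Eulerian cycle exists, which is exactly what the participants concluded.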
In [12], the author introduces the notion of a graph with this problem and guides the students to solve it using graphs as a mathematical model. Keeping the problem unsolved during some lessons motivates the curiosity of participants, but when the author explains the concept of Eulerian cycle, some participants get confused and think that all the graphs fulfill the condition.
Our approach avoids this confusion; all participants understand the notion of the degree of a vertex and draw different examples of Eulerian graphs before tackling the Königsberg bridges problem. All the participants affirm that it is impossible to find such a path, and some of them use the graph associated with the problem by themselves.
Therefore, our findings are in line with the previous results obtained by other authors, who state that graphs increased the logical abilities of the participants [12]; they were able to draw and represent graphs and to solve problems using graphs [13].
Moreover, our results show that graphs also bring out other problem-solving strategies that have not been described in previous studies.
The involvement of the participants in the challenges shows that they have enjoyed the experience, so we have fulfilled the objective of motivating them through graph theory, as was also shown in previous research [12,13].
Conclusions
As we have seen throughout the activities performed by the participants, they have applied several problem-solving strategies, such as looking for a pattern, trial and error, counting strategies, induction, using symmetry, drawing the associated graph, guess and check, and direct reasoning.
Introducing the concept of graphs and some basic results of graph theory allowed participants to graphically represent abstract situations and develop several problem-solving strategies.
They also created their own problems by analogy, using a simpler problem, a related problem or a variation of the problem. This is also evidence of learning by imitation. Participants proposed activities similar to those worked on during the workshops, simplifying them when they found them difficult to solve.
Challenge 3 also shows that participants applied graph theory to real-life problems, making mathematical connections through graphs, relating mathematics to everyday life situations.
Graphs are shown to be a useful and powerful tool for increasing participants' problem-solving skills and their motivation towards mathematics.
"year": 2021,
"sha1": "f92f6c0d662f21687119615c04c2fd5331109d98",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/9/13/1567/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed66fe1c34abb1d2163ef0a54632000d89709b58",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
Factors influencing career choice: the Romanian business and administration students' experience
The paper aims to identify and rank the factors influencing Romanian business and administration students' career choice. The main assumptions are that: (a) good university education is a critical factor influencing career choice; (b) business and administration students' career choices are influenced by extrinsic factors; (c) students' early exposure to the profession contributes to successful careers. The paper findings are based on data collected from 496 undergraduate and master programs students enrolled in business and administration university education programs at Bucharest University of Economic Studies (BUES) during the 2014/2015 academic year. BUES undergraduate and graduate students were distributed a questionnaire with 17 questions. Crosstabulation, frequency analysis and descriptive statistics were used for processing the data collected. The survey conducted had broader objectives; therefore, only results obtained after processing those questions of the survey considered relevant for the aim of this paper are presented. The findings indicate that extrinsic and interpersonal factors significantly influence the career choice of business and administration students. Results are relevant for university management: successful integration of graduates is becoming part of the quality assurance for any university. Accurate knowledge about factors influencing students' career choice is needed by university staff to design and implement tools that support students in making the "right" career choice and contribute to the sustainable insertion of graduates into the labor market. Thus, university staff have to identify the perceptions and factors influencing students' career choice so as to design and implement tools to support students' career development.
Factors influencing students' career choices
Career choice implies students' or recent graduates' decision of selecting the occupation and professional field which fit best to their individual needs (Gokuladas, V.K., 2010). Apart from the competencies and skills self-assessment and the evaluation of career alternatives, the decision made refers to the field of activity and the employer profile to work for, and depends on the individual preferences over alternative career options. According to Carpenter and Foster (1977) and Beyon et al. (1998), career choice depends on three categories of factors: extrinsic, intrinsic and interpersonal. Extrinsic factors are not inherent in the nature of the tasks or of the occupational role (Willis, S.C. et al, 2009) and may include labour market conditions (Edvardsson Stiwne, E., 2005), employer brand, salaries and income, job security (Gokuladas, 2010; Aycan et Fikret-Pasa, 2003), job availability, good remuneration and/or the prestige of the occupation/job (Carpenter et Foster, 1977; Beyon et al, 1998). Intrinsic factors focus on the job content and the opportunities it provides for further training, career and professional development. The intrinsic factors include specific job related factors, such as authority and power related to the job (Bai, 1998; Aycan et Fikret-Pasa, 2003), working conditions and professional and career advancement opportunities (Aycan et Fikret-Pasa, 2003; Gokuladas, 2010), creativity and the professional challenges incurred (Felton et al, 1994), interest for a specific job, content of the work and satisfaction (Carpenter et Foster, 1977; Beyon et al, 1998), and training and professional development opportunities (Gokuladas, V.K., 2010). Interpersonal factors may include parental, family, relatives' and/or friends' and/or professors' influence (Carpenter et Foster, 1977; Beyon et al, 1998; Gokuladas, 2010), and early exposure to the profession (Willis, S.C., 2009). Willis et al (2009) concluded that the career choice decision is the result of a combination of factors and values located in personal experience. Willis et al (2009) regrouped these factors and values under 4 categories: personal identity (gender, ethnicity), social (family, professional status, career myths), instrumental (financial security, job security, flexibility, autonomy, vocational training) and personal experience related factors. Students' ranking of factors influencing their career choice may differ across cultures (Ng et al, 1998), fields of study and industries (Gokuladas, V.K., 2010). Agricultural economics students' career choice (McGraw, K., 2012) across industries is influenced by previous professional experience, sector preference, career goals, skills and experience, and job attributes (a combination of intrinsic factors related to job responsibility and career advancement opportunities and extrinsic factors related to salaries). Indian engineering students' first career choice (Gokuladas, 2010) is mostly influenced by intrinsic factors (in particular, training, professional development and career opportunities in engineering available within the company, and working conditions), while extrinsic factors (such as company brand, job security and remuneration) were less relevant; students from rural areas are more influenced by extrinsic and interpersonal factors as compared to urban students, for whom intrinsic factors are clearly prevailing. Differences were observed across study fields in engineering: computer science and IT branch students were more influenced by intrinsic or extrinsic factors when choosing a career in the ICT field, unlike non-computer science and IT students for whom an ICT career choice is rather dependent on interpersonal factors (Gokuladas, 2010). UK pharmacy students' career choice is influenced by a combination of intrinsic and extrinsic factors (Willis et al, 2009), the most relevant motivators for their choice covering opportunities to earn a high income and favorable employment prospects, the content and nature of the pharmacy work, professional status, working conditions, and flexibility (Willis et al, 2009; Gleeson et al, 1993); available studies also identify early exposure to pharmacy work as a factor influencing the choice of a pharmacy career (Willis et al, 2009), in particular previous personal contacts with a pharmacist (Carlson and Wertheimer, 1992; Rascati, 1989), as an important interpersonal factor. According to Willis et al (2009), students are more likely to justify their career choice in terms of a rational economic model of participation in higher education, rather than in relation to the nature of pharmacy work; these findings indicate that students' option for pharmacy university studies defines their career choice for pharmacy or related fields. According to Felton et al (1994), US business students' option for a career in accounting is less influenced by intrinsic factors (such as creativity, intellectual challenge and autonomy in task completion) than their option for a non-accounting career, and rather by extrinsic factors related to long term earnings and job market opportunities. Felton et al (1994) indicate that business students opting for a non-accounting career are rather concerned with intrinsic factors and good initial earnings as an extrinsic factor. According to Greenbank (2011), students' option for university education in a specific study area depends on the aim of enhanced employability; thus, the educational choice, in particular related to the study area, determines students' preferences for future career paths (Ng et al, 2008; Kopanidis et Shaw, 2014). University learning paths and specialization shape students' career pathways (Da Silva Anana and Nique, 2010). University education in engineering, science, technology and business is rather employability oriented and equips students with practical skills and job related competencies (Goyette et al, 2006; Kopanidis et Shaw, 2014), while university education in arts, humanities and social sciences tends to foster learning for its own sake (Bennett, 2004). A survey conducted on Australian students enrolled in three main areas (business; art, design and social context; sciences and engineering technologies) indicated that students select university study programs based on their values, which are associated with various specific career pathways (Kopanidis et Shaw, 2014). Thus, prospective university students' option for particular educational paths or fields of study is the result of a matching process between their personal interests and career objectives (Kopanidis et Shaw, 2014); the option for a particular field of university studies should be considered an early predictor of students' career preferences, in particular in vocational and professional fields, such as pharmacy, medicine, accounting, health etc. (Kopanidis et Shaw, 2014; Willis et al, 2009). Based on a cost-benefit approach applied to business, education and psychology students, Wheeler (1983) concluded that individuals' perception of the reward (benefits) to costs ratio of an occupation/profession is more of a determinant of career choice than benefits or costs assessed independently. This confirms later findings that career choice is normally influenced by objective factors as well as subjective factors, which may be regrouped under the category of perceptions (Gokuladas, V.K., 2010) about various professions and careers. Students' perceptions when making their first career choice depend on: (a) the specific information they have from various sources, including family, friends, career counsellors, the media (Julien, 1999) and employers (Gokuladas, V.K., 2010), about labour market conditions (Bai, 1998), occupations and professions, and (b) the attitudes and beliefs they have developed about career opportunities during their life (Gokuladas, V.K., 2010) based on personal treatment of specific information acquired.
Paper aims and methodology
The paper aims to identify and rank the factors influencing Romanian business and administration (BA) students' career choice. The main assumptions are that: (a) good university education is a critical factor influencing career choice; (b) business and administration students' career choices are influenced by extrinsic factors; (c) students' early exposure to the profession contributes to successful careers. The paper findings are based on data collected from undergraduate and master programs students enrolled in business and administration (BA) university education programs at Bucharest University of Economic Studies (BUES) during the 2014/2015 academic year. With more than 100 years of tradition and over 300,000 alumni, BUES is one of the most prestigious universities in economics, business and administration in Romania. BUES has 12 faculties offering the opportunity to study in different languages (Romanian, English, German, French) for over 21,000 students, who may choose from over 22 Bachelor's programs, 88 Master programs, 10 research areas for PhD studies and 130 programs for postgraduate studies (Nastase, P., 2016). This is why we considered BUES the most representative university for business and administration education in Romania and decided to conduct the survey at this university. The survey conducted had broader objectives; therefore, only results obtained after processing those questions of the survey considered relevant for the aim of this paper are presented. Data were collected from 496 BA undergraduate and master students through a questionnaire with 17 questions. The questionnaire is based on the tri-dimensional model of extrinsic, intrinsic and interpersonal factors developed by Carpenter and Foster (1977) and Beyon et al.
(1998); similar studies on career choice (Willis et al, 2009; Kopanidis et Shaw, 2014) were also considered. Before the survey, a focus group (HR practitioners, academics, students and stakeholders in education) was organized to improve the questionnaire. A draft version of the questionnaire was tested during a pilot phase on 30 persons in October 2014 before the final version of the questionnaire was developed. Except for the questions related to the profile of the respondents (Q1-Q6), the questionnaire contains multiple choice questions, open ended text questions and matrix table questions (Q7-Q17). The questionnaire is structured in five parts. The first part refers to the profile of the respondent (Q1-Q4), while the second part (Q5-Q6) assesses the working experience (duration and type) of the respondents. The working experience referred to experience related to volunteering, employment in public or private organizations, internship experience or no experience. It was aimed at identifying whether, and to what extent, the personal exposure to the profession (early working experience) of BA students, regardless of the type of exposure, affects the students' perception about career opportunities (Gokuladas, 2010) and their career choice (Willis, 2009). The third part (Q7) attempts to identify and rank the main factors influencing the career choice of Romanian BA students. Students were asked (Q7) to rank on a 6-point Likert scale (1 - least important, 6 - most important) the factors influencing their career choice; the factors were depicted as follows: family/parental, faculty/university, friends, colleagues and professors, economic and social climate, other factors (to be indicated by the respondent). The fourth part (Q8-Q9) aimed to identify prospective career paths by asking BA students to identify and rank the sectors and types of employers they prefer. The fifth part (Q10-Q17) was designed to identify motivators influencing the career choice of BA students. It was developed according to the tri-dimensional model of extrinsic, intrinsic and interpersonal factors (Q10-Q12); also, future career path and job motivation factors were identified and ranked (Q13-Q17). Q10 asked students to indicate the 3 most important factors for a successful career; the items of Q10 were depicted as follows: family educational status, family financial status, good education, hard work, native skills (talent, IQ), good networking, ambition, luck, obedience, good personal branding skills, good foreign language communication skills, adaptability, others (to be indicated by the respondent). Q11 was aimed at ranking the influence that various types of working experience may have on career choice by using a 4-point Likert scale (4 - very high influence, 1 - very low influence); the working experience was depicted by the following items: internships, volunteering, entrepreneurship, paid work (various types of employment). Q12 asked students to rank the three most important factors for their career choice. Q13-Q17 contained items allowing identification and ranking of relevant factors influencing students' satisfaction at work. The respondents' profile by working experience may be considered an early predictor of the importance of interpersonal factors, namely the exposure to the profession, for students' career choices. It also reflects the BA students' perceptions that previous working experience (assuring early exposure to the profession) remains important for their employability and further career choice: internships (38.71%), employment contracts
(27.83%) and volunteering (21.98%) are the main tools to increase employability. The respondents' profile by employment status also reflects the students' perceptions: most students are not employed in the early stages of their university education programs, but they are looking for employment starting with the 2nd year of the bachelor program, in particular in private companies (public institutions are more reluctant to employ inexperienced persons and have a more rigid working hours approach), not only to finance their studies, but also for future employability reasons, as indicated above.
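To make the data processing concrete, the following sketch illustrates the kind of descriptive analysis applied to the Likert responses (mean scores, standard deviations and frequency analysis). The column names and sample values are hypothetical, since the survey's raw data are not reproduced here.

```python
# A sketch of the descriptive processing of Q7 Likert responses,
# assuming a hypothetical data layout (one row per respondent,
# one column per factor, values on the 6-point scale).

import pandas as pd

responses = pd.DataFrame({
    "family":           [6, 5, 4, 6, 3],
    "university":       [6, 6, 5, 5, 6],
    "friends":          [2, 3, 1, 2, 2],
    "professors":       [3, 2, 3, 1, 3],
    "economic_climate": [5, 4, 6, 4, 5],
})

# Descriptive statistics per factor (the shape of Table 2).
summary = responses.agg(["mean", "std"]).T.sort_values("mean", ascending=False)
print(summary)

# Frequency analysis: share of respondents ranking each factor 4 or higher.
share_important = (responses >= 4).mean().sort_values(ascending=False)
print(share_important)
```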
Main results and discussions
Q7 asked students to rank the factors influencing their career choice on a 6-point Likert scale (1 - least important, 2 - less important, 3 - relatively important, 4 - important, 5 - very important, 6 - most important). Based on the answers given by the 496 respondents to Q7, the three most important factors influencing the BA students' career choice, ranked according to the mean scores, are the university or faculty graduated, family, and the social and economic climate (Table 2). Based on the mean scores, the factors most influencing BA students' career choice are the university/faculty graduated, the economic and social climate and family, while friends' and professors' advice are considered less important. Based on the frequency analysis, family was ranked among the three most important factors influencing career choice by over 60% of the BA students, while the university/faculty graduated and the economic and social climate were ranked among the three most important factors by 76% and 62% of the respondents respectively. The results of the survey confirm that for Romanian BA students the educational route, i.e., their option for a certain university and a specific field of study, is an early predictor of their career choice and development, confirming the results of previous research by Kopanidis and Shaw (2014), Willis (2009) and Greenbank (2011). The emphasis on university/faculty as a determinant of career choice is important to analyze. On the one hand, it seems to indicate that the initial career choice is made after secondary education, when opting for a particular university and field of study. It indicates that guiding and counselling during upper secondary education becomes an important tool for shaping students' future professional paths, and it should be improved to accompany students throughout their high school life. On the other hand, since family is the third most important factor influencing career choice, it becomes crucial that all guiding and counselling activities during secondary education also target students' families for better results. The economic and social climate, reflected in labor market conditions (Edvardsson Stiwne, E., 2005), earnings and job availability (Beyon et al, 1998; Carpenter et Foster, 1977), underpins students' option for financial activities, banking and insurance; the preference for financial activities, banking and insurance is justified since most vacancies in this sector require economic studies. Also, the perception of good salaries and a good employer brand (Gokuladas, 2010) associated with the financial and banking sector (also supported by statistical data) is still widespread across the current generation of BA students despite recent financial and banking turbulences in Europe.
Similarly, the traditional perception of harsh working conditions and a negative employer brand in agriculture (Gokuladas, 2010; McGraw, K., 2012), together with students' perception of the limited use of modern management tools in agriculture, arts, and sport, implying limited job availability (Carpenter et Foster, 1977; Beyon et al, 1998) for economic university education graduates, ranked agriculture and sport management among the least preferred sectors for their future career development. The health sector and social work are considered highly specialized (Willis, 2009), requiring medical, psychology and social sciences education, which explains the students' perception of limited job availability for BA university graduates and their limited preference for career development in these sectors. Also, the low salaries existing in the health, social work and education sectors (mostly publicly owned, ranking among the worst performers in terms of salaries, with low growth prospects in both the short and long term), as a result of the economic and social climate, seem to demotivate BA students' career choices in these sectors. Both extrinsic factors related to earning opportunities (Gokuladas, 2010; Beyon et al, 1998) and intrinsic factors related to the professional challenges incurred (Felton et al, 1994), job content and career advancement opportunities (Aycan et Fikret-Pasa, 2003; Gokuladas, 2010) underpin BA students' option for a career in retail or wholesale trading. Students' option for a career in public administration is mainly the result of extrinsic factors related to stability and job security (Gokuladas, 2010; Aycan et Fikret-Pasa, 2003).
If students' preference for certain economic sectors and the university degrees and qualifications provided by BUES are considered together, the survey reveals that BA students opt for sectors where they consider they can best exploit their university degrees. Results for Q7 and Q8 confirm the earlier findings of Kopanidis and Shaw (2014), Greenbank (2011) and Ng et al (2008) that university students' option for particular educational paths or fields of study is linked to their career objectives and opportunities, indicating that Romanian BA students' career choice follows a rational model: the option for a specific university and field of study is a question of future employability and of the contribution it may add to help students gain access to careers in industries whose development is favored by the economic climate (extrinsic factors). Career choice is a very pragmatic process: BA students have chosen the university and the field of specialization which best satisfy their career objectives, thus ensuring access to professions and jobs they consider suitable and most satisfactory for them (in particular, access to jobs in sectors in which employer brand, high earnings or security are perceived as being guaranteed). Consequently, BA students' preference for a particular field of university education is an early predictor of future career paths.
Results obtained for Q7 and Q8 indicate that interpersonal factors related to family and colleagues are important factors influencing BA students' perception about economic sectors and professions and, consequently, their further career choice. The structure of the respondents (96.58% of the respondents are under 25 years, 78.63% are enrolled in undergraduate programs and 80.54% have no employment or less than 1 year of employment) explains the relative dominance of extrinsic factors in their initial career choice defined in terms of industry and professions.
The results obtained also indicate that students' choice is based on their perceptions about career opportunities in various sectors. To this end, it is important to understand how their perceptions are created. If results for Q7 are considered together with the respondents' profile, it seems that students' perceptions about career options depend on the information they acquired mostly from their families, colleagues and professors, the most important interpersonal factors for the respondents' career choice. If the respondents' employment status (the absence of employment or less than 1 year of employment for 80.54% of the respondents) is added to the previous considerations, it may explain why intrinsic factors have a residual importance for BA students: their perceptions are mostly based on information collected from external trusted sources (namely family, professors and colleagues) and not on their direct and personal experience; in other words, since they lack information collected through their personal employment experience, it is difficult for them to refer to the intrinsic factors as determinants of their career choice. This is an important aspect to take into consideration, since the limited perspective of family or professors across various professions, sectors or the evolution of the economic climate may alter the students' perception about career paths and, consequently, the quality of their career choice.
Q11 asked students to rank on a 4-point Likert scale (4 - very high influence, 1 - very low influence) the importance of exposure to professions (various working experiences) for BA students' career choice. Based on the mean scores obtained, internships and employment contracts are considered the most important factors influencing their career choice, while volunteering is less relevant for BA students (Table 3).
Results for Q11 indicate internships as the most important type of working experience influencing BA students' career choice: 48.59% of the respondents ranked it as the most important and 27.02% of the respondents ranked it as important. Internships ensure conditions for students to gain practical experience through experiential learning (Ching et al, 2013) and to put into practice the theory they learned at school, contributing (directly and indirectly) to the sustainability of the learning process and supporting students in grounding their theoretical knowledge in reality; thus, internships are effective tools helping students to make better career choices (Brooks et al, 1995). Both internships and employment contracts support students in developing job related skills which are not traditionally delivered within formal education (Garavan, T., 2001). Paid employment contracts are considered a type of working experience significantly influencing career choice for BA students (31.65% of the respondents ranked it as the most important and 27.22% of the respondents ranked it as important). Paid employment was ranked second since, as compared to internships, it has the disadvantage of limited flexibility of working hours, which may have detrimental effects on the time allocated for education. The least relevant working experience is volunteering. If the extrinsic factors and the preference for dynamic and rewarding (in terms of salaries and growth perspectives) economic sectors (Q7 and Q8) are taken into consideration, it seems that BA students prefer direct early exposure (direct working experience) to the economic sectors, professions and jobs to which they could accede according to their university degree and specialization; even if volunteering provides some working experience, it is rather associated with social work and charity, which explains why it is the least preferred by BA students.
The results for Q11 confirm BA students' rational career choice model: early exposure to professions and industries is important for better understanding the content of specific jobs and for their employability objectives. BA students consider that early exposure to professions (an interpersonal factor) could support them in making better career choices; to make it effective, exposure to the profession has to ensure students' direct contact with job content and responsibilities (intrinsic factors). The pragmatism of BA students' career choice is obvious: early exposure to professions for career choice is not just declarative; students make use of various opportunities to acquire this experience (68.15% of the respondents have various working experiences).
Conclusions and practical recommendations
The survey on Romanian BA students confirms that enrolment in university education and the preference for a specific field of study are employability driven and an early predictor of their future career pathway: university education facilitates access to better paid jobs (UNICEF, 2014). The option for a specific field of studies is decisive for students' career objectives since it is considered a prerequisite for students' access to attractive jobs in attractive sectors. BA students' career choices seem to be driven by extrinsic factors: they prefer either economic sectors with significant growth potential offering well paid jobs and good employer brands (financial activities, banking and insurance), or sectors perceived to guarantee job stability and security (as is the case for public administration). Also, interpersonal factors, in particular family and relatives and early exposure to professions, significantly influence the career choices of BA students. Family and relatives are important contributors to BA students' perceptions about career opportunities, with impact on both their initial career choice and their learning pathways. The extrinsic factors and parental influence seem to be important in the initial phases of career choice and design. It seems that BA students, following their initial option for a particular learning pathway during university education, also need to understand the job content and responsibilities, through direct exposure to the profession, before making further career choices. To this end, most students attempt to ensure and diversify their working experience during university education; all types of working experiences are considered by BA students since they help them to develop job related skills and to make better professional choices. Personal exposure to sectors and jobs through internships and other working experiences creates conditions for students to introduce intrinsic factors (related to job content) into their career choice model. BA students consider direct personal exposure to sectors and jobs, together with intrinsic factors, as critical for their career choice in later stages. For BA students, interpersonal factors are relevant for their initial career choice, in particular related to their university education and profession. University education and extrinsic factors are mostly linked with students' option for a particular industry. Direct exposure to professions through personal working experience (internships and volunteering included) both equips students with employability skills and reveals to the students the intrinsic factors (related to specific job content) to be considered for their career pathway development. Based on these findings, BUES management is considering both developing quality guiding and counselling services (coherent with high school guiding and counselling) and diversifying the working experience opportunities for students (in particular internship opportunities) to help BA students make better career choices.
Q8 asked students to rank the economic sectors they are considering for their career development. A 4-point Likert scale was used (1 - preferred, 2 - possible, 3 - unlikely, 4 - excluded). For each sector, students were asked to indicate their preference on a scale from 1 to 4. Students were given 13 sectors to consider and rank (agriculture; manufacturing; public administration; retail and wholesale trade; transportation; tourism; mass media; cultural activities; ICT; finance, banking and insurance; education and research; health and social work; sport management) and had the possibility to indicate another economic sector. Findings indicate that the most attractive sectors are finance, banking and insurance; retail and wholesale trade; and public administration. The least attractive sectors for BA students' careers were agriculture, sport management, health and social work, and transportation (Chart 1).
Chart 1. Students' preference for economic sectors (Q8)
If the importance given to the economic climate (ranked as the 2nd most important factor) and the results obtained for Q8 are combined, it seems that extrinsic factors are critical for BA students' career choice.
Table 1. Demographic profile of the respondents.
The questionnaire (printed version) was distributed, during October 2014 - May 2015, to 1200 BUES undergraduate and master students enrolled at 11 faculties: Business Administration; Public Administration and Management; Business and Tourism; Cybernetics, Statistics and Economic Informatics; Accounting; Theoretical and Applied Economics; Agricultural and Environmental Economics; Finance, Insurance, Banking and Stock Exchange; Management; Marketing; International Business and Economics. There were 496 valid questionnaires processed, which leads to a response rate of 41.33%. Crosstabulation, frequency analysis and descriptive statistics were used for processing the data collected. The respondents' profile was outlined by means of frequency analysis and is depicted in Table 1.
Table 2. Mean and standard deviation of the factors influencing career choice (Q7).
"year": 2016,
"sha1": "e5ded87c35683effa2c35b535f5c8098aa85d1b5",
"oa_license": "CCBYNC",
"oa_url": "http://ecsdev.org/ojs/index.php/ejsd/article/download/353/350",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e5ded87c35683effa2c35b535f5c8098aa85d1b5",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Economics"
]
} |
Evaluating the response of freshwater organisms to vital staining
The unintentional introduction of nonindigenous species by ballast water discharge is one of the greatest threats to biodiversity in freshwater systems. Proposed international regulations for ballast water management will require enumeration of viable plankton in ballast water. In this study we analyze the efficacy of vital stains in determining the viability of freshwater taxa. The efficacy of the vital stains fluorescein diacetate (FDA) and FDA + 5-chloromethylfluorescein diacetate (CMFDA) was evaluated with freshwater macroinvertebrates, zooplankton, and phytoplankton. Macroinvertebrates were cultured in the laboratory, while plankton were collected from Hamilton Harbour and ballast tanks of commercial vessels. Organisms were subjected to various treatments (i.e., heat, NaClO, and NaOH) to establish the efficacy of stains for viable and non-viable organisms. No significant difference in accuracy rate was found between stains, regardless of treatment, within groups of organisms, indicating that the addition of CMFDA is superfluous in the sample region studied. False positive errors, in which dead organisms fluoresced similarly to live organisms, occurred in most groups and were significantly different between test groups. False positive error rates were 2.3% for phytoplankton, 20% for ballast water zooplankton, 35% for Hamilton Harbour zooplankton and 47% for macroinvertebrates. Response to stains varied between taxonomic groups. Low (< 10%) false positive error rates were observed with phytoplankton, soft-bodied rotifers, oligochaetes, and Bosmina spp., while rates between 20% and 50% were observed for Daphnia spp., Hexagenia sp., and Chironomus riparius. False positive rates of copepods, Hyalella azteca, and Hemimysis anomala were between 70% and 100%. The FDA/FDA+CMFDA vital staining methods provide useful tools for viability analysis of freshwater phytoplankton, soft-bodied invertebrates and zooplankton, and may be used for viability analysis of the ≥ 10 μm to < 50 μm size fraction in compliance testing of ballast water. However, viability analysis of larger freshwater crustaceans with vital stains should be undertaken with caution.
Introduction
Aquatic nonindigenous species (NIS) are organisms that have established populations outside of their native range, through either intentional or unintentional means of introduction. NIS that successfully establish in a new environment may inflict negative impacts on the receiving ecosystem, and are considered by many to be the greatest threat to biodiversity in freshwater ecosystems and the second greatest cause of global extinction (Sala et al. 2000; MEA 2005; Lawler et al. 2006). The unintentional introduction of aquatic NIS through ballast water discharge from commercial vessels is a primary vector for aquatic NIS introductions in freshwater systems such as the Great Lakes and St. Lawrence River (Ricciardi and MacIsaac 2000; de Lafontaine and Costan 2002; Holeck et al. 2004; Ricciardi 2006). Viability of organisms upon discharge of ballast water may be dependent on various factors such as length of voyage, physical-chemical conditions, occurrence of mid-ocean exchange, and application of ballast water treatment systems (Olenin et al. 2000; McCollin et al. 2007; Klein et al. 2010).
In 2004, the International Maritime Organization (IMO) adopted the International Convention for the Control and Management of Ships' Ballast Water and Sediments, which, when ratified, will govern the maximum allowable concentrations of viable organisms in discharged ballast water. In relation to plankton and invertebrates, the Convention states that maximum discharge densities must be less than 10 viable organisms ≥ 50 µm per m³, and less than 10 viable organisms ≥ 10 µm to < 50 µm per mL (IMO 2004). A variety of treatment systems are being developed to meet these discharge limits, which require accurate, quantitative testing to verify their effectiveness in removing or exterminating viable organisms.
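The numeric standard quoted above reduces to a density check per size fraction. The sketch below encodes the two limits exactly as stated; the function name and the example counts and volumes are our own illustrative choices.

```python
# IMO D-2 discharge-limit check: a sketch, assuming counts of viable
# organisms and the sample volumes they were found in.

def ballast_compliant(viable_ge_50um, volume_m3, viable_10_50um, volume_ml):
    """True if both size fractions meet the limits:
    < 10 viable organisms >= 50 um per m3, and
    < 10 viable organisms >= 10 um and < 50 um per mL."""
    large_density = viable_ge_50um / volume_m3   # organisms per m3
    small_density = viable_10_50um / volume_ml   # organisms per mL
    return large_density < 10 and small_density < 10

# Example: 7 large organisms in 1 m3 and 42 small organisms in 6 mL
# both give densities of 7 per unit volume, so the sample passes.
print(ballast_compliant(7, 1.0, 42, 6.0))  # True
```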
Traditional methods of collecting plankton in the field, preserving, and enumerating total numbers of organisms rely on the assumption that all visibly intact organisms were viable at the time of collection. However, this assumption is not supported for many organisms in natural environments (Tang et al. 2006; Bickel et al. 2009) and may not prove true in ballast tanks, where environments may be harsh and transit times may be too long for many organisms to survive (McCollin et al. 2007; Klein et al. 2010). Moreover, the application of biocides to inactivate organisms shortly before sample collection results in little time for decomposition, likely resulting in overestimation of viable plankton abundance.
The need to quantify organisms in ballast water to determine compliance with the new IMO discharge standards therefore requires the development of viability assessment protocols. The use of vital stains to assess the viability of phytoplankton and zooplankton in marine and coastal environments has been well established in recent years: SYTOX Green (Veldhuis et al. 2001; Baudoux et al. 2008); fluorescein diacetate (FDA) (Brookes et al. 2000; Garvey et al. 2007; Peperzak and Brussaard 2011; Villac and Kaczmarska 2011); FDA + 5-chloromethylfluorescein diacetate (CMFDA) (Steinberg et al. 2011); neutral red (Elliott and Tang 2009; Zetzsche and Meysman 2012). In contrast, there have been few studies examining appropriate methodologies for freshwater communities (Seepersad and Crippen 1978; Bickel et al. 2009; Reavie et al. 2010).
The vital stain FDA reacts to non-specific enzymatic activity within cells, is non-toxic, and is inexpensive. CMFDA also reacts with non-specific enzymatic activity and is mildly thiol reactive, allowing the compound to remain within the cell longer, but it is more expensive. Reavie et al. (2010) tested the accuracy of FDA with phytoplankton assemblages from Lake Superior and several small lakes in northern Minnesota (USA). The vital fluorescent stain was found to be suitable for organisms in the 10-50 µm size range; however, it is unknown whether FDA would be useful for determining the viability of larger freshwater organisms potentially found in ballast water, or for freshwater phytoplankton outside of Lake Superior. In contrast, the stains FDA and CMFDA were found insufficient for viability assessment of marine phytoplankton when used individually, due to differential staining across species (Steinberg et al. 2011). Yet it was observed that the combination of stains provided complementary staining of the majority of phytoplankton (Steinberg et al. 2011).
In this study, we assess the use of FDA and FDA+CMFDA in determining viability of freshwater organisms. We evaluate the accuracy of the fluorescent vital stains in differentiating between live and dead organisms for different size classes and taxonomic groups across different treatments (kill methods). Our null hypotheses are that: i) no difference in staining efficacy will be observed between the two vital stains; ii) the varying treatments applied will not have an influence on the outcome of staining; and iii) no difference in staining accuracy will be present between different taxonomic groups of freshwater organisms. Finally, we evaluate the method in terms of its potential application for assessment of ballast water from ships transiting the Laurentian Great Lakes, and compare results with traditional methods of plankton assessment for ballast water.
Test groups and sample collection
Five sample groups were subjected to testing: macroinvertebrates, and harbour and ballast plankton (zooplankton and phytoplankton). Macroinvertebrates consisted of primarily benthic, laboratory-grown cultures including two species of oligochaetes (Lumbriculus variegatus Mueller, 1774 and Tubifex tubifex Mueller, 1774), midge larvae Chironomus riparius (Meigen, 1804), the amphipod Hyalella azteca (Saussure, 1858), and mayfly larvae Hexagenia sp. With the exception of L. variegatus, which was purchased from a commercial vendor (Merlan Scientific Ltd, Mississauga, Ontario), all cultures were reared/hatched in a laboratory at the Canada Centre for Inland Waters, Burlington, ON. Included in the macroinvertebrate group is the large planktonic invasive amphipod Hemimysis anomala (Sars, 1907), which was sampled from Lake Ontario and maintained in the laboratory for up to 2 weeks post-collection. The zooplankton (> 50 µm) and phytoplankton (10-50 µm) samples consisted of species collected from Hamilton Harbour and ballast water tanks of commercial ships transiting the Great Lakes-St. Lawrence Seaway.
Zooplankton samples were collected from Hamilton Harbour (Lake Ontario; 43°N, 79°W) on seven occasions between April and July 2012 by a single vertical net haul in 9 metres of water, using a 35 µm mesh net (50 µm diagonal). Samples were collected and concentrated into a 35 µm cod end and rinsed into a 500 mL plastic sample bottle. Phytoplankton samples were collected from Hamilton Harbour on two occasions during October 2012. Whole surface water samples were collected using a 20 L bucket and sieved through a 35 µm mesh, with the filtrate collected and further size fractionated using a vacuum filtration system fitted with a 5 µm mesh cloth (7 µm diagonal). The 5 µm mesh cloth was then rinsed into a 300 mL beaker using a small amount of filtrate water.
Ballast water samples were collected from three domestic and three foreign ships on arrival at the Port of Hamilton, ON or while in transit in the Welland Canal between September and November 2012. Domestic ships were transporting ballast water sourced from Montreal, Quebec; Tracy, Quebec; and Cote Ste-Catherine, Quebec, while all foreign ships had undertaken mid-ocean exchange in the Atlantic Ocean. Approximately 1000 L of water was filtered from a single tank of each ship for collection of zooplankton using a 35 µm mesh net. Samples were then concentrated in a 35 µm cod end and rinsed into a 1000 mL sample bottle. Phytoplankton samples were collected as whole water samples from the tank surface.
Vital stains
Laboratory trials were performed to assess the accuracy and efficacy of the vital stains FDA (Sigma-Aldrich Canada, Oakville, Ontario) and CMFDA (Invitrogen Canada, Burlington, Ontario). Cultured and ambient plankton were stained either with FDA only or with a combination of CMFDA+FDA.
A primary solution of FDA was made by combining 50 mg of solid powder FDA with 10 mL of reagent grade dimethyl-sulfoxide (DMSO; Sigma-Aldrich Canada, Oakville, Ontario), for a final concentration of 12.0 mM. An FDA working solution was then made through the addition of 10.0 µL of FDA primary solution to 1.0 mL of distilled water, for a final working solution concentration of 120 µM. Primary solutions of CMFDA were created through the addition of 10.7 µL DMSO to 0.05 mg of powdered CMFDA, resulting in a final concentration of 10 mM. Twenty-five µL aliquots of the CMFDA primary solution were added to microcentrifuge tubes containing 1 mL distilled water, resulting in working solutions with a final concentration of 250 µM. Primary solutions were stored at 4°C in the dark, while working solutions were prepared fresh with every use.
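As a cross-check on the stated concentrations, the dilution arithmetic can be reproduced with approximate molar masses (FDA ≈ 416.4 g/mol; CMFDA ≈ 464.9 g/mol). This Python sketch uses our own variable names; the nominal 120 µM and 250 µM values follow if the small aliquot volumes are neglected in the final volume.

```python
# Dilution arithmetic behind the stock and working solutions described above.
# Molar masses are approximate literature values; variable names are ours.
MW_FDA = 416.4    # g/mol, fluorescein diacetate
MW_CMFDA = 464.9  # g/mol, CMFDA

# FDA primary: 50 mg into 10 mL DMSO
fda_primary_mM = (50e-3 / MW_FDA) / 10e-3 * 1e3           # ~12.0 mM
# FDA working: 10 uL primary into 1.0 mL water
fda_working_uM = fda_primary_mM * 1e3 * 10 / (1000 + 10)  # ~119 uM (~120 uM nominal)

# CMFDA primary: 0.05 mg into 10.7 uL DMSO
cmfda_primary_mM = (0.05e-3 / MW_CMFDA) / 10.7e-6 * 1e3       # ~10.1 mM
# CMFDA working: 25 uL primary into 1 mL water
cmfda_working_uM = cmfda_primary_mM * 1e3 * 25 / (1000 + 25)  # ~245 uM (~250 uM nominal)

print(f"FDA: {fda_primary_mM:.1f} mM stock, {fda_working_uM:.0f} uM working")
print(f"CMFDA: {cmfda_primary_mM:.1f} mM stock, {cmfda_working_uM:.0f} uM working")
```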
Sample treatment
Three kill methods were applied to each of the five test groups (macroinvertebrates, Hamilton Harbour (HH) zooplankton, HH phytoplankton, ballast water (BW) zooplankton, and BW phytoplankton) in replicates of five, in order to perform staining trials on live and dead organisms, and to compare the influence of kill methods on staining results: heat, NaClO, and NaOH. In kill method #1, samples were placed in a water bath at 95°C for 15 minutes. Samples were then allowed to return to room temperature prior to staining. Kill method #2 involved 24 hour incubation with NaClO, for a final Cl− concentration of 23 ppm. Kill method #3 entailed the addition of NaOH to increase the sample pH to 12.0 for 24 hours, or for 1 hour (L. variegatus only). Samples were kept in the dark during incubation, and NaOH was subsequently neutralized by addition of HCl. NaOH kill method trials could not be performed on T. tubifex, as addition of the strong base to the sample resulted in immediate disintegration of the animal. Following all kill methods, all macroinvertebrate and zooplankton samples and all phytoplankton samples were decanted onto 35 µm and 5 µm filter mesh, respectively, and gently rinsed with filtered (< 5 µm) ambient water to eliminate residual chemicals used by the kill methods. Heat treated samples were also rinsed to maintain a consistent methodology across treatments. Untreated samples were stained and analyzed within 2 hours of collection, while treated samples were stained and analyzed immediately following treatment.
Sample staining and analysis
Each treated (5 per kill method per test group) and untreated (5 per test group) replicate for the five test groups was stained with the vital stains. Macroinvertebrates were stained in 20 mL scintillation vials along with 5 mL of culture water, at densities of 5 organisms/sample (L. variegatus and T. tubifex), 10 organisms/sample (Hexagenia sp. and C. riparius), or 20 organisms/sample (H. azteca and H. anomala). Zooplankton and phytoplankton were stained by transferring 5 mL of each sample to 20 mL scintillation vials. Macroinvertebrates, zooplankton, and phytoplankton were stained with 417 µL of the FDA working solution and, for the combination method, 100 µL of the CMFDA working solution, for final concentrations of 10 µM and 5 µM, respectively.
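The final in-vial concentrations quoted above can be verified the same way. A minimal sketch follows (our variable names; the nominal 10 µM and 5 µM figures treat the final volume as the 5 mL sample alone):

```python
# Final stain concentrations after adding 417 uL FDA working solution and
# 100 uL CMFDA working solution to a 5 mL sample.
sample_uL = 5000
fda_add_uL, fda_working_uM = 417, 120
cmfda_add_uL, cmfda_working_uM = 100, 250

total_uL = sample_uL + fda_add_uL + cmfda_add_uL
fda_final_uM = fda_working_uM * fda_add_uL / total_uL        # ~9.1 uM (~10 uM nominal)
cmfda_final_uM = cmfda_working_uM * cmfda_add_uL / total_uL  # ~4.5 uM (~5 uM nominal)
print(f"{fda_final_uM:.1f} uM FDA, {cmfda_final_uM:.1f} uM CMFDA")
```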
Stained samples were incubated in the dark at room temperature for 10 minutes. Following incubation with the stain, macroinvertebrates were loaded into well-plates, zooplankton samples were loaded onto a gridded (5 mm²) zooplankton counting chamber measuring approximately 6 cm × 3 cm, and phytoplankton samples were loaded onto gridded (1 mm²) Sedgewick-Rafter counting chambers measuring 7.6 cm × 2.5 cm, with a total cell size of 5 cm × 2 cm × 0.1 cm. Phytoplankton samples were allowed to settle for 2 minutes prior to observation. Macroinvertebrates were enumerated at 10× magnification, while zooplankton were enumerated at 40× magnification. Both macroinvertebrates and zooplankton were observed using a Nikon AZ100 compound epifluorescent microscope with blue light excitation-green bandpass emission filter cubes (FITC; excitation 465-495 nm, dichroic 505 nm, barrier 515-555 nm). Phytoplankton were enumerated at 200× magnification using a Zeiss Axiovert A1 inverted epifluorescent microscope with the same blue light excitation-green bandpass emission filter cubes. Transitions between brightfield and epifluorescence were employed for zooplankton and phytoplankton observations for simultaneous taxonomic identification and viability analysis. Phytoplankton were examined under epifluorescent light for a maximum of 20 minutes, as it was assessed during preliminary trials that prolonged exposure to light in combination with stain leakage over time resulted in increased background fluorescence and fading of stain, leading to difficulty distinguishing between fluorescing plankton and background.
A minimum of 100 individuals was enumerated for each HH zooplankton sample, while HH phytoplankton samples were enumerated to either a minimum of 500 individuals, or until the maximum observation time of twenty minutes was reached. In the case of ballast water samples, phytoplankton and zooplankton were enumerated until the appropriate minimum numbers were reached, or the entire sample was analyzed.
Preliminary trials indicated that even a weak fluorescence signal may indicate a live organism; therefore, any detectable signal observed was considered a positive result and that individual was counted as 'live'. Individuals in both untreated and treated samples were analyzed for movement and fluorescence simultaneously to determine error rates for each stain. As all organisms were considered dead in treated samples, any organism emitting a fluorescence signal was considered a false positive. Organisms in untreated samples which had movement but did not fluoresce were considered false negatives. Organisms that either moved or fluoresced were considered live, while organisms were considered dead when they neither moved nor fluoresced. To determine the effect, and potentially confounding issue, of green autofluorescence on vital staining (Tang and Dobbs 2007), in addition to the effect of kill methods on fluorescence, each sample type included a negative control to which no stain was applied.
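The scoring rule above amounts to a small decision table. The following Python sketch restates it; "treated" flags kill-method samples, where every individual is assumed dead, and all names are ours.

```python
# The live/dead scoring rule described above, written out as a tiny
# classifier. 'treated' means the organism came from a kill-method sample,
# where every individual is assumed dead. Names are ours.

def score(moved: bool, fluoresced: bool, treated: bool) -> str:
    if treated:
        # All treated organisms are assumed dead, so any signal is an error.
        return "false positive" if fluoresced else "dead (correct)"
    if moved and not fluoresced:
        return "live, false negative"   # movement without staining
    if moved or fluoresced:
        return "live"                   # any detectable signal counts
    return "dead"

print(score(moved=False, fluoresced=True, treated=True))   # false positive
print(score(moved=True, fluoresced=False, treated=False))  # live, false negative
```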
Statistical analysis
Statistical analysis was conducted using Systat v.11 (Systat Software, Inc.). Variations in percent of organisms stained in untreated samples were compared using a one-way multivariate analysis of variance (MANOVA), where test groups (macroinvertebrates, HH zooplankton, HH phytoplankton, BW zooplankton, and BW phytoplankton) were dependent variables and stains (FDA and CMFDA+FDA) were independent variables. Variations in the rate of false positives among kill methods and stains were compared using two-way MANOVA, where test groups (macroinvertebrates, HH zooplankton, HH phytoplankton, BW zooplankton, and BW phytoplankton) were dependent and kill methods (heat, NaClO, and NaOH) and stains (FDA and CMFDA+FDA) were independent variables. Furthermore, one-way analysis of variance (ANOVA) was performed to test for differences among test groups. To determine if taxonomic groups responded differently to various kill methods and stains, variation in the rate of false positives among kill methods and stains was also compared within individual taxa.
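For readers reproducing the analysis in open-source tools, a one-way MANOVA of this layout might be set up as below. This is a hypothetical sketch with invented placeholder data and column names, using statsmodels rather than Systat; the treated-sample analysis would extend the formula to a two-way design (e.g. `~ stain * kill`).

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Placeholder data: one row per replicate; percent stained per test group
# as dependent columns, stain as the factor (names are ours).
df = pd.DataFrame({
    "stain":    ["FDA"] * 5 + ["CMFDA_FDA"] * 5,
    "macro":    [98, 97, 99, 100, 96, 99, 98, 97, 100, 98],
    "hh_zoo":   [100, 99, 100, 98, 100, 99, 100, 100, 98, 99],
    "hh_phyto": [100, 98, 99, 100, 97, 99, 100, 98, 100, 99],
})

m = MANOVA.from_formula("macro + hh_zoo + hh_phyto ~ stain", data=df)
print(m.mv_test())  # Wilks' lambda, Pillai's trace, etc. for the stain effect
```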
Untreated samples
Regardless of the vital stain applied, untreated cultures stained correctly 98.6% of the time. False negative errors occurred during one experiment with Hyalella azteca cultures, during which 10% (2/20) of individuals did not stain, but were mobile. Untreated HH plankton stained correctly 100% of the time as assessed through the observation of movement and fluorescence (Figure 1). No significant difference (p > 0.05; Table 1) was observed between the two vital stains for accurately identifying living organisms of plankton from either Hamilton Harbour or ballast water.
Vital staining results indicated that untreated HH zooplankton samples contained 89% to 100% viable organisms and an average of 5% non-viable organisms, while phytoplankton samples contained 56% to 100% viable organisms and an average of 18% non-viable organisms. Viable organisms comprised 40% to 100% of any single BW zooplankton sample, and 27% to 100% of any single phytoplankton sample. Non-viable organisms comprised on average 30% of samples for both BW zooplankton and BW phytoplankton.
Treated samples
The rates of false positive occurrences within and between test groups were evaluated to determine the overall performance of the vital stains (Figure 2). The results showed no significant differences in error rates between FDA and CMFDA+FDA within any test group (macroinvertebrates, HH zooplankton, BW zooplankton, HH phytoplankton, or BW phytoplankton) (p > 0.05; Table 2). Furthermore, no significant difference occurred between the three kill methods used, within test groups (p > 0.05; Table 2).
However, between the test groups, false positive rates differed significantly (p < 0.001; Figure 3). Rates of false positives were significantly lower amongst phytoplankton when compared to zooplankton and macroinvertebrate groups (p < 0.05), where phytoplankton false positive rates were 2.3% for both BW and HH samples. Furthermore, rates of false positives were significantly higher for HH zooplankton than for BW zooplankton, with error rates of 35% and 20%, respectively (p < 0.001). Macroinvertebrates exhibited the highest rates of false positives at 47%, and were not significantly different from HH zooplankton error rates (p > 0.05).
Taxonomic responses
Rates of false positives were consistently low amongst phytoplankton groups, which included primarily diatoms (both centric and pennate), dinoflagellates, cyanobacteria, and chlorophytes. However, false positive errors were much more common and variable amongst macroinvertebrates and zooplankton, which consisted primarily of several varieties of copepods and copepod nauplii, and several families of rotifers and cladocerans (Figure 4; Figure 5). Oligochaetes stained correctly in 100% of trials. Rotifers and cladocerans had moderately high false positive rates of 29% and 22%, respectively. Furthermore, variation existed between rotifer genera, as Asplanchna, Polyarthra, and Synchaeta were less likely to produce false positives than Keratella or Kellicottia. Likewise, Bosmina and Eubosmina spp. had a low mean false positive rate of 7%, while Daphnia spp. had a higher false positive error rate of 41%. Insect larvae also exhibited moderately high false positive rates, at 23% for mayfly larvae Hexagenia sp. and 49% for midge larvae C. riparius. Finally, high rates of false positives were observed for the larger crustaceans: copepod nauplii (47%), copepods (71%), H. anomala (94%), and H. azteca (98%).
Accuracy typically did not vary significantly between stains or kill methods (Table 3, Figure 6). However, accuracy between FDA and CMFDA+FDA varied significantly for treated H. azteca and C. riparius (p < 0.05). H. azteca had consistently high rates of false positives regardless of stain used, with error rates of 98% with FDA and 97% with CMFDA+FDA. C. riparius (p < 0.05) had lower overall rates of false positives; FDA error rates (32%) were significantly lower than error rates for CMFDA+FDA (62%).
Within two taxonomic groups, false positive error rates differed significantly between kill methods (Table 3, Figure 7; p < 0.05). Larvae of the insect C. riparius (p < 0.05) had significantly lower rates of false positives with heat or NaOH as the kill method than when killed with NaClO (30%, 39%, and 79%, respectively). Similarly, copepod nauplii (p < 0.001) had significantly lower rates of false positives when killed with NaOH (14%) than when killed with heat or NaClO (64% for both).
The possibility that delayed staining following kill methods may yield better results in macroinvertebrate cultures and zooplankton was investigated during preliminary trials. Trials included staining and observation at 24, 48, and 72 hours following heat, NaClO, and NaOH kill methods with H. azteca, C. riparius, and copepods. Observations indicated no difference in the rate of false positive errors when organisms were stained with either FDA or CMFDA+FDA following any of the prescribed wait periods.
Discussion
While ecological assemblages of soft-bodied aquatic worms (T. tubifex and L. variegatus), several rotifer genera, and Bosmina and Eubosmina spp. were reliably stained by both FDA and FDA+CMFDA, vital stains proved problematic with freshwater copepods and amphipods. False negative errors were not prevalent amongst those taxa; however, false positive errors were common, as most treated individuals displayed fluorescence regardless of live or dead status as assessed through movement, and would hence be misidentified as live. Seepersad and Crippen (1978) and Bickel et al. (2009) attributed errors in aniline blue staining of copepods and cladocerans to individuals entering a moribund state following exposure to a stressor (such as heat, NaClO, or NaOH in our study). Such individuals would be on the verge of death, but potentially still possess enzymatic activity, hence the observed fluorescence. However, we did investigate the possibility that delayed staining following a kill method may yield better results in macroinvertebrates and zooplankton. Observations indicated no difference in the number of treated organisms stained with either vital stain up to 72 hours following treatment, unlike Elliott and Tang (2009), who found that marine zooplankton allowed to sit in room temperature water for 5 minutes would no longer display false positive staining with neutral red. Insect larvae, though not as prone to false positives as copepods and amphipods, displayed intermediate rates of false positives. Observations of C. riparius indicate increased accuracy with the use of the single vital stain, FDA, over the combination of vital stains. The quality of vital staining in treated, dead C. riparius differed from that of live, stained organisms. In treated individuals, the stain consistently appeared to be superficial, with only the outer wall of the organism picking up the stain. However, in untreated individuals the stain was more internal than external. Similar degrees of staining were seen in treated Hexagenia sp., which also displayed a staining pattern following exposure to vital stain differing from that of live Hexagenia sp. Dead Hexagenia sp. that did fluoresce with the vital stain exhibited the fluorescence primarily on the legs and tail (cerci), while the gills and most of the abdomen and thorax did not fluoresce, whereas the legs of live Hexagenia sp. did not fluoresce and the gills, abdomen, and thorax fluoresced brightly. Patchy staining has also been observed in live marine zooplankton, particularly copepods and molluscs stained with neutral red (Elliott and Tang 2009; Zetsche and Meysman 2012); however, precise patterns of staining were not as predictable as were seen here for Hexagenia sp. Based on the high rates of false positives seen in copepods and amphipods, and the high degree of variability observed in insect larvae, we recommend FDA or CMFDA+FDA be used with caution on samples containing such assemblages.
The precise reasons why the vital stains would continue to stain zooplankton and macroinvertebrates several days following death remain unknown. However, it is possible that the presence of a carapace or exoskeleton in such organisms is related to the occurrence of false positives, as could be evidenced by the differences in the patterns and appearance of staining between live and dead C. riparius and Hexagenia sp., and lower rates of false positives amongst soft-bodied plankton and invertebrates. Future investigations into the physiological reactions of zooplankton and macroinvertebrates with fluorescent stains after death could potentially aid in the search for appropriate viability assessment techniques for these groups of organisms.
Our laboratory testing indicated that the two vital staining methods appear to be appropriate for use with freshwater phytoplankton. Reavie et al. (2010) and Steinberg et al. (2011) also recently investigated the utility of fluorescent vital stains with phytoplankton communities. While Reavie et al. (2010) conclude that FDA alone is useful for freshwater phytoplankton of Lake Superior, Steinberg et al. (2011) indicate the need for the combined staining method with FDA+CMFDA for use with marine phytoplankton taxa. Our analysis of mixed phytoplankton assemblages showed no significant differences in accuracy rates between the two staining methods for freshwater taxa and supports the findings of Reavie et al. (2010), concluding that FDA alone will provide accurate and consistent viability results in freshwater phytoplankton communities. Furthermore, the lack of significant difference observed between HH and BW phytoplankton is an indication of the wide applicability of FDA/FDA+CMFDA for use with phytoplankton, as it seems that the utility of these stains extends across a range of locations and sample types. Nonetheless, we suggest that an initial round of testing of any stain be employed prior to use in a new region.
Our results indicate that an overestimation of viable plankton density is likely to occur through the use of traditional preservation methods alone when sampling ballast water. Traditional methods for analysis of ballast water samples consider the degradation status of individuals as a means of determining viability at the time of collection; however, organisms recently killed by treatment or other means may not exhibit noticeable decomposition prior to collection. Tang et al. (2006) and Bickel et al. (2009) indicate that abundances of zooplankton carcasses in natural samples may be 29% and between 6% and 8% for marine and freshwater environments, respectively. The abundances of freshwater zooplankton carcasses found by Bickel et al. (2009) are similar to abundances found in Hamilton Harbour samples reported here (5%), as determined by the vital stains. However, ballast water samples appear to have elevated abundances of dead zooplankton (30%) relative to harbour communities, for reasons possibly including but not limited to the harsh environments of ballast tanks and long travel times between source and recipient ports. Therefore, assessments of plankton communities in ballast tanks should include viability testing to determine compliance with discharge standards, as traditional methods will likely overestimate the density of viable organisms.
Concluding remarks
Determining compliance with impending IMO standards for ships discharging ballast water will require precise knowledge of viable plankton densities present in ballast water to be discharged (IMO 2004); yet, traditional assessments of plankton present in ballast water do not take into account the viability of organisms. Our study confirmed the findings of Reavie et al. (2010) that vital stains could be a useful tool for testing the efficacy of ballast water treatment systems, particularly with phytoplankton. However, our findings suggest that the vital stains FDA and CMFDA are not suitable as viability assessment methods for mixed assemblages of freshwater or marine zooplankton samples, particularly for those samples containing large crustaceans such as amphipods or copepods, as gross overestimates of live organisms are likely to occur due to the high occurrence of fluorescing dead organisms. This finding may prove problematic for the use of ballast water test kits in determining compliance with new IMO-D2 standards. Ballast water test kits are designed to provide a rapid on-board assessment of ballast water compliance, often measuring the bulk FDA fluorescence in a small subsample of ballast water to determine the presence of viable organisms. An assumption in using such kits would be that any type of error associated with the stain is negligible. Additionally, our findings indicate that traditional methods of assessing plankton in ballast water may overestimate the true viability status of communities. The vital stains are efficient at accurately determining the viability status of phytoplankton, many types of rotifers, soft-bodied aquatic worms, and some cladocerans from Lake Ontario, the St. Lawrence River, and ships which have undertaken mid-ocean exchange. These results therefore increase the confidence of using FDA and CMFDA across a variety of kill methods and illustrate the range of applicability of these vital stains with natural freshwater assemblages.
Figure 1. Mean (± standard error) total percentage (%) of untreated organisms stained with vital stains for macroinvertebrate and HH and BW plankton (total zooplankton and total phytoplankton). M+, F+, M−, and F− indicate movement, fluorescence, no movement, and no fluorescence, respectively.
Figure 2. Mean (± standard error) total percentage (%) of false positive errors for treated samples within each test group across all combinations of stains and treatments.
Figure 3. Mean (± standard error) total percentage (%) of false positive errors for each test group. Different letters denote significant difference at 0.05 in false positive rates between groups.
Figure 6. Mean (± standard error) total percentage (%) of false positive errors for each taxon and different vital stain. * denotes significant difference at 0.05 in false positive rates between the two stains.
Figure 7. Mean (± standard error) total percentage (%) of false positive errors for each taxon and type of treatment tested. * denotes significant difference at 0.05 in false positive rates between the three treatments.
Table 1. Results of multivariate analysis of variance (MANOVA) with untreated test groups as dependent (macroinvertebrates, Hamilton Harbour (HH) zooplankton, HH phytoplankton, ballast water (BW) zooplankton, and BW phytoplankton) and stain (FDA and CMFDA+FDA) as independent variable.
Table 2. Results of multivariate analysis of variance (MANOVA) with treated test groups as dependent (macroinvertebrates, Hamilton Harbour (HH) zooplankton, HH phytoplankton, ballast water (BW) zooplankton, and BW phytoplankton) and stains (FDA and CMFDA+FDA) and treatment (heat, NaClO, and NaOH) as independent variables. | 2018-03-23T01:20:11.387Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "0ac39601f34b367dec5388f3d7932c7f35c78423",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3391/mbi.2014.5.3.02",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0ac39601f34b367dec5388f3d7932c7f35c78423",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
218688406 | pes2o/s2orc | v3-fos-license | Single-port laparoscopic appendectomy using a needle-type grasping forceps for acute uncomplicated appendicitis in children: Case series
Highlights • Acute appendicitis is most common between the ages of 10 and 20 years. • Our centre performed a new technique of single-port laparoscopic appendectomy using a needle-type grasping forceps (SLAN). • SLAN has the advantages of favourable cosmetic results, minimal trauma, and enhanced postoperative recovery.
Introduction
Acute appendicitis is the most common abdominal surgical emergency in the world; it can affect people of any age but is most common between the ages of 10 and 20 years [1].
McBurney first reported open appendectomy in 1891 [2], and it remained the main operative choice for a long time. Recently, increasing attention has been paid to the application of laparoscopy, especially single-port laparoscopy, in the treatment of acute appendicitis, reflecting the rising demand for minimal trauma and good cosmetic appearance. Our centre successfully performed single-port laparoscopic appendectomy using a needle-type grasping forceps (SLAN) for a pediatric patient with acute simple appendicitis in April 2019. SLAN was performed through a 1 cm transumbilical incision, using both conventional laparoscopic instruments and a needle-type grasping forceps. The main operative procedure is similar to conventional three-port laparoscopic appendectomy, which shortens the learning curve and ensures safety, along with the advantages of favourable cosmetic results, minimal trauma, and enhanced postoperative recovery. In this report, we describe our experience with SLAN in the treatment of six pediatric patients with uncomplicated appendicitis from April to November 2019, and assess the feasibility and safety of this technique.
Materials and methods
Between April and November 2019, six pediatric patients with uncomplicated appendicitis (including acute simple and purulent appendicitis) underwent emergency SLAN at our centre. All patients provided informed consent prior to undergoing surgery. All patients were diagnosed with acute appendicitis based on their medical history, physical signs, laboratory results, abdominal ultrasound, and computed tomography. Patients with preoperative evidence of gangrenous or perforated appendicitis, severe ascites, or adhesions were excluded. The median age and BMI were 10.7 (range, 6-14) years and 18.40 (range, 14.57-21.48) kg/m², respectively. All operations were performed by the same attending surgeon at our centre, who has extensive experience with laparoscopic procedures. This work has been reported in line with the PROCESS criteria [3].
Every patient was required to fast for at least 6 h and to void before surgery so that the bladder was empty; insertion of a gastric tube and urinary catheter was unnecessary. Under general endotracheal anesthesia, the patient was positioned supine, the laparoscopic video screen was placed on the right side, the operating surgeon stood at the lower left, and the assistant surgeon stood at the upper left. When all preparations were ready, one umbilical port (about 10 mm) was created by an open approach. In practice, a port just large enough to admit a little finger is suitable, because CO2 leakage may occur if the port is made longer (Fig. 1). Two 5 mm disposable trocars were inserted into the abdominal cavity through the umbilical port, for observation and instrumentation (Fig. 2), while two No. 7 surgical sutures were used to seal the port and fix the trocars. The intraoperative diagnosis of acute uncomplicated appendicitis (including acute simple and purulent appendicitis) was confirmed by laparoscopic exploration. The patient was then tilted 30° toward the feet and 30° to the left side. A 50 mL syringe needle was passed through the abdominal wall at McBurney's point, which helped the needle-type grasping forceps enter the abdominal cavity easily. With the needle-type grasping forceps retracting the appendix, the mesoappendix was dissected until the root of the appendix was exposed. Two No. 7 surgical sutures were used to ligate and slim the root of the appendix, and a green Hem-o-lock clip was used to seal the appendiceal root. Another No. 7 surgical suture was used to ligate the distal appendix, so that the appendix could be located after the CO2 pneumoperitoneum was evacuated at the end of the procedure. The appendix was divided with an ultrasonic scalpel about 3-5 mm from its root. Ascitic fluid was then suctioned until the field was clean. Once the gauze and all other surgical instruments were confirmed complete, and no bleeding was seen at either the umbilical port or McBurney's point, the No. 7 suture on the distal appendix was extracted through a 5 mm trocar with laparoscopic separating forceps, and a 10 mm disposable trocar was inserted into the umbilical port after the two 5 mm trocars were removed. Finally, the pathological appendix was extracted through the 10 mm disposable trocar to avoid incisional infection. The umbilical incision was sutured with one 3-0 absorbable suture (Table 1).
Results
All six pediatric patients successfully underwent SLAN, and none was converted to open surgery. The operation time was 50-85 min; the postoperative hospital stay was 1-2 d; and the time to first flatus after surgery was 1-2 d. The umbilical incision is small and concealed, with no obvious scar visible. The puncture point of the needle-type grasping forceps in the right lower abdomen is about 2 mm and does not need to be sutured because of the retractile properties of the skin (Table 1: clinical characteristics and operative and postoperative data of pediatric patients undergoing single-port laparoscopic appendectomy using a needle-type grasping forceps). During the follow-up of 2-9 months, no complications were observed, such as incisional infection, adhesive intestinal obstruction, or abdominal abscess formation. Satisfactory feedback was received from all patients and their families.
Discussion
Acute appendicitis is the most common acute surgical abdomen in the world, with a high incidence [1]. Although there is still international controversy regarding the choice between conservative treatment and active surgical intervention, surgery is still considered an active and effective treatment [4-8]. Since Semm first reported laparoscopic appendectomy in 1983 [9], and Gans and Berci introduced laparoscopic appendectomy into pediatric surgery in 1973 [10], laparoscopic appendectomy has been confirmed to be advantageous: it is safe, effective, and minimally invasive, with rapid postoperative recovery. In recent years, with the development of laparoscopic technology, single-port laparoscopic appendectomy has received more attention in the field of pediatric surgery and has been proven to be safe and effective [11-13]. In general, pediatric patients are small, and their parents have high expectations for the cosmetic result of the incision. On the premise of ensuring the safety and effectiveness of the operation, reducing the incision as much as possible and reducing surgical stress will greatly improve the satisfaction of patients and their families. On this basis, our centre attempted to make a 1 cm incision along the natural folds under the umbilicus and simultaneously insert two 5 mm trocars. The needle-type grasping forceps is a miniature instrument that was used here for the first time in single-port laparoscopic appendectomy in children. In order to complete the operation smoothly, our centre introduced the needle-type grasping forceps into single-port laparoscopic appendectomy to assist in grasping the appendix and knotting. This approach can effectively reduce the length of the incision without affecting the surgical outcome.
Though there are many ways to perform single-port laparoscopic appendectomy, SLAN represents real progress, with the following advantages. Firstly, the incision is small and concealed, and the postoperative appearance is excellent. The incision is placed in the natural folds along the lower umbilical edge and is about 1 cm long; together with the puncture point of the needle-type grasping forceps, the total length is about 1.2 cm. In contrast, most previously reported single-port umbilical incisions are about 1.5-2.5 cm [14,15], while the total incision length of the traditional three-port method is about 2.0-2.5 cm. By this comparison, SLAN offers minimal incision length and surgical stress. Meanwhile, no obvious scars were observed 9 months after surgery, and the cosmetic appearance was excellent at follow-up (Fig. 3). Secondly, SLAN can markedly shorten hospital stay and enhance postoperative recovery. Previous studies have shown that traditional single-port laparoscopic appendectomy requires a 44.4 h hospital stay [16]; in contrast, SLAN required only 1.5 days of postoperative hospitalization, with passage of flatus about 1-2 days after surgery, which is in line with the concept of enhanced recovery after surgery [17]. Thirdly, the needle-type grasping forceps, which was initially applied in single-port laparoscopic hernia repair, is similar to conventional laparoscopic separating forceps; with the additional use of conventional laparoscopic instruments, the learning curve is greatly shortened. Fourthly, the Hem-o-lock clip has been confirmed to be effective in laparoscopic appendectomy [18]. Thus, the green Hem-o-lock clip used in SLAN to clamp the root of the appendix can prevent postoperative appendiceal stump fistula; this is safer and more reliable than conventional suture ligation. Meanwhile, a retrieval bag has been shown to reduce the risk of intra-abdominal infection during laparoscopic appendectomy [19]. Although a retrieval bag cannot be inserted into the abdominal cavity in SLAN because of the limitation of the 5 mm trocar, we extracted the appendix through a 10 mm disposable trocar to prevent the incision from being contaminated by the pathological appendix, and, fortunately, no postoperative incisional infection or abdominal abscess was reported in any of the six patients over 2-9 months of follow-up.
Undoubtedly, any surgery has its limitations. From our experience, the following points should be emphasized. Firstly, because the working and observation channels shared the same umbilical port, a "chopstick effect" inevitably occurred. To overcome this, the assistant needs to move with the operating surgeon in the same direction, fully exposing the operative field by adjusting the direction of the laparoscope's 30° bevel, which makes the operation comfortable and efficient. Secondly, because of the lack of laparoscopic instruments designed for SLAN, CO2 leakage occurred frequently, especially in the initial cases, which can greatly interfere with the surgical process. With continuing accumulation and refinement of surgical experience, we found that an umbilical port just large enough to admit a little finger is favourable for avoiding this leakage (Fig. 2). Thirdly, because of the limitations of the tiny needle-type grasping forceps, patients with severe abdominal infection or severe abdominal adhesions are not recommended for this procedure, even though abdominal wall trauma and operative stress can be effectively reduced. Fourthly, the length of the green Hem-o-lock clip after closing is about 8 mm (Fig. 4); given the limitation of the 5 mm trocar, longer Hem-o-lock clips cannot be inserted into the abdominal cavity, so if the diameter of the appendiceal root exceeds 8 mm after "slimming" with the ligature, clamping may be incomplete, and such cases are not suitable for SLAN. Of course, we also considered expanding the umbilical incision to 1.5 cm so that one 10 mm and one 5 mm trocar could be inserted into the abdominal cavity; in that case, the clamping effect would be equivalent to the three-port method, which would expand the surgical indications. However, there is no doubt that this would increase the length of the umbilical incision, worsen the cosmetic effect, and increase postoperative stress, which is contrary to our original intention. Fifthly, an abdominal drainage tube cannot be retained because only one umbilical incision is available; therefore, preoperative assessment of abdominal infection and of the diameter of the appendix is particularly important. Previous studies have emphasized the importance of preoperative abdominal CT examination to distinguish acute simple appendicitis from complicated appendicitis [20]. Therefore, for patients without radiological contraindications, we recommend obtaining an abdominal CT before surgery. If abdominal CT indicates severe abdominal infection or adhesion, or if the root of the appendix is too thick to be clamped completely by a green Hem-o-lock clip, SLAN should be decisively abandoned after intraoperative exploration, and the conventional three-port procedure or open surgery should be performed instead, depending on the patient's condition. In a word, proper preoperative examination and evaluation should be completed and rational cases selected, in order to maximize the benefit to patients.
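The suitability rules discussed above reduce to a couple of checks. The toy Python sketch below simply restates them; the 8 mm threshold comes from the clip length given in the text, while the function and its inputs are illustrative names of our own, not a validated clinical tool.

```python
# Toy rendering of the SLAN suitability rules discussed above. Thresholds
# come from the text (8 mm closed clip length, 5 mm trocar limit); the
# function and its inputs are our own illustrative names.

def slan_suitable(root_diameter_mm: float,
                  severe_infection_or_adhesion: bool) -> bool:
    CLIP_SPAN_MM = 8.0   # green Hem-o-lock clip after closing
    if severe_infection_or_adhesion:
        return False      # convert to three-port or open surgery
    return root_diameter_mm <= CLIP_SPAN_MM

print(slan_suitable(6.0, False))  # True: clip can close the slimmed root
print(slan_suitable(9.5, False))  # False: incomplete clamping risk
```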
Conclusion
SLAN is a feasible and safe technique for treating acute uncomplicated appendicitis in children. Surgeons must strictly observe the surgical indications and select suitable patients. Randomized controlled trials with larger samples and postoperative follow-up data need to be carried out in the future.
Ethical approval
No ethical approval was necessary, since this paper describes a retrospective study of the usual single-port laparoscopic appendectomy procedure with conventional instruments; the needle-type grasping forceps has been widely used in single-port laparoscopic high ligation of the hernia sac in children, and the main surgical procedure is similar to conventional three-port laparoscopic appendectomy (not considered a 'first-in-man' study).
Consent
Written informed consent was obtained from all patients for publication of this case series. A copy of all the written consents is available for review by the Editor-in-Chief of this journal on request.
Author contribution
Y Chen - study design, data collection, data analysis, writing, and review.
JQ Yuan - study design, writing, and review; SG Guo, ZJ Yang - data collection and review.
Registration of research studies
1. Name of the registry: Research Registry
2. Unique identifying number or registration ID: researchregistry5430
3. Hyperlink to your specific registration (must be publicly accessible and will be checked): https://www.researchregistry.com/browse-the-registry#home/
Provenance and peer review
Editorially reviewed, not externally peer-reviewed | 2020-05-20T13:06:07.568Z | 2020-05-06T00:00:00.000 | {
"year": 2020,
"sha1": "9e7941606ad95f7c6a4c62947e53461caf012546",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ijscr.2020.03.040",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7167e8c0979aed8a496a02905b2a665dbbd1fc42",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251725201 | pes2o/s2orc | v3-fos-license | Clinical features and surgical management of tuberculous arthritis of the sacroiliac joint: a retrospective analysis of 33 patients
Background We reviewed 3 different types of tuberculous sacroiliitis treated via anterior and posterior approaches, describing the clinical presentation, imaging, and surgical treatment to determine the efficacy and safety of these approaches. Methods We reviewed 33 patients with 3 different types of severe tuberculous sacroiliitis; 16 patients with anterior iliac abscesses underwent anterior debridement, and 17 patients underwent posterior debridement. Among the latter, 5 patients with lumbar tuberculosis underwent lesion debridement through fenestration, joint fusion, and interbody fusion with internal fixation. The mean postoperative follow-up was 16.9 months (12-25 months). The erythrocyte sedimentation rate (ESR), the visual analogue scale (VAS), and the Oswestry Disability Index (ODI) were used to judge the postoperative condition and functional recovery. Results All patients' hip, back, and lower back pain symptoms were significantly relieved after surgical treatment. At 3 months after operation, the VAS and ODI scores of all patients had decreased significantly. Conclusion Surgical treatment of tuberculous sacroiliitis should be performed as soon as possible under adjuvant anti-tuberculosis chemotherapy. Appropriate surgical procedures should be adopted according to our classification criteria, reflecting the different characteristics of sacroiliac joint tuberculosis.
Introduction
Tuberculosis is the number one cause of death from a single infectious agent according to the World Health Organization's 2021 Global Tuberculosis Report [1]. The incidence of sacroiliac joint tuberculosis is relatively low, accounting for only 10% of bone and joint tuberculosis. The sacroiliac joints are true synovial joints and are just as susceptible to infection as any other joint [2]. The disease mainly manifests as lumbosacral pain and limited mobility of the lower extremities, and only some patients have typical signs of tuberculosis such as low-grade fever and night sweats.
It is difficult to diagnose and to differentiate from other causes of lumbosacral pain [3]. Diagnosis often requires biopsy or bacterial culture based on fine-needle aspiration or surgical resection of the lesion. Kim has classified sacroiliac tuberculosis into 4 types [4]. Types I and II SJT should be treated with anti-tuberculosis drug chemotherapy. For types III and IV, intensive treatment with regular anti-tuberculosis drugs should be combined with debridement and bone graft fusion.
Early diagnosis and stable reconstruction after complete removal of the lesion are the most important treatments to prevent instability of the sacroiliac joint and pelvic ring caused by the lesion [5]. However, there are few research reports on tuberculous arthritis of the sacroiliac joint, and there is no unified standard for the best treatment. Surgery is mainly divided into open and minimally invasive procedures, and open surgery includes the anterior approach, the upper anterior approach, and the posterior approach. The upper anterior approach avoids dissecting the iliacus muscle off the ilium; however, the lesions may not be completely removable through it [6]. The posterior approach avoids dissection around important pelvic neurovascular structures. However, some scholars believe that the anterior approach can expose the lesion under direct vision and offers a larger working space. The optimal surgical approach for tuberculous arthritis of the sacroiliac joint therefore remains unclear. The purpose of this study is to classify SJT, adopt corresponding surgical methods, and determine their effectiveness.
Design of study
After receiving written informed consent from participants and approval from the hospital's Ethics Committee, we reviewed 33 cases (15 males and 18 females, classified as type III and IV according to Kim's classification) of tuberculous arthritis of the sacroiliac joint (SJT) who were treated at the First Affiliated Hospital of Xinjiang Medical University and underwent surgical debridement from March 2011 to June 2021.
Settings of the study
We assessed the severity of the lesions according to the destruction of the sacroiliac joint surface, the presence of lumbar tuberculosis, the location of the abscess, and other factors, and divided these patients into 3 categories: A, B, and C. Type A is severe sacroiliac joint destruction with or without an iliac fossa abscess. Type B is severe sacroiliac joint destruction with a posterior iliac abscess, with or without an iliac fossa abscess. Type C is tuberculous sacroiliitis with lumbar tuberculosis or a paravertebral abscess. Most cases were solitary tuberculous arthritis of the sacroiliac joint; only some patients had pulmonary tuberculosis, urinary tuberculosis, pubic tuberculosis, or lumbar tuberculosis. We tracked treatment outcomes to determine the safety and feasibility of this classification and of the corresponding surgical procedures.
Preoperative preparation
After admission, we made a preliminary diagnosis of tuberculous arthritis of the sacroiliac joint based on typical clinical symptoms combined with routine blood tests, the erythrocyte sedimentation rate, the PPD test, the T-SPOT.TB test, and serological tests such as anti-tuberculosis antibody. Further verification was obtained by combining anteroposterior pelvic X-rays with CT and MRI scans of the sacroiliac joint. The final diagnosis was confirmed by fine-needle aspiration biopsy or by culture of Mycobacterium tuberculosis from intraoperative curettage. Patients with a high suspicion of sacroiliac joint tuberculosis, or with infection confirmed by fine-needle aspiration biopsy, immediately received quadruple anti-tuberculosis drug treatment: oral isoniazid once a day, 300 mg/d in total; rifampicin once a day, 450-600 mg/d; pyrazinamide 1-3 times a day, 1500-1750 mg/d; and ethambutol 1-2 times a day, 750-1000 mg/d. Once symptoms such as fever were controlled and the erythrocyte sedimentation rate improved, active nutritional support was given. Anti-tuberculosis drugs were used for about 2 weeks before surgery, and debridement surgery was performed when the erythrocyte sedimentation rate fell below 40 mm/h. Even if the erythrocyte sedimentation rate was still higher than 40 mm/h, SJT with an abscess or with other vertebral tuberculosis could be treated surgically after active pulmonary tuberculosis had been excluded.
Surgical procedure
Anterior approach surgery
For patients with type A SJT, with predominant anterior sacroiliac joint destruction with or without an anterior abscess or sinus tract, we usually operated through the anterior approach (Fig. 1).
Posterior approach surgery
For patients with type B SJT, with predominant posterior sacroiliac joint destruction with or without a posterior abscess or sinus tract, and for type C with lumbosacral tuberculosis, we usually operated through the posterior approach (Fig. 2).
Postoperative treatment
The drainage tube was removed when the drainage volume was less than 20 mL. Standardized anti-tuberculosis therapy was continued after surgery, with a total course of 18-24 months. Patients required regular review of liver and kidney function to monitor for toxic effects of the anti-tuberculosis drugs during the postoperative medication period. X-rays were reviewed regularly after surgery to determine the degree of joint fusion, and computed tomography (CT) and magnetic resonance imaging (MRI) were also reviewed to further confirm fusion of the sacroiliac joint bone graft when necessary.
Statistics analysis
Data analysis was performed using SPSS 22.0 (IBM, Chicago, IL, USA). In statistical descriptions, the mean ± standard deviation was used for continuous variables that fit a normal distribution; if they did not fit a normal distribution, the median (interquartile range) was used. The Kolmogorov-Smirnov test was used to test for normality, and the Levene test was used to test the homogeneity of variances. For quantitative data, ANOVA was used if the samples were normally distributed with homogeneous variances; if these conditions were not met, the rank sum test (Kruskal-Wallis test) was used. For qualitative data, the chi-square test or exact probability method was used for comparisons of two groups of binary data and of two groups of unordered multiclass data, while the rank sum test (Kruskal-Wallis test) was used for comparisons of two groups of ordered multiclass data. The Wilcoxon rank sum test and the Mann-Whitney test were used for VAS and ODI data. A significance level of 0.05 was used.

Figure legend: Preoperative CT showed that the joint space had disappeared. Strip calcification was seen in the spinal canal behind the L4-5 vertebral bodies, and the spinal canal was narrowed behind the corresponding intervertebral space. The bone density of the right sacroiliac joint and sacrum was uneven, with multiple worm-eaten foci of bone destruction containing multiple punctate high-density shadows. A roughly oval, slightly hypodense shadow lay in the right psoas muscle, with multiple punctate calcifications at the edge of the lesion and within it. The right iliacus muscle was swollen, and punctate high density was seen in the musculature around the pelvic floor. g, h Postoperative X-rays. i, j Postoperative CT. k, l X-rays 6 months after surgery. m, n CT 1 year after operation: the sacroiliac joint is fused, and the joint space has disappeared. o, p CT and X-ray films 2 years after surgery show an irregular right sacroiliac joint, fusion of the joint, and disappearance of the joint space. q Examination results of the surgically removed pathological tissue.
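Returning to the statistical workflow, the decision flow described in the statistical-analysis paragraph above can be sketched with SciPy as follows; the arrays are random placeholders, not study data, and all variable names are ours.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1, g2, g3 = rng.normal(50, 8, 16), rng.normal(48, 8, 11), rng.normal(52, 8, 6)

# Kolmogorov-Smirnov normality check (against a fitted normal) and Levene test
normal = all(
    stats.kstest(g, "norm", args=(g.mean(), g.std(ddof=1))).pvalue > 0.05
    for g in (g1, g2, g3)
)
equal_var = stats.levene(g1, g2, g3).pvalue > 0.05

if normal and equal_var:
    p = stats.f_oneway(g1, g2, g3).pvalue   # one-way ANOVA
else:
    p = stats.kruskal(g1, g2, g3).pvalue    # Kruskal-Wallis rank sum test
print(f"group comparison: p = {p:.3f}")

# Paired pre/post VAS or ODI scores: Wilcoxon signed-rank test
pre, post = rng.normal(7, 1, 33), rng.normal(2, 1, 33)
print(f"pre vs post: p = {stats.wilcoxon(pre, post).pvalue:.4f}")
# Two independent groups (e.g. type A vs types B/C): Mann-Whitney U
print(f"type A vs B/C: p = {stats.mannwhitneyu(g1, g2).pvalue:.3f}")
```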
Results
Among the 33 patients, there were 6 cases of type III and 27 cases of type IV according to Kim's classification. According to our own classification, the 16 patients with type A underwent anterior debridement; 11 patients with type B underwent lesion debridement via posterior iliac fenestration; 5 patients with type C and lumbar spine tuberculosis underwent posterior fenestration debridement, arthrodesis, and interbody fusion with internal fixation; and 1 patient with type C and an anterior paravertebral abscess underwent posterior fenestration debridement and bone graft fusion, with removal of the anterior abscess and interbody fusion in a second-stage operation (Table 1). Of the patients who underwent type A surgery through the anterior approach, 5 had the surgical area fixed with plates and screws. Plates and screws were not used in patients undergoing type B surgery through the posterior approach: posterior surgery can achieve solid fixation because bone is grafted through the gap of the iliac fenestration, so additional plates and screws are often not required. The anterior approach creates a large bone defect anterior to the sacroiliac joint after debridement, so plates and screws are needed to promote joint fusion and stabilize the sacroiliac joint. One patient, in whom severe urinary tract tuberculosis, tuberculous meningitis, and severe pulmonary infection occurred 3 months after anterior surgery, was excluded; this patient eventually died of type I respiratory failure and septic shock. Finally, 33 patients with Kim III-IV tuberculous sacroiliitis were followed up: 15 males and 18 females, with an average age of (31 ± 13.9) years. All cases were unilateral, 15 on the left side and 18 on the right side. The average time from first symptom to diagnosis was 22.7 weeks; the shortest was 2 weeks and the longest was 27 months. Fifteen of these patients had difficulty walking because they were unable to bear full weight. Sixteen patients had obvious symptoms of tuberculosis, such as low-grade fever in the afternoon and night sweats, while the other 17 showed no obvious signs of tuberculosis. One patient had previously undergone nephrectomy for renal tuberculosis; 1 patient had concurrent pulmonary tuberculosis; 7 patients had concurrent lumbar tuberculosis; and 3 patients had palpable, fluctuant masses on the body surface, one of whom had been treated at a local hospital with puncture drainage of pus. Two patients had local sinus discharge. Thirteen patients had received irregular treatments before surgery, such as acupuncture, massage, anti-inflammatory painkillers, and even injections of antibiotics. Physical examination revealed sacroiliac joint tenderness and pelvic compression pain in all patients, and Patrick's test was positive. Passive motion of the affected joint was limited and painful at the extremes of flexion and extension.
The average operation time was 110.3 min (60-250 min); the average time of the anterior approach was 120.3 min, and the average time of the posterior approach was 181.3 min. The average intraoperative blood loss was 243 mL (50-1000 mL); the mean blood loss of the posterior and anterior procedures was 301.2 mL and 99.7 mL, respectively (only the operation time and blood loss of the sacroiliac joint procedures are compared here). There was no significant difference between the two approaches in operation time or intraoperative blood loss (P = 0.213 and P = 0.173, respectively). One patient had anterior sinus formation and recurrence 3 months after surgery; the sinus tract was resected and the lesion removed in a secondary operation. No other complications were found. One patient had substantial bleeding from arterial injury during removal of the abscess, which was finally controlled by interventional embolization. The mean erythrocyte sedimentation rate was 53.7 mm/h (13-80 mm/h) before surgery and 38.7 mm/h (21-64 mm/h) on the 7th postoperative day, and it had essentially returned to normal (11.4 mm/h; 4-20 mm/h) at 3 months after surgery (Table 2).
Different degrees of sacroiliac joint damage were found in all patients during the operation. Patient 29 had anterior rupture of the sacroiliac joint capsule with an abscess at the iliopsoas muscle. Patient 20 had an abscess in the groin that developed into a sinus with pus discharge. Patient 5 also had erosion and destruction of the fifth lumbar vertebra with a psoas major abscess; the spinal canal was compressed by pus, causing lower-limb pain. No serious complications such as joint dislocation were found in any patient during postoperative follow-up.
Improvement of clinical symptoms
In terms of clinical symptoms, this study focused on the patients' local pain and their capacity for daily work and life. VAS and ODI scores were used to evaluate the improvement of clinical symptoms. All patients' pain symptoms were effectively relieved after surgery, and VAS and ODI scores decreased significantly at 3 months postoperatively (Table 2). The VAS and ODI of the 33 patients at 3, 6, and 12 months after operation were significantly lower than those before operation (P < 0.001). Patients had no significant pain, or only mild discomfort with activities, at 12 months postoperatively. Lower-limb function had essentially recovered and could meet the needs of daily work and life. All patients were able to walk with full weight bearing, ascend and descend stairs, and perform light exercise at the last follow-up visit. Compared with type A, patients with lumbar tuberculosis and posterior abscesses (types B and C) showed no statistically significant differences in preoperative or postoperative ODI (P > 0.05). Except for the VAS score at 6 months after operation (P < 0.05), there was no significant difference in VAS between the two groups at the other time points (P = 0.257 before operation, P = 0.075 at 3 months after operation, and P = 0.34 at 12 months after operation). Up to the last follow-up, only 1 type A patient who underwent the anterior approach had a recurrence, at 3 months after surgery (Table 3).
Discussion
The sacroiliac joint consists of an anterior and inferior synovial portion and a posterior ligamentous portion accounting for one-third to two-thirds of the joint. Tuberculosis bacteria reach, through the blood circulation, the cancellous bone and joint synovium, which have little muscle attachment and abundant blood vessels, causing sacroiliac joint tuberculosis. Partial SJT secondary to adjacent bone and joint tuberculosis is most commonly seen with lumbar spine tuberculosis. When the bacteria and pus in the synovium further destroy the articular cartilage and the bony structure of the articular surface, the disease progresses to total joint involvement. The lesions are deep and insidious, and the early symptoms and imaging findings are not typical; the disease is therefore often misdiagnosed in its early stage as sciatica, discitis, chronic pain syndrome, lumbar disc herniation or spondyloarthritis. The average time from symptom onset to diagnosis in the literature is 14 months [7]; the mean time from first symptom onset to presentation in our study was 22.7 weeks. Localized pain in the groin, buttocks, and back of the thighs is generally the most common clinical presentation [8-10], and local symptoms are more severe than systemic symptoms [11]. There were 21 patients with local pain as the main symptom in our study, but only 16 patients (48%) showed typical signs such as afternoon fever and night sweats. Faber's test and direct tenderness are the most reliable physical findings for evaluating tuberculous sacroiliitis. Active straight-leg raising may give false-positive results because the abscess can compress the lumbosacral plexus [2]; the condition is therefore often misdiagnosed as lumbosacral radiculopathy because of the resulting lower-extremity pain [12]. Sitting, walking, and exercising can worsen the pain as the condition progresses. The joint space is narrow, so excessive pus accumulates; when the pus breaks through the joint capsule and overflows, the joint pain is partially relieved by the reduced pressure, and the pus spreads along weak tissue planes to form a sinus tract. Pathologically, tuberculous sacroiliitis can be divided into a caseous necrosis type and a proliferative type. The former often involves the surrounding soft tissue and causes caseous necrosis and sequestrum formation, and the necrotic material liquefies to form abscesses and sinus tracts. The proliferative type is relatively rare, mainly forming tuberculous granulation tissue and destroying the underlying bone trabeculae [13]. X-ray and CT are mainly used for imaging examination. Since the sacroiliac joint forms an angle of 15° with the sagittal plane of the longitudinal axis of the body, plain films of the pelvis and the sacroiliac joint should be taken in the supine position with the affected side elevated by 15° [14].
CT and MRI are more helpful in the early diagnosis of tuberculous sacroiliitis. The advantage of CT is that it shows fine synovial thickening, articular surface destruction, small bone abscesses, sequestra, cystic degeneration, and osteosclerosis [7,15], while MRI can help show the location and size of the abscess [16]. The final diagnosis, however, relies on fine-needle aspiration and intraoperative biopsy. The purpose of surgery for tuberculous sacroiliitis is to remove the tuberculous lesions and fuse the sacroiliac joint [17]. Conventional surgical methods are divided into anterior and posterior approaches; we assigned our patients to three surgical methods according to the characteristics of their lesions. The anterior approach is chosen when SJT is type A with an iliac fossa abscess. Anterior surgery helps to preserve the stability of the joint because the destructive lesions are generally in the anterior half of the joint, and the posterior ligaments and other tissues are not violated. Anterior approach surgery is usually performed retroperitoneally for debridement; the sacroiliac joint can be clearly exposed under direct vision after incision of the anterior ligament and joint capsule, and there is enough space anteriorly to operate, which facilitates the insertion of internal fixation. However, because of the large blood vessels and lumbosacral plexus inside the pelvis, the operation may be difficult and blood loss may be considerable. For type B and C SJT with a hip abscess, we choose the posterior approach, which is safe and easy to perform. However, incision of the posterior ligament and joint capsule of the sacroiliac joint destroys the stability of the periarticular ligaments, so nowadays a window is often opened through the iliac bone for curettage of the lesion. The disadvantages of this operation are that the lesion cannot be debrided entirely under direct vision and that the operating space is small. Generally, the stability of the bone fragment can be maintained by suturing the tissue after bone grafting; if the bone fragment cannot be stabilized, an internal fixation device should be used to restore the stability of the joint. Patients without plate fixation should wear compressive braces or bandages for 3 weeks after surgery to facilitate fusion of the implanted bone fragments. For type C SJT with lumbar tuberculosis, posterior removal of the sacroiliac lesions combined with debridement of the lumbar tuberculosis and one-stage pedicle screw fixation can achieve stable fusion of both the intervertebral space and the sacroiliac joint. The base of the posterior superior iliac spine is the best entry point for iliac screw fixation; the iliac screw should be inserted in the cylindrical area above the greater sciatic notch along the direction of the acetabulum, which is not only safe and easy but also provides maximum holding force [5]. All patients' symptoms and activity limitations were effectively relieved after surgery; their VAS and ODI at 3, 6, and 12 months postoperatively were all significantly improved compared with the preoperative values, and the erythrocyte sedimentation rate decreased to normal levels at 3 months postoperatively. There was no significant difference in intraoperative blood loss or operation time between anterior and posterior surgery. Quiescent sacroiliac joint tuberculosis calls for posterior surgery to avoid greater trauma, while SJT with an abscess anterior to the sacroiliac joint requires anterior abscess removal.
The study by Zhu et al. showed that posterior fenestration for SJT is a safe and feasible modality [18]. In our study there was no significant difference in VAS or ODI between the anterior and posterior procedures at 3, 6, and 12 months postoperatively, so both procedures are effective treatments for SJT. Because there is an important neurovascular network in front of the sacroiliac joint and the lesions lie deep, the anterior approach is prone to iatrogenic trauma; the iliac bone behind the sacroiliac joint is thick and superficial and has no important nerves or blood vessels, so the risk of exposure is small. A groin sinus usually occurs after incision and drainage of an anterior abscess [19], and many scholars therefore suggest that sacroiliac lesions be debrided and bone-grafted through a posterior approach [20]. Our study also found that the anterior approach may carry more complications: in one patient undergoing anterior surgery, an artery was inadvertently injured during removal of the abscess, and the bleeding was finally stopped by interventional embolization, while another patient treated with the anterior approach developed a sinus tract 3 months after surgery. No complications were found in patients treated with the posterior approach. Nevertheless, performing surgery according to our SJT classification can ensure complete removal of the lesion, which is, after all, the ultimate goal of surgery.
The following points need attention during the operation: (1) Choose the appropriate surgical approach according to the location and characteristics of the lesion before the operation. Thorough debridement is the most powerful guarantee against surgical failure, as incomplete debridement can easily lead to recurrence of infection; when removing lesions and abscesses during the operation, careful debridement should be combined with the imaging findings. Soft silicone tubes should be used liberally for irrigation and drainage and, if necessary, part of the lamina on the affected side should be removed so that the presacral abscess can be cleared from back to front, avoiding the use of sharp instruments that could damage the presacral blood vessels and the lumbosacral plexus. (2) For anterior approach surgery, flex the hip and knee joints to relax the L4 and L5 nerve roots. (3) In the anterior approach, the iliac muscle should be stripped together with the other pelvic soft tissues and retracted medially with blunt retractors to protect the superior gluteal artery and vein and the lumbosacral trunk; the stripping range on the sacral side should not exceed 20 mm medial to the sacroiliac joint [21,22]. (4) The iliac muscle should be stripped subperiosteally in the anterior approach, and bone wax should be used to stop bleeding from feeding vessels. (5) When the sacroiliac joint is exposed during the operation, the tissues should be separated carefully and gently with gauze wrapped around the fingers. (6) For patients with a sinus tract, first remove the abscess lesions inward along the sinus tract, search for the bone cavity at the sacroiliac joint, and finally extend to the intra-articular lesions for complete curettage. (7) As it is difficult to clear the lesions completely under direct vision by fenestration and curettage through the posterior approach, anhydrous ethanol, carbolic acid and hydrogen peroxide should be used together. (8) Because of the large residual cavity after curettage, adequate bone grafting is required to facilitate fusion; we recommend fenestration with autogenous iliac bone. (9) The combination of anti-tuberculosis chemotherapy and surgery must be adhered to, following the principles of sufficient dosage, drug combination, full course and regularity.
Generally speaking, SJT patients with a large abscess cavity, thin pus and a poor constitution have a high recurrence rate after surgery, whereas patients with a small abscess cavity, thick pus and a strong constitution have less chance of recurrence and a good postoperative outcome [23]. Therefore, during the operation the sinus tract and abscess cavity should be eliminated first, and the joint space lesions should then be completely cleared. Finally, a negative pressure drainage tube is placed in the operative area, and consolidation drug therapy is continued for 1-1.5 years after surgery. Recurrence of tuberculosis should be guarded against in patients with pulmonary tuberculosis or massive bone destruction [7]. In our study, one patient treated with anterior approach surgery had a recurrence 3 months after surgery, which was considered to be caused by postoperative sinus tract formation. Therefore, it is necessary not only to select the appropriate surgery according to our disease classification, but also to carefully remove any soft tissue lesions that may be infected. All patients were treated with standard quadruple anti-tuberculosis drugs before and after surgery, and the total course of treatment was 18-24 months.
The strength of this study is that the surgical method was selected according to the different characteristics of the abscesses and lesions in Kim type III-IV SJT, an approach addressed by relatively few similar studies. It therefore provides some experience for the treatment of tuberculous sacroiliitis.
This study has certain limitations. It is a retrospective study, and the number of cases is not large; prospective studies with larger sample sizes are therefore needed in the future. In addition, although anterior surgery is somewhat demanding because of the important blood vessels and nerves anterior to the joint, it remains the most promising way to completely remove the lesions in type A SJT. Therefore, the appropriate surgical method should be chosen for each SJT patient on the basis of the unique characteristics of the lesion.
Conclusion
Due to the insidious onset of tuberculous sacroiliitis, it is often misdiagnosed in the early stage and treated irregularly. Tuberculous sacroiliitis should therefore not be overlooked when a patient presents with pain in the groin, buttocks, and around the sacroiliac joint together with an abnormally high erythrocyte sedimentation rate. A detailed history and careful examination of the sacroiliac joints are key to the diagnosis. Findings of sacroiliac joint calcification, joint space enlargement, and articular surface destruction on X-ray and CT scans are highly suggestive of tuberculosis; the diagnosis should be confirmed by fine-needle aspiration biopsy in the early stages of infection. Chemotherapy with standard anti-TB drugs should be administered aggressively once TB infection is identified, and surgery should be performed as early as possible, after control of the tuberculosis infection, to facilitate fusion of the sacroiliac joint. Posterior fenestration, with minimal secondary damage to the joint, is the first choice for stationary SJT without an abscess, while the anterior approach is required for SJT with an abscess anterior to the sacroiliac joint. | 2022-08-23T13:29:38.851Z | 2022-08-22T00:00:00.000 | {
"year": 2022,
"sha1": "ad8ad1dfb5f6eec76b554b6ecfaaff9577fb6efd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "cfcd483a618643656541651c4c3a8ca1f96dc08e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253789968 | pes2o/s2orc | v3-fos-license | Temperature-to-Digital Converters’ Evolution, Trends and Techniques across the Last Two Decades: A Review
This paper presents an extensive review of the main highlights in the Temperature-to-Digital Converters (TDCs) field, which has gained importance and research interest throughout the last two decades. The key techniques and approaches that have led to the evolution of this kind of system are presented and compared; their peculiarities are identified in order to highlight the pros and cons of the different design methods, and the main trade-offs are extracted from this analysis. Finally, the trends that have emerged from the performance evaluation of the large number of published works in this field are identified with the purpose of providing a directional view of the past, present and future features of these devices.
Introduction
On-chip temperature measurements have acquired an increasingly important role over the past two decades, especially if we consider sensors that produce data in the digital domain, referred to as Temperature-to-Digital Converters (TDCs). The growing computational power of modern microprocessors has given rise to a higher degree of criticality in their thermal management process [1]; for instance, dynamic voltage and frequency scaling (DVFS), a commonly used approach in this framework [2], requires responsive temperature tracking to allow effective control of the thermal status of the microprocessor, and, furthermore, the cooling fans' speed regulation is also based on continuous temperature monitoring [3-5]. Another field that has featured remarkable growth in recent years is that of Micro-Electro-Mechanical Systems (MEMS) [6]; the employment of these devices for Internet of Things (IoT) applications, supported by a parallel technological development, has shifted the research focus towards devices that are increasingly robust to environmental effects. One of the main challenges is, indeed, to mitigate the impact of the ambient temperature on the performance of these devices; the micro-structures used as sensing elements suffer from a significant thermal spread causing a degradation of the reliability of the sensed quantity. For this reason, high-precision MEMS devices also require temperature tracking to compensate for the drift of their parameters [7-11]. Integrated temperature sensors are also used for clinical applications [12-14]; devices that provide high accuracy monitoring in the human body temperature range are needed for the detection of atypical biomedical conditions. Lastly, since temperature is a fundamental physical parameter of both industry and everyday life, on-chip temperature measurements are also combined with radio-frequency identification (RFID) tags in several applications: monitoring of the food cold chain [15,16], environmental monitoring [17,18], supply chain management of healthcare products [19], animal healthcare monitoring [20] and many more. This paper, besides proposing a State-of-the-Art analysis, reviews the different design techniques employed for all the presented on-chip temperature sensing applications and is organized as follows: Section 2 addresses the basics of TDCs, taking all their relevant parameters into account and explaining, through four different subsections, the different design techniques adopted so far. Section 3, instead, focuses on the main trends and trade-offs that emerge from the analysis of the previous section; its goal is to provide an overview of the TDC features' evolution over more than twenty years of research activity and to deliver to the reader a useful set of performance considerations to discerningly start a new design in this framework or simply to enter more deeply into the world of TDCs. Section 4 concludes the paper, highlighting the main introduced concepts with a brief recap.
Temperature-to-Digital Converters: Theory and Design Techniques
As seen in the introduction, there are many applications requiring on-chip temperature sensing, concerning several systems in the microelectronics field; despite their wide range, all the reported examples [3-5,7-20] have one important feature in common: they provide temperature information in the form of digital data. This is fundamental, as it makes them compatible with direct communication with digital signal processing (DSP) circuits that can easily handle the needed temperature information and, at the same time, reduces the complexity of the system in which they are embedded; for this reason, they are often referred to as smart temperature sensors [21] or as Temperature-to-Digital Converters (TDCs). It is important to specify that this category of temperature sensors was born with a cost-minimization perspective and that its development in the past two decades has consequently followed this line; even if, in principle, these fully integrated temperature sensors have significant limitations in terms of accuracy and sensing range with respect to other existing discrete sensors, their great success is related to their compatibility with large-scale production of low-cost products, being integrated within the system in which they operate. Figure 1 shows the conceptual diagram of a TDC. It is composed of an Analog Front-End (AFE), an Analog-to-Digital Converter (ADC) and a Digital Back-End (DBE). The TDC's input signal is temperature; the AFE, the first block of the chain, is responsible for sensing it, converting it into an electrical form (either in the voltage or in the current domain), and generating at its output the signals needed for the analog-to-digital conversion: a proportional-to-absolute-temperature (PTAT) signal, which contains the information to be converted, and a reference (REF) signal, which in principle is a Zero-Temperature-Coefficient (ZTC) signal, with respect to which the conversion is carried out. These signals enter the ADC, which produces PTAT digital words with an intrinsic n-bit resolution and with a data rate (f_S) that depends on the converter architecture; this operation is typically performed without the use of sample and hold (S/H) circuits because of the relative slowness of the temperature signal with respect to the common conversion rates of ADCs. The n-bit codes are then processed by the DBE which, in effect, acts as an oversampling filter; it refines their intrinsic resolution by performing decimation and filtering with a certain OverSampling Ratio (OSR) in order to obtain the output codes of the TDC, which feature a higher resolution at the cost of a lower data rate (f_S/OSR).
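To make the role of the DBE concrete, the following Python sketch (an illustration written for this description, not code from any cited work) shows how averaging blocks of OSR raw ADC codes trades data rate for resolution; the code value and noise level are arbitrary assumptions.

```python
import random

def dbe_decimate(adc_codes, osr):
    """Average blocks of OSR raw ADC codes, emitting one refined
    output code per block (data rate drops from f_S to f_S/OSR)."""
    return [sum(adc_codes[i:i + osr]) / osr
            for i in range(0, len(adc_codes) - osr + 1, osr)]

# Toy 10-bit ADC samples of a quasi-static PTAT signal plus noise.
random.seed(0)
true_code = 512.3
raw = [round(true_code + random.gauss(0, 2)) for _ in range(256)]

refined = dbe_decimate(raw, osr=64)  # 4 output codes, lower noise
print(refined)
```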
The resulting time interval required to perform a single temperature-to-digital conversion is therefore given by

T_conv = OSR / f_S. (1)

Considering the TDC's minimum working supply voltage (V_sy) and the current drained from it (I_sy), its conversion energy can be defined as

E_conv = V_sy · I_sy · T_conv. (2)

It is a parameter of paramount importance together with the TDC's resolution (Res), which is the minimum temperature difference that can correctly be detected and which is determined by the quantization noise of the ADC, by the electronic noise (thermal, flicker, etc.) and by T_conv itself. Another parameter of interest is the temperature inaccuracy (IA); in absolute form, it is a statistical evaluation of the worst case (or ±3σ) temperature error and, introducing the TDC conversion range (T_range), its relative form can be expressed as

IA_rel = IA / T_range. (3)

This quantity is strongly dependent on the number of controlled temperatures at which the TDC gets trimmed (n_trim) [22,23], an unavoidable procedure in most applications; the trimming process, which basically consists of calibrating the sensed temperature error, is a cost of great relevance in the TDC framework, as heating and cooling the devices to be trimmed is a very time consuming operation. For this reason, n_trim should be minimized to preserve the cost-effectiveness of the sensor.
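As a numerical illustration of Eqs. (1)-(3), the short Python sketch below evaluates the conversion time, conversion energy and relative inaccuracy for a hypothetical sensor; all parameter values are assumptions chosen only to show plausible orders of magnitude.

```python
def conversion_metrics(f_s, osr, v_sy, i_sy, ia_abs, t_range):
    """Evaluate Eqs. (1)-(3): conversion time, conversion energy,
    and relative inaccuracy of a TDC."""
    t_conv = osr / f_s             # Eq. (1), seconds
    e_conv = v_sy * i_sy * t_conv  # Eq. (2), joules
    ia_rel = ia_abs / t_range      # Eq. (3), dimensionless
    return t_conv, e_conv, ia_rel

# Illustrative numbers for a low-power sensor (assumed, not measured).
t_conv, e_conv, ia_rel = conversion_metrics(
    f_s=10e3, osr=256, v_sy=1.2, i_sy=5e-6, ia_abs=0.2, t_range=165.0)
print(f"T_conv = {t_conv*1e3:.1f} ms, "
      f"E_conv = {e_conv*1e9:.1f} nJ, IA_rel = {ia_rel:.2%}")
```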
Due to the presence of this great variety of parameters of interest, several Figures of Merit (FoMs) have been introduced to provide TDC performance metrics in a synthetic way and from specific perspectives. Equations (4) and (5), presented in [24], involve the TDC conversion energy together with its resolution and its relative inaccuracy, respectively:

FoM_R = E_conv · Res², (4)

FoM_IA = E_conv · IA_rel². (5)

Equation (6), instead, addresses only the production cost of the TDC (Area is the active silicon area of the device, F is the feature size of the adopted technological process), while (7) provides a global overview of the TDC performance [25]. Several ADC architectures have been used in the literature within TDCs; there are examples of Flash-based TDCs [26,27], SAR-based ones [11,28], Σ∆-based ones [5,14], time/frequency-domain-based ones [22,29] and hybrid solutions [30,31]. It is important to notice that even if, conceptually, Flash ADCs and SAR ADCs are faster for a given quantization noise and clock frequency, to overcome the limits imposed by the presence of thermal noise their output codes still need to be processed by the DBE; therefore, for the same amount of power consumption, they are not automatically at a higher energy efficiency level with respect to the Σ∆-based or the time/frequency-domain-based alternatives. Actually, thanks to their versatility, Σ∆ converters are the most used ones in the case of AFEs generating static temperature-dependent signals, while time/frequency-domain-based ADCs are preferred in the case of dynamic temperature-dependent signals.
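For instance, the resolution FoM of Eq. (4) can be computed as follows; the numbers are hypothetical and simply reuse the example above.

```python
def resolution_fom(e_conv_joules, res_kelvin):
    """Resolution FoM of Eq. (4): conversion energy times the square
    of the resolution; lower is better (often quoted in pJ*K^2)."""
    return e_conv_joules * res_kelvin ** 2

fom = resolution_fom(e_conv_joules=153.6e-9, res_kelvin=0.01)
print(f"FoM_R = {fom*1e12:.2f} pJ*K^2")  # -> 15.36 pJ*K^2
```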
It makes sense to categorize TDCs on the basis of the sensing device/technique adopted within the AFE; four main categories can be identified: BJT-based TDCs (Section 2.1), MOS-based TDCs (Section 2.2), resistor-based TDCs (Section 2.3) and Thermal Diffusivity (TD) based TDCs (Section 2.4). The next subsections address in detail the peculiarities of each of these sensing techniques.
BJT-Based TDCs
On-chip temperature sensing can be achieved exploiting the thermal behaviour of the base-to-emitter voltage (V_BE) of bipolar transistors operated in the forward-active region [4,5,11,13,16,19,26,32-47]. It can be expressed as

V_BE = (kT/q) · ln(I_C / I_S), (8)

where k is the Boltzmann constant, T is the absolute temperature, q is the magnitude of the elementary charge, I_C is the collector current and I_S is the bipolar saturation current which, typically, is in the fA to pA range, is proportional to the emitter area and exhibits a strong temperature dependence (as a rule of thumb, it doubles for every 5 K rise). This provides a complementary-to-absolute-temperature (CTAT) voltage variation with the well-known average slope of about −2 mV/K. Considering a pair of BJTs operating at different collector currents and/or having different emitter areas, a proportional-to-absolute-temperature (PTAT) signal is obtained by taking the difference of their base-to-emitter voltages. According to the scheme and the notations of Figure 2, the following expression holds:

∆V_BE = V_BE1 − V_BE2 = (kT/q) · ln(a · b), (9)

where a and b are the emitter area and collector current ratios, respectively.
Figure 2. BJT pair for ∆V_BE signal generation.
Referring to Figure 1, a ∆V BE -dependent signal can be used as the PTAT one while the REF signal can be generated by means of a proper combination of V BE -dependent and ∆V BE -dependent contributions [48].
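The behaviour described by Eqs. (8) and (9) can be sketched numerically as below; the saturation-current model (doubling every 5 K, as per the rule of thumb above) and all device values are illustrative assumptions, not data from the cited works. The final comment hints at how a reference signal can be assembled from the two contributions.

```python
import math

K_B, Q = 1.380649e-23, 1.602176634e-19

def v_be(t, i_c, i_s0, t0=300.0):
    """CTAT base-emitter voltage, Eq. (8); I_S doubling every ~5 K is
    modelled crudely to reproduce the ~-2 mV/K average slope."""
    i_s = i_s0 * 2.0 ** ((t - t0) / 5.0)
    return (K_B * t / Q) * math.log(i_c / i_s)

def delta_v_be(t, a, b):
    """PTAT voltage of Eq. (9) for emitter-area ratio a, current ratio b."""
    return (K_B * t / Q) * math.log(a * b)

for t in (250.0, 300.0, 350.0):
    d = delta_v_be(t, a=8, b=2)  # a*b = 16 -> ~0.24 mV/K PTAT slope
    v = v_be(t, i_c=1e-6, i_s0=1e-15)
    print(f"T = {t:.0f} K: dV_BE = {d*1e3:.2f} mV, V_BE = {v*1e3:.1f} mV")

# A quasi-ZTC reference can then be approximated (bandgap principle) as
# V_REF = V_BE + K * dV_BE, with K chosen to cancel the CTAT slope.
```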
BJT-based TDCs are the most common ones thanks to the good intrinsic accuracy of bipolar transistors [49]; this leads to temperature sensors requiring at most one trimming point to achieve inaccuracy values that other sensing techniques reach only after two or more trimming points. This feature is essential from the cost-effectiveness point of view and, together with the availability of bipolar transistors (even if parasitic) within most CMOS processes, is the reason for the widespread employment of these kinds of devices for on-chip temperature sensing.
MOS-Based TDCs
Another possibility for integrated temperature sensing is to rely on the thermal variations of MOS devices; one option is to exploit the significant temperature dependence offered by the gate-to-source voltage (V_GS) of transistors operated in the subthreshold region [29,50-53]:

V_GS = V_th + n · (kT/q) · ln(I_D / I_D0), (10)

where V_th is the threshold voltage, n depends on the MOS structure and I_D0 is the drain current for V_GS = V_th. Besides being directly proportional to the transistor aspect ratio (W/L), I_D0 increases with temperature almost parabolically, giving rise to a CTAT behaviour for V_GS; in absolute value, it exhibits a slightly lower average slope (about −1.5 mV/K [49]) with respect to the previously introduced V_BE slope (about −2 mV/K). Similarly to the BJT case, considering a pair of MOSFETs biased at different drain currents and/or having different aspect ratios, a PTAT signal is obtained by taking the difference of their gate-to-source voltages. According to the scheme and the notations of Figure 3, the ∆V_GS signal can be expressed as

∆V_GS = n · (kT/q) · ln(a · b), (11)

where a and b are the W/L and drain current ratios, respectively. It is interesting to notice that the PTAT sensitivity offered by subthreshold-operated MOS devices benefits from the presence of the n coefficient if compared to the bipolar case; considering that this technology-dependent parameter is intrinsically larger than 1, for the same a and b ratios, the ∆V_GS temperature sensitivity is intrinsically higher than the ∆V_BE one [49]. Also in this case, a reference signal can be generated by combining V_GS-dependent and ∆V_GS-dependent contributions. Another option to exploit the temperature dependence of MOS devices for on-chip sensing is to consider the propagation time (t_p) of CMOS inverters; as shown in (12), this parameter depends on many variables such as the adopted supply voltage (V_DD), the threshold voltage (V_th) and the size (W, L) of the devices constituting the inverter, the carrier mobility (µ), the oxide capacitance (C_ox) and the capacitance (C_L) of the load to be driven:

t_p ≈ (C_L · V_DD) / [µ · C_ox · (W/L) · (V_DD − V_th)²]. (12)

In particular, V_th and µ are functions of temperature that, if properly exploited, may lead to an effective sensing.
The first way to achieve a t_p-based temperature-to-digital conversion is to rely on a delay line [29,30,54-57], as shown in Figure 4a. A clock signal running at a reference frequency (f_ref) is passed through a delay line composed of N inverters and is compared with an undelayed version of itself; this gives rise to temperature-dependent time intervals which can be expressed as

∆t(T) = N · t_p(T), (13)

and which are processed by a time-to-digital converter that, hence, generates temperature-dependent digital words (D_out). The second possibility, instead, is to exploit the thermal behaviour of ring oscillators [22,52,53,58-64], as shown in Figure 4b, in which the t_p temperature dependence impacts the oscillation frequency (f_osc) as shown by the following expression:

f_osc(T) = 1 / (2 · N · t_p(T)). (14)

The signal produced by the oscillator gets processed by a counter (clocked at f_ref) which generates temperature-dependent digital codes (D_out) depending on the oscillation count. In addition to this, in 2019, new interesting MOS-based techniques were proposed, opening the doors for sub-nW TDC design. An innovative temperature sensing principle based on the gate-leakage current of MOS devices was adopted in [65,66], resulting in an exceptionally low power consumption. The t_p-based and the leakage-based approaches offer outstanding performance in terms of energy/conversion but typically exhibit poor linearity and accuracy.
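A minimal numerical sketch of the ring-oscillator read-out of Figure 4b and Eq. (14) is given below; the linear delay model and all constants are hypothetical, serving only to show how the counter output D_out decreases as t_p grows with temperature.

```python
def t_p(temp_c, t_p0=50e-12, tc=0.1e-12):
    """Toy inverter delay model: t_p grows linearly with temperature
    (the true dependence, via V_th and mobility, is more complex)."""
    return t_p0 + tc * (temp_c - 25.0)

def ring_osc_code(temp_c, n_stages=31, t_gate=1e-3):
    """Eq. (14): f_osc = 1/(2*N*t_p); the counter output D_out is the
    number of oscillation periods within a reference gate time."""
    f_osc = 1.0 / (2 * n_stages * t_p(temp_c))
    return int(f_osc * t_gate)

for temp in (0.0, 25.0, 85.0):
    print(f"{temp:5.1f} C -> D_out = {ring_osc_code(temp)}")
```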
Resistor-Based TDCs
Integrated resistors also exhibit a significant thermal variability that makes them suitable for on-chip temperature sensing. Considering a first order approximation, their resistance value can be expressed as

R(T) = R_0 · [1 + TC · (T − T_0)], (15)

where R_0 is the resistance value at a reference temperature T_0 and TC is the first order temperature coefficient. Tables 1 and 2 report realistic TC values for some kinds of resistors in 0.18-µm and 65-nm CMOS processes, respectively.
Table 1. First order TCs of different resistor types in a standard 0.18-µm CMOS process [67].
Table 2. First order TCs of different resistor types in a 65-nm CMOS process [68].
According to Figure 5a, temperature information is contained in the V_sig voltage, which can be expressed as

V_sig(T) = V_DD · [R_β(T) − R_α(T)] / [R_α(T) + R_β(T)], (16)

where, denoting with α and β the first order TCs of the two employed resistor types and with ∆T = T − T_0,

R_α(T) = R_0 · (1 + α · ∆T), (17)

R_β(T) = R_0 · (1 + β · ∆T), (18)

so that, for a bridge balanced at T_0,

V_sig(∆T) = V_DD · (β − α) · ∆T / [2 + (α + β) · ∆T]. (19)

Figure 6 shows V_sig as a function of temperature for several (|α|; |β|) combinations in a symmetrical 100 K ∆T range; to maximize the Wheatstone bridge temperature sensitivity, the |β|/|α| ratio should be selected as high as possible according to the resistor availability of the adopted technology. On the other hand, as pointed out by Table 3 and as can be easily derived from (19), the linearity of the thermal response degrades moving away from the |α| = |β| optimal case (it is important to mention that the reported considerations do not take any second or higher order contribution to the resistance temperature variability into account).
Figure 6. V_sig as a function of ∆T in different (|α|; |β|) conditions for α = +1.5 · 10⁻³ K⁻¹, a realistic value for n+ diffusion integrated resistors [67].
Table 3. Additional details regarding the curves of Figure 6.
¹ Evaluated as the norm of residuals between V_sig and its linear fit across the considered ∆T range.
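The sensitivity/linearity trade-off of Eq. (19) summarized in Table 3 can be reproduced with a few lines of Python; α is set to the realistic n+ diffusion value quoted above, while the |β|/|α| ratios and the supply voltage are illustrative assumptions.

```python
def v_sig(dT, alpha, beta, vdd=1.0):
    """Bridge output of Eq. (19), with both arms balanced at T0."""
    return vdd * (beta - alpha) * dT / (2.0 + (alpha + beta) * dT)

alpha = 1.5e-3                    # n+ diffusion, positive TC [67]
for ratio in (1.0, 2.0, 3.0):     # |beta|/|alpha| combinations
    beta = -ratio * alpha         # negative-TC arm (e.g., a poly resistor)
    lo, hi = v_sig(-50, alpha, beta), v_sig(50, alpha, beta)
    # |alpha| = |beta| gives a symmetric (linear) response; larger
    # ratios raise the sensitivity but make the output asymmetric.
    print(f"|b|/|a| = {ratio:.0f}: V_sig(-50/+50 K) = "
          f"{lo*1e3:+.1f} / {hi*1e3:+.1f} mV")
```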
TDCs based on RC [68,73,74] and Wien-bridge [67,75-78] filters, instead, take advantage of the temperature variations of their transfer functions; in both cases, as can be deduced from Figure 5b,c, the temperature dependence of the employed resistors causes an alteration of their phase response that can be exploited to achieve the desired temperature-to-digital conversion. This is achieved by driving the considered structures with signals oscillating close to the fundamental frequency of the filters at room temperature (ω_0 = 1/(R_0·C) in both cases) and processing their output by means of appropriate phase-to-digital conversion circuits. Given the RC transfer function H_RC(jω) = 1/(1 + jω·R(T)·C), its temperature-dependent phase shift can be expressed as

φ_RC(T) = −arctan(ω · R(T) · C). (20)

Figure 7a shows the RC phase response for different resistance values in the ±20% range, where the selected colors conceptually refer to a positive-TC resistor (the warmer the color, the higher the temperature); since the most effective temperature impact on the phase occurs at ω = ω_0, Figure 7b reports the phase shift generated by the RC filter as a function of the resistance variation with respect to the room temperature value (R_0).
The Wien-bridge transfer function is instead given by

H_WB(jω) = jω·R(T)·C / [1 + 3·jω·R(T)·C + (jω·R(T)·C)²], (21)

and its temperature-dependent phase shift can be expressed as

φ_WB(T) = arctan{[1 − (ω·R(T)·C)²] / [3·ω·R(T)·C]}. (22)

In keeping with the graphs reported for the RC case, Figure 8a shows the Wien-bridge phase response for the same resistance value variation range, while Figure 8b reports the resulting phase shift at ω_0. For the same resistance variation and capacitor value (C), the Wien-bridge filter achieves a better phase sensitivity to temperature and better linearity compared to the RC one, at the cost of a doubled occupied area; considering that, typically, the size of the filter is not the limiting element in the TDC area breakdown, Wien-bridge filters are the preferred choice over RC ones.
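The phase responses of Eqs. (20) and (22) can be compared numerically as follows for the ±20% resistance swing considered in Figures 7 and 8; x = ωR(T)C is evaluated at ω = ω_0 = 1/(R_0·C), so x coincides with R/R_0.

```python
import math

def phi_rc(x):
    """RC low-pass phase, Eq. (20), with x = w*R(T)*C."""
    return -math.atan(x)

def phi_wb(x):
    """Wien-bridge phase, Eq. (22), with x = w*R(T)*C."""
    return math.atan((1.0 - x * x) / (3.0 * x))

for x in (0.8, 1.0, 1.2):  # R/R0 swing of +-20%
    print(f"R/R0 = {x:.1f}: RC = {math.degrees(phi_rc(x)):+6.2f} deg, "
          f"WB = {math.degrees(phi_wb(x)):+6.2f} deg")
```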
As will be addressed in Section 3.4, the TDCs exploiting the presented resistor-based temperature sensing techniques are undoubtedly the best in class from the energy efficiency point of view but typically are less accurate than BJT-based solutions and more power hungry than MOS-based solutions.
TD-Based TDCs
The last considered category is that of thermal diffusivity TDCs [79-84]. These on-chip sensors exploit measurements of the thermal diffusivity of silicon (D_Si), which exhibits a considerable temperature dependence and, moreover, does not suffer from process spread. This quantity can be sensed by means of the electrothermal filter (ETF) shown in Figure 9. A heater, which can be realized by a diffusion resistor, is driven by a square wave (at a frequency f_drive) and consequently generates heat pulses which diffuse to a neighboring thermopile placed at a distance s; these pulses are affected by a delay and by an attenuation which are determined by D_Si which, in turn, is a function of temperature (∝ T^−1.8) [81]. For this reason, the phase of the voltage sensed by the thermopile (V_sense) is sensitive to temperature and, according to [84], can be expressed as

φ_ETF(T) ≈ s · √(π · f_drive / D_Si(T)). (23)

With similar phase-to-digital conversion solutions as the ones needed for the previously introduced RC-based and Wien-bridge-based TDCs, φ_ETF can be digitized, thus generating temperature-dependent digital codes. The major drawback of this kind of sensing technique is the large amount of power (>1 mW) burnt to drive the heater: its energy inefficiency makes it unsuitable for the majority of battery-powered applications. Nevertheless, TD-based TDCs offer a really remarkable accuracy performance, especially considering that, in many cases, no trimming procedure is required; this aspect will be further explored in Section 3.2.
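A toy model of the ETF phase of Eq. (23) is sketched below; the heater-thermopile distance s, the drive frequency and the room-temperature diffusivity are assumed values for illustration only, with D_Si scaled as T^−1.8 around 300 K.

```python
import math

def phi_etf(temp_k, s=20e-6, f_drive=85e3, d_300=8.8e-5):
    """ETF phase of Eq. (23); d_300 [m^2/s] and the geometry s are
    illustrative assumptions, with D_Si(T) ~ T^-1.8."""
    d_si = d_300 * (temp_k / 300.0) ** -1.8
    return s * math.sqrt(math.pi * f_drive / d_si)

for t in (250.0, 300.0, 350.0):
    print(f"T = {t:.0f} K: phi_ETF = {math.degrees(phi_etf(t)):.1f} deg")
```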
State-of-the-Art Review and Design Trends
Over the past two decades, more than 150 TDC works have been published, each of which can be assigned to one of the four categories introduced in Section 2. A valuable survey [85] that keeps track of all these works has been made available by Prof. Makinwa from TU Delft and has been adopted as the dataset for all the following analyses and considerations. The time evolution and the performance peculiarities of the four considered TDC types are investigated in the next subsections, each addressing a primary parameter of interest of TDCs: resolution (Section 3.1), inaccuracy (Section 3.2), conversion energy (Section 3.3), energy efficiency (Section 3.4) and silicon area (Section 3.5). All the reported trend-lines have been produced by a log-scale adapted smoothing spline method based on the geometric mean of the considered parameter values for each year.
Resolution
As introduced in Section 2, the resolution of a TDC is the minimum temperature difference that can correctly be detected; it is a function of the intrinsic quantization noise of the ADC used to perform the temperature-to-digital conversion, of the amount of electronic noise that affects the TDC output and of the DBE processing type. Figure 10 reports the resolution of the considered works as a function of the publication year for all of the four studied categories of sensors; it can be noticed that the resolution performance of TDCs is basically trend-less since its requirements are strongly applicationdependent: the resolution specification is of prime importance in the cases in which the sensing goal is to precisely detect temperature variations but a moderate value can be acceptable in the case of accuracy-oriented designs, in favour of a conversion energy saving. In addition to this, it can be observed that the first examples of resistor-based TDCs have been introduced just starting from 2010 and, a few years later, a series of high resolution works exploiting this sensing approach has been proposed, actually showing their greater potential regarding the resolution parameter. This feature can be further appreciated, considering Figure 11; the resolution of each item shown in Figure 10 has been collected to build a bar plot organized on the basis of five decades: maintaining the sensing-type distinction, it provides an overview of how the resolution performance of all the considered works is distributed, confirming the advantage of resistor-based TDCs. It should be taken into account that, in principle, resolution can always be improved by increasing the DBE OSR at the cost of a higher conversion time (1) and that, therefore, the performance limitation of the other kinds of sensing approaches is actually related to their worse energy efficiency, a parameter that will be addressed in detail in Section 3.4.
Inaccuracy
In the same vein as what was presented for resolution, Figure 12 shows the relative inaccuracy, defined in (3), as a function of the publication year for the TDCs surveyed in [85]. Also in this case, a trend-less behaviour can be noticed, once again because of the application-dependency of the accuracy specification of TDCs. For example, TDCs designed for clinical applications require absolute inaccuracy values on the order of ±0.1 °C, while those used to track the temperature status of microprocessors or to compensate for the thermal drift in MEMS resonators typically require an inaccuracy of about ±1 °C or even worse. In order to evaluate the accuracy performance potential of the four considered sensing techniques, it is of paramount importance to take the number of trimming points into account since, as a rule of thumb, the transition from the untrimmed condition to the 1-point trimming one typically provides a benefit of at least a factor of two to the accuracy of the sensor, while the addition of a trimming point at a second temperature generally improves the TDC accuracy by at least an extra factor of four. For this reason, the inaccuracy bar plot, analogous to the resolution one of Figure 11, has been split into three plots: Figure 13 addresses the untrimmed works, Figure 14 focuses on the TDCs with a single-temperature trimming, while Figure 15 considers the works with at least two trimming points.
It can be seen that, from the accuracy point of view, the TD-based TDCs are the best in class, followed by the BJT-based ones; they are, indeed, the only types of sensors that can achieve relatively good accuracy without the need of being trimmed (Figure 13), a huge advantage in terms of cost-effectiveness. MOS-based TDCs and resistor-based TDCs, instead, require at least one trimming point (in most cases two, Figure 15) to offer acceptable performance and are therefore undesirable for accuracy-oriented designs. On top of this, it is important to remember that, in addition to the spread due to the sensing element, inaccuracy is also determined by the spread of all the components present in the device [49]; consequently, it may be limited not by the sensing technique choice but by the matching performance of the entire circuitry of the AFE and of the ADC. In this framework, a key element to take into account is the silicon area of the TDC (addressed in Section 3.5): the smaller its active area, the harder it is to achieve acceptable accuracy values.
Conversion Energy
The growth of the IoT market and the increasing number of battery-powered systems requiring on-chip temperature sensing have induced a really strong trend when it comes to TDC conversion energy (2). This parameter, which is a full-fledged measure of the energy price to pay to achieve a single temperature-to-digital conversion, is crucial to ensure the highest battery lifetime possible or even to allow the operation of energy-harvesting-based devices such as [86], in which temperature-dependent digital codes are generated with just a few picojoules of energy. Figure 16 reports the conversion energy values of the same works analyzed in the previous subsections as a function of their publication year. In this case, a trend towards lower values is definitely visible; the TDC conversion energy exhibits a reduction of about a factor 10 every five years, a clear direction that allows for predicting the future evolution of these kinds of devices. As reported for the resolution and the inaccuracy cases, Figure 17 shows the conversion energy performance distribution across four orders of magnitude and with the different sensing-types taken into account. It can be noticed that, undoubtedly, TD-based TDCs, due to the power consumed by the heater, require the highest conversion energy while the other three types exhibit quite similar performance. Similarly to the resolution discussion (Section 3.1), it is important to consider that, naturally, the conversion energy can be reduced by accepting a poorer temperature resolution and therefore, also in this case, the reported conversion energy values are linked to the efficiency of the different sensing techniques that will be addressed in the next subsection.
Energy Efficiency
Both Sections 3.1 and 3.3 have introduced the resolution vs. conversion energy trade-off. The energy efficiency of a TDC is a metric of what resolution can be achieved for a given conversion energy or, on the other hand, what conversion energy is needed to achieve a target resolution. To determine what the trade space of a certain TDC is and, consequently, to determine its energy efficiency, it is useful to consider the resolution FoM introduced in (4), in which Res is squared because it is usually limited by thermal noise and therefore, to achieve an improvement of a factor two of it, a four times larger conversion time is required and so on. Figure 18 shows the time evolution of the energy efficiency of the same considered works of the previous subsections. Three different phases can be identified: at first, approximately until 2010, there is a horizontal phase in which the novelty of such kind of integrated sensors has resulted in TDCs without the primary target of energy efficiency but simply aiming at a proper operation of the device (functionality phase). Then, from 2010 to 2019, the trend starts to bend down, taking a definite direction with an improvement of about a factor 10 every 3 years (performance phase); lastly, from 2020 onwards, a significant breaking of the trend-line can be observed, which indicates the difficulty for a further progress of the TDC energy efficiency (saturation phase). Similarly to what has been proposed for the previously analyzed TDC parameters of interest, the bar plot of Figure 19 provides an overview of how the different kinds of considered sensing techniques are distributed in terms of energy efficiency. It is clear that, from this point of view, the best performing sensors are the resistor-based ones; BJT-based and MOS-based TDCs offer quite similar performance while, as previously introduced, TD-based TDCs are the most energy-inefficient ones. Figure 19. TDC energy efficiency performance distribution with sensing-type distinction.
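The resolution-energy trade space implied by the FoM of Eq. (4) can be illustrated with a short computation: at constant FoM, every halving of the resolution costs a factor of four in conversion energy. The FoM value used below is hypothetical.

```python
def energy_for_resolution(fom, res):
    """At a fixed resolution FoM (Eq. (4)), the energy needed per
    conversion scales with the inverse square of the target resolution."""
    return fom / res ** 2

fom = 1e-12                     # 1 pJ*K^2, illustrative value
for res in (0.1, 0.05, 0.025):  # halving the resolution twice
    print(f"Res = {res*1e3:4.0f} mK -> E_conv = "
          f"{energy_for_resolution(fom, res)*1e9:6.2f} nJ")
```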
Silicon Area
Finally, the occupied silicon area of the considered TDC works is taken into account; still bearing in mind that it usually offers a direct trade-off with the temperature sensing accuracy performance, the compactness of the TDC is a fundamental requirement from a production cost minimization perspective. Accordingly, in the last two decades, the size reduction trend has been quite significant, as shown in Figure 20: the silicon areas of the oldest reported works, in the range of 1 mm², have progressively given way to designs featuring active areas of a few hundred µm².
Once more, Figure 21 shows how the considered TDCs are distributed in terms of active area and sensing type. In this case, as will become clearer in the wrap-up proposed in Section 4, the sensors that, on average, offer the best compactness are the MOS-based ones, followed by the TD-based ones; resistor-based and BJT-based devices, even if there are exceptional cases such as [87] or [28], generally require a larger area.
Conclusions
This paper reviewed the TDCs State-of-the-Art, initially browsing the main on-chip temperature sensing techniques (Section 2) and then highlighting the most significant trends and trade-offs (Section 3).
To summarize the proposed considerations, Table 4 reports performance indicators for each of the four studied sensing techniques and for each of the parameters of interest previously analyzed, with inaccuracy differentiated according to the number of adopted trimming points. For every entry, the geometric mean of the corresponding values of the TDC works discussed in Section 3 has been computed and considered as a meaningful indicator, being based on two decades of research activity. For each parameter, the best indicator has been highlighted in green so that the most attractive features of each sensing category can be easily identified; it is interesting to note that, on the basis of a TDC's design specifications, each of the sensing techniques could be the optimal choice. Indeed, BJT-based sensors exhibit the best 1-pt trimmed inaccuracy indicator, MOS-based sensors have the lowest conversion energy one and offer the highest degree of compactness, resistor-based sensors feature the best resolution, energy efficiency and accuracy after at least 2 trimming points, while TD-based sensors exhibit the lowest untrimmed inaccuracy. Starting from the results collected in Table 4, it has been possible to build a spider chart (Figure 22) to provide a graphical representation of the considerations presented in this work, so as to intuitively and immediately figure out the strengths and the weaknesses of the different categories of TDCs. To effectively design the spider chart, all the values reported in Table 4 have been normalized with respect to the best one for each parameter of interest (the relative inaccuracy values have been merged according to the coefficients of the rule of thumb introduced in Section 3.2); then, considering that all the parameters are of the lower-is-better kind, they have been converted to a higher-is-better mode with a simple inversion and, finally, have been plotted adopting log-scaled axes to make differences of orders of magnitude still appreciable. Given the extent of the corresponding pentagon, the chart clearly illustrates how promising resistor-based TDCs are and motivates the high number of works exploiting this sensing technique published in the last four years, as shown in Section 3. Nevertheless, these kinds of TDCs have considerable linearity issues and, in most cases [28,31,67,68,70-72,76-78], the employment of polynomial nonlinearity error correction techniques is mandatory; this limit, considering that linearity is a crucial parameter, for example, in MEMS thermal drift compensation applications, may guide the sensing-type choice to the presented alternatives.
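The normalization used to build the spider chart can be expressed compactly in Python as below; the two-category, two-parameter table is a made-up toy, not the actual indicators of Table 4.

```python
import math

def spider_scores(table):
    """Normalize each (lower-is-better) parameter to its best value,
    invert to a higher-is-better score and take log10, mirroring the
    spider chart construction described above."""
    params = table[next(iter(table))].keys()
    scores = {cat: {} for cat in table}
    for p in params:
        best = min(vals[p] for vals in table.values())
        for cat, vals in table.items():
            ratio = vals[p] / best               # >= 1, best -> 1
            scores[cat][p] = -math.log10(ratio)  # 0 is best, log axis
    return scores

# Toy indicators: conversion energy [nJ] and resolution [mK].
table = {"BJT": {"E": 10.0, "Res": 20.0},
         "Resistor": {"E": 15.0, "Res": 1.0}}
print(spider_scores(table))
```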
In conclusion, the message is that, since each TDC type excels in a different parameter of interest, the sensing technique should definitely be selected on the basis of the requirements of the specific application for which the TDC is designed; there is no a priori winner. Finally, the feeling resulting from this review is that the research interest in this field will remain strong in the next several years thanks to a constant need for on-chip temperature sensing in a wide variety of applications and to inherently increasingly challenging requirements. Data Availability Statement: Data supporting the reported results can be found at [85].
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 2022-11-23T16:20:49.511Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "5b37b2d0d0765acda0a0faf32ae031a83dcf0962",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-666X/13/11/2025/pdf?version=1668847011",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "038a092d0d40864160ea10002cd73657a69baf45",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255433450 | pes2o/s2orc | v3-fos-license | Student-content interactions: Exploring behavioural engagement with self-regulated inquiry-based online learning modules
Technological innovations and changing learning environments are influencing student engagement more than ever before. These changing learning environments are affecting the constructs of student behavioural engagement in the online environment and require scrutiny to determine how to facilitate better student learning outcomes. Specifically, recent literature is lacking in providing insights into how students engage and interact with online content in the self-regulated environment, considering the absence of direct teacher support. This paper investigates how instructional design, informed by the factors relating to behavioural engagement, can influence the student-content interaction process within the fabric of inquiry-based learning activities. Two online learning modules on introductory science topics were developed to facilitate students’ independent study in an asynchronous online environment. The study revealed that students showed a high commitment to engaging and completing the tasks that required less manipulative and pro-active effort during the learning process. The findings also revealed that instructional guidance significantly improved the behavioural engagement for student groups with prior learning experience in science simulations and technology skills. This study highlights several issues concerning student engagement in a self-regulated online learning environment and offers possible suggestions for improvement. The findings might contribute to informing the practice of teachers and educators in developing online science modules applicable to inquiry-based learning.
Introduction
Student engagement is a prerequisite for learning and central to any successful educational experience. Contemporary research relating to online learning environments (Garrison & Cleveland-Innes, 2005;Meyer, 2014) highlights the key role of engagement in effective learning. Researchers have endeavoured to define and understand various dimensions of student engagement that apply across various contexts (Bond et al., 2020). Some have defined student engagement as a 'psychological process' implicated in learning (Marks, 2000); others have conceptualised it by considering what behaviours count as engagement (Harris, 2008) and what constructs need to be considered to define them (Sinatra et al., 2015). Nonetheless, commonly identified and investigated dimensions of engagement found in the literature focus on the behavioural, cognitive, and emotional aspects of this phenomenon (Fredricks et al., 2004). Behavioural, cognitive, and emotional engagement often include multidimensional constructs and are highly influenced by context and defined by a given conceptual framework (Reeve et al., 2019;Schmidt et al., 2018). Whether it is the construct or context, it has been argued that a detailed level of specificity is required to measure and conceptualize student engagement (Sinatra et al., 2015).
Within an online learning context, student engagement and interactivity are difficult to capture in precise detail (Rojas et al., 2016). One of the reasons for this difficulty resides in the complex nature of the online environment and the nature of the task involved. The online environment may involve multiple dimensions (Anderson, 2008;Mayer, 2019) as variables ( Fig. 1) and their combination requires careful consideration during instructional design.
A traditional didactic lecture might be defined by combining the far, left-hand conditions in the continua in Fig. 1, whereas an online, open, inquiry-based learning (IBL) environment involving individual students might be described by a combination of the far right-hand conditions. Mayer (2004) presents a strong case for avoiding unstructured, unguided inquiry environments where high cognitive load and lack of direction may result in negative outcomes on student learning.
Online students in remote, asynchronous, individual environments are likely to experience different interactions to those in face-to-face, teacher facilitated, synchronous contexts (such as the traditional classroom), and immediate individual feedback is easier to deliver in the latter. Also, an online environment offers a novel teaching and learning context which is highly influenced by the digital interface, available technologies, and the underpinning pedagogical design. Mayer (2019) proposes, after 30 years of research on online learning, that the instructional methods are central to student learning and are informed by a combination of behaviourist, cognitivist, and constructivist conceptions of learning. It is not the instructional media on its own that enables learning.
Key questions that educators might pose regarding students' engagement in online contexts include: What do students engage with? When do students engage? and How do students use educational technology in their learning? (Ding et al., 2017;Dixson et al., 2017;Sheeran & Cummings, 2018). To answer these questions, educational institutions are primarily dependent on the data from the learning management system (LMS) analytics. LMS analytics readily capture quantitative engagement data such as how many clicks, login time, submissions or reads were made by each student. Total time spent on the activities, the total number of completed tasks achieved, etc. are also available in LMS. However, while data analytics are conceptualised as indicators of student behavioural engagement, they are insufficient to define student engagement in detail, specifically the quality of the engagement related to learning. Researchers are keen to understand the nature of students' behavioural engagement with the technology resources while they study independently and how the underlying pedagogical design influences students' independent interactions during tasks. To address this issue, the following research question has been investigated within an inquiry-based learning context to enable further understanding of the nature of student exploration and interaction with the learning content: • To what extent does prior experience with interactive simulations influence student behavioural engagement during student-content interactions in the self-regulated inquiry-based online learning modules?
Inquiry-based learning pedagogies: disciplinary and contextual versatility
IBL has been described as a flexible pedagogical approach for active, student-centred forms of instruction in higher education, and its adoption is evident across all levels of education (Aditomo et al., 2013). Many consider IBL a pedagogy that is particularly relevant to science, technology, engineering, and mathematics (STEM) and science education disciplines through the focus on laboratory learning (Abd-El-Khalick et al., 2004). It is also evident in practice across multiple other disciplines such as psychology (MacKinnon, 2017), arts, humanities, and social sciences (Ahmad et al., 2014;Shih et al., 2010), vocational education (van der Graaf et al., 2020), and nursing and medical education (Theobald & Ramsbotham, 2019). IBL has been used in a wide range of educational levels and contexts such as K-12 classrooms (Aditomo & Klieme, 2020;Kubicek, 2005) and undergraduate and graduate level education (Chan & Pow, 2020;Lewis et al., 2021). IBL approaches have been particularly prevalent in STEM and health education disciplines. IBL methods are praised for fostering authentic learning experiences in practice-based disciplines and are well suited to the cognitive difficulties encountered in clinical practice (Levett-Jones et al., 2010;Tang & Sung, 2012). Research shows that IBL strategies promote group interaction and reflection on authentic practices (Horne et al., 2007) and provide an enjoyable experience for learning (Kirwan & Adams, 2009). Recently, Theobald and Ramsbotham (2019) employed an IBL approach using a clinical reasoning framework with scaffolding elements to examine undergraduate nursing students' interactions and teachers' teaching behaviours; they found that clinical reasoning scaffolds embedded within the IBL approach promote high levels of student engagement. The teacher also plays a key role in creating a favourable IBL environment for the students. Sotiriou et al. (2020) showed that, even in large-scale implementations at school level, teachers can create individual inquiry scenarios and monitor students' achievement when an IBL approach has been effectively integrated within the programme; their findings showed that individual inquiry scenarios help the high achievers more than the other students in complex problem-solving scenarios. Spronken-Smith and Walker (2010) recommend that teachers carefully consider the learning outcomes based on the level of instructional guidance provided during IBL; the teaching and research nexus can be strengthened through open, discovery-oriented inquiry, whereas highly structured activities scaffold the development of inquiry skills.
IBL pedagogies can facilitate multiple aspects of blended learning according to the instructional aims for student learning outcomes, and the integration of collaborative learning tools within IBL can create more effective teaching and learning processes involving student-student (S-S) interactions in higher education (Chan & Pow, 2020; Kopeinik et al., 2017).
Student diversity is an important consideration in instructional design. Laursen et al. (2014) studied the implementation of IBL in undergraduate mathematics courses and found that deep engagement and collaboration of ideas are the two key components contributing to students' active learning (Laursen et al., 2011). They also found that gender becomes an important variable in non-IBL courses, in which women gain lower mastery than men; these differences disappeared in IBL courses. This indicates that IBL approaches can potentially address courses that have historically offered inequitable access to learning for women. Archer-Kuhn (2020) examined the ways in which IBL has been utilized in higher education and how IBL approaches might be compatible with values that promote social justice, further arguing that IBL can uphold various social work principles and supports the linking of theory to practice during service-learning.
Research has incorporated modern technology tools and devices to facilitate the IBL approach in online learning contexts. For example, in a vocational education context, van der Graaf et al. (2020) used eye-tracking to examine the integration of informational texts and virtual labs during inquiry-based learning in science. Results showed a higher learning gain in domain knowledge when students frequently integrated informational texts and virtual labs during their virtual experiments, suggesting that such integration could compensate for the negative effects of lower prior knowledge. Becker et al. (2020) showed that mobile devices, such as tablets, used for multimedia learning in physical experimental processes enhance IBL processes; they further provide evidence that an IBL approach with multimedia integration leads to a significant reduction of extraneous cognitive load and greater conceptual knowledge of the subjects (Becker et al., 2020).
With all the disciplinary and contextual flexibility that inquiry-based pedagogy offers, there is ample evidence that when scaffolding support is given, students become actively involved in their learning. The research described above, however, is predominantly situated within the social constructivism paradigm, where interaction between peers and teachers is regarded as central. Student-teacher (S-T) and student-student (S-S) interaction, two facets of interaction theory and essential tenets of social constructivism, have received the majority of attention in the distance education literature (Xiao, 2017). The focus on student-content (S-C) interaction has received far less attention than it deserves in the interaction theory literature, especially when it comes to building a self-regulated learning environment in the absence of immediate human support (e.g., teachers, peers). Additional research is necessary to help us better understand student behavioural engagement in the setting of an independent online study environment. In this study, therefore, the S-C interaction process is explored further to understand students' behavioural engagement while learning science concepts. The lack of suitable pedagogical approaches has meant that researchers face significant challenges in developing an effective online learning environment for science inquiry (Lai et al., 2018). For instance, online environments are unable to deliver effective interaction or increase learning engagement without carefully planned learning tactics (Chen & Hwang, 2019). There remains a significant challenge in integrating technology to facilitate self-regulated learning processes and create a productive atmosphere for the S-C interaction process (Lai & Hwang, 2021). Through incorporating an instructional scaffolding technique into the design of the intervention, this current study aims to overcome this problem and facilitate students' self-regulation and behavioural engagement during science inquiry (Fig. 3).
In this study, we applied different levels of scaffolding support to explicitly synthesize student engagement with the learning content. The scaffolding framework is unique in that it gives researchers a focused lens through which to view how students actively engage with learning materials that place an emphasis on inquiry and science learning. The scaffolding framework represents an emerging pedagogical approach that assists researchers in understanding how teachers can design learning activities to encourage student self-regulation and engagement in online environments.
The POEE scaffolding strategy is demonstrated in practice through two online learning modules on introductory science concepts that include simulation-based science inquiry. It also provides an outline for instructors to create student-driven, independent online learning environments and to focus on how guided inquiry facilitated by technology can support student interactions and engagement with learning content. Moore (1989) proposed three important interactions for online learning environments: student-content (S-C), student-teacher (S-T), and student-student (S-S) interaction. Moore's categorization has become a widely accepted framework for the study of the interrelationships between teacher, student, and content in an online environment. Student behavioural engagement inherently plays a key role in the effectiveness of these relationships.
Behavioural engagement in the online context
In a traditional environment, the study of behavioural engagement relies on observation of student responses to physical and verbal cues provided by the teacher; however, these cues become less valuable in the online environment where students do not necessarily engage directly with their teachers and peers as part of the learning process (Lei et al., 2019). In an asynchronous online context, S-C interactions become the key indicator of student behavioural engagement. While visual indicators of physical engagement in the online learning process are not as evident as in face-to-face learning (Lei et al., 2019), Vytasek et al. (2020) infer that tracking students' digital artefacts can be used to indirectly understand their behavioural patterns. However, these analytics data often provide insufficient information to understand how students intrinsically regulate their behaviour or why they behave in a particular way during the S-C interaction.
In the online context, student behavioural engagement can be transacted either in an individual study space or in one that is socially oriented. Self-contained online modules or courses designed for self-directed study are common examples of learning activities in which students must engage individually. On the other hand, students might use the feedback and forum features of a learning management system to interact more socially with their teachers and peers (Baragash & Al-Samarraie, 2018). Within technologically mediated situations, this kind of engagement fosters social presence. Hong et al. (2019) argued that social presence demands active participation from the people involved in the online community. Research indicates that during collaborative tasks, students display interdependency and essentially synchronize their work through some level of time commitment (Romero & Lambropoulos, 2011; Yoo & Alavi, 2001). Furthermore, Yoo and Alavi (2001) found that group cohesion promoted students' drive to be involved in collaborative tasks; however, this commitment is only possible when collaborative options are included in the online environment. In contrast, it is much more difficult to facilitate student engagement in an independent study space when no social interaction and collaborative tasks are available.
To better understand student behavioural engagement in the context of an online study environment without synchronous teacher (S-T) or peer (S-S) interactions, it is important to explore the nature of the student-content (S-C) interactions. Two primary aspects of online S-C interactions that have been explored in research are: (a) total time spent (time-on-task) on the activity, and (b) quality time spent (nature of student participation) in the learning process (Christenson et al., 2008; Ding et al., 2017). In their study, Brenner et al. (2017) considered both participation (such as productive moves, clicks, and total tries) and time on task (such as total elapsed time) to determine students' behavioural engagement. Also, Romero and Barberà (2011) argued that both time-on-task and the quality of time spent could influence students' academic performance. Therefore, in this study, we combine both time-on-task and quality time spent (or participation) on the tasks to conceptualize students' behavioural engagement (see Fig. 2) during S-C interaction.
Previous studies have argued that several key behavioural engagement constructs need to be considered to understand student quality time spent in an online activity. Fredricks et al. (2004, 2016) concentrated on effort, persistence, attention, good conduct, and the absence of disruptive conduct to measure student behavioural engagement. Young (2010) argued that students with high effort and persistence are generally exhibiting high levels of behavioural engagement. However, it is undoubtedly more challenging to quantify students' good and disruptive behaviour in a remote learning environment. Fredricks et al. (2004) reported that students' completion of a designated task is a sign of behavioural engagement. Additionally, a systematic and organised interaction process essentially provides a qualitative dimension to student engagement (Garrison & Cleveland-Innes, 2005). Therefore, in this study, students' systematic efforts in the inquiry process are conceptualised as 'systematic investigation' and considered as one of the important constructs to measure students' quality time spent on a task. In brief, the three important constructs that can define quality time spent by a student on a task are: persistence, systematic investigation, and task accomplishment (Fig. 2).
Instructional method design
Critically, it has been found previously that students demonstrate poor participation when scaffolding or guidance is absent during online learning (Tallent-Runnels et al., 2006). Therefore, educators are continually seeking a viable solution for delivering an effective, guided, inquiry-based online learning environment. In recent times, sophisticated technology has offered educators the opportunity to explore and create more sophisticated guided learning environments. However, Meyer (2014) recommended that a strong pedagogical design is required to create and structure a learning environment that makes what students need to do and achieve transparent to them. The inquiry-based learning environment is exploratory by nature in science education; it requires active participation and self-regulation by students in the process of their knowledge construction (Sharples et al., 2015). Therefore, students are encouraged to engage in a series of inquiry cycles, formulating their reasoning on the problem under investigation during the process (Pedaste et al., 2015). In creating an effective pedagogical design, educators often categorise the student learning process in accord with the cycle of inquiry phases. One of the popular long-standing pedagogical strategies employed within science education is the predict-observe-explain (POE) pedagogical framework (White & Gunstone, 1992). The POE pedagogical framework supports instructional methods that enable students to work in phases. For example, students need to predict a phenomenon, perform an observation, and then explain the observed findings in relation to the initial prediction (Bilen et al., 2016). Other studies have also reported that the POE framework can be used to change students' initial misconceptions into correct ones (Ayvacı, 2013; Karamustafaoğlu & Mamlok-Naaman, 2015; Samsudin & Efendi, 2019), while supporting self-regulation (Al Mamun et al., 2020, 2022) in the inquiry process. Consequently, the predict, observe, explain, and evaluate (POEE) pedagogical design, an extended version of POE, has been utilised in this study to provide a series of inquiry phases for student learning in an asynchronous, self-directed, online environment. The details of the development of this pedagogical design have been reported elsewhere (Al Mamun, 2018; Al Mamun et al., 2020). Figure 3 shows the schematic representation of the POEE pedagogical framework.
Under the POEE pedagogical design, emerging technologies such as interactive multimedia have been employed to promote higher quality S-C interactivity in terms of elicitation, exploration, explanation, and clarification of the concepts. Such multimodal technology, including dynamic and interactive representations, may help students to understand more complex science concepts (Bernard et al., 2009) and support increased student performance (Mayer et al., 2001).
In this study, two learning modules that cover the introductory science topics of Phase change and Heat have been used to illustrate how the POEE framework can be used to guide instructional design for online inquiry-based environments. Several POEE activities have been employed in each of the learning modules and examples are shared (Al Mamun et al., 2020;Al Mamun, 2022).
Multiple media in the form of videos and animations that include audio narration, sound effects, and sometimes music were utilised to introduce dynamic representations of concepts linked to the text and embedded images. Interactive simulations were also a core learning object included in the modules; they provided only visual interactive experiences without embedded auditory media such as narration, sound effects or music. The interactive simulations selected for inclusion in the modules in this study were sourced from two popular websites that freely share science simulations, namely physics education technology (PhET) interactive simulations (PhET, n.d.) and Molecular Workbench science simulations (Molecular Workbench, n.d.). Both platforms provide students with interactive and flexible experiences of science concepts at the molecular level. Such forms of multimedia technology integration in online environments can facilitate proximity between learners, teachers and learning content and can influence student engagement (Dyer et al., 2018). In addition, Miles et al. (2018) argue that delivering educational materials in multiple forms can facilitate student engagement and support effective navigation and utilisation of the materials.
Study context and participants
This study aimed to explore S-C interactions in a self-directed online environment and employed a mixed methods research design. A group of 30 science students, enrolled in first-year chemistry at a large Australian university, were selected as the sample for this study.
[Fig. 3. The POEE pedagogical framework: the macro-scripted phases Predict (P), Observe (O), Explain (E) and Evaluate (E) correspond to the micro-scripted activities of eliciting students to interact, exploring the learning contents, explaining the understanding, and reflection and clarification.]
In general, sample sizes of 30 are considered adequate for a qualitative-data-dominant study and can achieve data saturation (Creswell, 2007). Small sample sizes in a qualitative study help researchers to obtain detailed, in-depth experiential accounts of the phenomenon under study (Ryan et al., 2009); in fact, researchers often do not consider the sample size in qualitative research (Onwuegbuzie & Leech, 2005). However, this study also quantified the qualitative data to conduct t-test and chi-square analyses, and a small sample size generally satisfies the assumptions of these tests (Kim & Park, 2019; Poncet et al., 2016). Owing to ease of access, this study employed a convenience sampling technique to secure this cohort. All enrolled students received an invitation to participate in the study via the LMS (Blackboard), and only those who responded positively to the invitation were chosen to participate. Students had to give informed consent in order to participate in this study. Two student groups were formed based on their self-reported prior learning experience with online simulations: experienced and non-experienced. An experienced learner, in this study, was conceptualised as a student who had prior experience with a science simulation in an online environment during previous science learning. Figure 4 summarises the details of the participants and study context.
The two learning modules were offered to students in parallel to their formal coursework; that is, these activities were not required for their courses. Students participated voluntarily in learning from the modules, and they were aware that their performance would not be assessed; no grades contributed to course marks upon completion of a module.
Data collection
Observations of the S-C interactions included video recording, direct observation, and stimulated recall interviews, and a variety of tools were used to collect the data. Students were required to participate in only one of the two available learning modules (either Phase Change or Heat), and participant IDs were formulated to indicate which module they had completed. For example, an ID that begins 'pxxx' indicates the Phase Change module and those beginning 'hxxx' indicate the completion of the Heat module. At the beginning of the module, a short orientation was provided to the students showing different components of the web-based learning module such as the simulation models, videos, and other important elements. Each student was then left to work independently on their own in a dedicated room. The student's on-screen computer activities were recorded through the Echo360 software. Additionally, the researcher made observation notes on a student's written responses and on-screen interactions from a remote location using Virtual Networking Computing (VNC). Once a student finished a module, a stimulated recall interview was conducted to record the student's immediate reflection on their experiences with the module (O'Brien, 1993). The recorded on-screen activities and the researcher's notes in combination provided the basis for conducting this post-module interview. These data collection techniques focussed on exploring the different constructs of behavioural engagement, namely persistence, systematic investigation, and task accomplishment.
Data analysis
This study used both an inductive and a theory-driven thematic analysis approach to formulate themes from the data (Boyatzis, 1998; Braun & Clarke, 2006). The constructs of behavioural engagement originated in the relevant theories (described above in Fig. 2), while various sources documented in the literature review provided the rationale for the construction of the themes. Thereafter, the students' behavioural efforts related to the identified themes were quantified and codified to measure the relative degree of influence those factors exerted on the interaction process. Persistence is defined in the literature as a student's continuous effort to overcome various challenges faced in the process of learning (Parker, 2003). Likewise, student persistence, in this study, refers to the student's prolonged exploration of the simulation task in pursuit of understanding the science concepts, even though the consequences of this exploration might not contribute to their anticipated learning. Thus, student persistence was measured in this study as the combination of students' time-on-task and their efforts to interact with the simulation activities. In contrast, systematic investigation denotes a strategic and organised investigation of a concept, contributing directly to achieving the anticipated learning. Finally, in combining the results of both persistence and systematic investigation, students' task accomplishment was assigned as either complete or incomplete. The codified data were then triangulated to explore how they impacted students' behavioural engagement.
For each activity, a threshold time was defined in order to determine time-on-task (Al Mamun et al., 2022). Two distinct metrics were combined to determine this threshold value. First, the first author of this study worked through each activity to determine how much time was needed to fully comprehend the intended concept from the interaction. Second, the researcher examined how long each participant spent on each activity and noted how long it typically took a student to understand the target concept during each encounter. The researcher's judgment was then merged with the observations of the students' engagement time to define the threshold of time-on-task for a particular activity. We also took into account the students' attempts to make use of the virtual tools built into the simulation model during the inquiry process (Al Mamun et al., 2022). According to Vytasek et al. (2020), tracking students' digital artifacts can be utilized to deduce their behavioural patterns and interaction process.
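As a concrete illustration only: the study does not specify the exact rule used to merge the researcher's benchmark time with the observed engagement times, so the simple averaging of the benchmark and the median observed time in the sketch below is an assumption, and all timings are hypothetical.

# A minimal sketch, assuming a simple average merging rule; the paper only
# states that the researcher's judgment was "merged" with observed times.
import statistics

def time_on_task_threshold(researcher_benchmark_min, observed_times_min):
    """Combine the researcher's benchmark time with the typical observed
    engagement time (median) to set an activity's time-on-task threshold."""
    typical_observed = statistics.median(observed_times_min)
    return (researcher_benchmark_min + typical_observed) / 2

# Hypothetical example: the researcher needed 4 minutes for the activity,
# and five students' observed engagement times (in minutes) are given below.
threshold = time_on_task_threshold(4.0, [2.5, 3.0, 5.5, 4.0, 6.0])
print(f"threshold = {threshold:.1f} min")  # engagement at or above this counts as sufficient time-on-task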
Systematic investigation, by contrast, refers to the organized study of the concepts, i.e., a student tries to comprehend a topic by thoroughly exploring it while taking into account the available stimuli from the simulation environment. Research shows that students are generally involved in the process of grasping a particular concept through this kind of investigation (Al Mamun et al., 2022). The details of the data analysis coding technique have been reported in other studies, in which the four key constructs of behavioural engagement mentioned in Fig. 2 have been conceptualised (Al Mamun, 2018; Al Mamun et al., 2022).
These studies, together with the current study, form parts of a larger research project. The two authors iteratively discussed and cross-checked the coding reliability.
After completing the thematic analysis and quantification of the themes, relevant statistical analyses were conducted to compare the data between the two groups of students. An independent samples t-test was conducted to consider whether any observed difference in mean engagement time between the experienced and inexperienced student groups was significant. Pearson's chi-square test of independence was conducted to gain further insight into any significant association between two categorical variables. A cross-tabulation of the data was formulated based on the observed values, with the expected values derived from the null hypothesis, i.e., that the distribution is independent of each categorical variable. Research suggests that the chi-square test can be conducted when the expected values of the contingency table cells are greater than 5 (Franke et al., 2012). For any significant association between the categories in a chi-square test on a contingency table larger than 2 × 2, Cramer's V is reported to indicate the strength of the association (Kline, 2013). A value of Cramer's V less than 0.26 is considered to indicate a weak association (McHugh, 2012). Also, for a contingency table larger than 2 × 2, the source of a statistically significant result can be unclear, so a post hoc test is required to reveal where the significant result lies in the contingency table cells (Sharpe, 2015). For this, adjusted residuals, a procedure recommended over other post hoc alternatives, were used (MacDonald & Gardner, 2000). MacDonald and Gardner (2000) also suggested a Bonferroni correction in this process to reduce the chance of committing a type 1 error. Therefore, this study used the Bonferroni correction to report the adjusted p-value for identifying any cell that is statistically significantly different from the expectation of the null hypothesis.
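For readers who wish to reproduce this style of analysis outside SPSS, the sketch below shows one possible Python implementation of a chi-square test of independence with Cramer's V, adjusted residuals, and a Bonferroni-corrected cell-wise significance check; the contingency counts are hypothetical placeholders, not the study's data.

# Illustrative sketch (not the authors' SPSS workflow): chi-square test of
# independence with Cramer's V and post hoc adjusted (standardised) residuals,
# Bonferroni-corrected across all cells.
import numpy as np
from scipy import stats

observed = np.array([[30, 12, 8],
                     [15, 25, 10]])  # hypothetical r x c contingency table

chi2, p, dof, expected = stats.chi2_contingency(observed)

# Cramer's V for tables larger than 2 x 2
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

# Adjusted residuals: (O - E) / sqrt(E * (1 - row_prop) * (1 - col_prop))
row_prop = observed.sum(axis=1, keepdims=True) / n
col_prop = observed.sum(axis=0, keepdims=True) / n
adj_resid = (observed - expected) / np.sqrt(expected * (1 - row_prop) * (1 - col_prop))

# Bonferroni correction across all cells to limit the type 1 error rate
n_cells = observed.size
cell_p = 2 * stats.norm.sf(np.abs(adj_resid))  # two-sided p-value per cell
significant_cells = cell_p < (0.05 / n_cells)

print(f"chi2 = {chi2:.3f}, p = {p:.4f}, Cramer's V = {cramers_v:.3f}")
print(significant_cells)  # True marks cells deviating significantly from the null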
Furthermore, when the number of observations was small and the expected frequency in any cell of the contingency table was less than 5, a more appropriate form of analysis, Fisher's Exact test, was utilised (Cochran, 1952). Research has shown that Fisher's Exact test is particularly useful for dealing with small numbers of observations (Bower, 2003). This study combined categories to form a 2 × 2 contingency table for Fisher's Exact test. For the 2 × 2 contingency table, the Phi value is reported to indicate the strength of the association between the categories (Franke et al., 2012). All statistical analyses were performed using the statistical package for the social sciences (SPSS) software, with the significance threshold for the p-value set at 0.05.
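Again as a sketch only, with hypothetical counts: a Fisher's Exact test on a collapsed 2 × 2 table (e.g., guided vs. unguided settings against high vs. low persistence) together with a Phi coefficient could be computed as follows.

# Illustrative sketch: Fisher's Exact test plus a Phi coefficient for a
# 2 x 2 contingency table. The counts below are hypothetical placeholders.
import numpy as np
from scipy import stats

table = np.array([[18, 4],
                  [6, 12]])  # hypothetical 2 x 2 contingency table

odds_ratio, p_value = stats.fisher_exact(table, alternative='two-sided')

# Phi coefficient as the strength of association for a 2 x 2 table
chi2, _, _, _ = stats.chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())

print(f"Fisher's Exact p = {p_value:.4f}, phi = {phi:.3f}")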
Engagement time with the learning tasks
It was estimated by the researchers that the typical time for a student to complete each module would be 50 min. Despite the absence of direct or personal guidance, student engagement time with the learning modules was found to be satisfactory: the average engagement time ranged from 44 to 52 min per learning module across both the experienced and inexperienced student groups. Table 1 displays the statistics of student engagement time obtained from the video records and indicates that the mean engagement time of the experienced group (M = 46.90, SD = 15.96) was lower than that of the inexperienced cohort (M = 50.50, SD = 21.64).
Nonetheless, the engagement times of the inexperienced group were more spread out than those of the experienced group. The inexperienced group also took longer initially to become familiar with the online environment. As found from the observation and video record data, inexperienced students generally engaged for an extended period (ranging between 2 and 5 min) at the start of the module in orienting themselves to the simulation environment. This prolonged initial familiarisation resulted in less engagement time attributable to exploring the target concepts. For example, one student initially exhibited difficulty with a simulation activity that was intended to provide an opportunity to learn basic ideas relating to the states of matter, i.e., the solid, liquid and gaseous phases of a substance (see Fig. 5). The student's interview explanation confirmed that they had faced initial difficulties in understanding the functions of the simulation parameters (e.g., the use of the container lid to change the pressure, and the pump to increase the volume of the substance). Another interview example reveals a different student's reasons for their initial difficulty.
It took me a bit of time to figure out how to work with the play (button) and then press the heat (button) up for a long time to get the temperature up. [p103] This observation suggests that inexperienced students had trouble initially navigating the simulations and therefore they took longer to engage with the activity than the experienced group.
An independent samples t-test suggested that there was no significant difference between the mean engagement times of the experienced and inexperienced student groups, t(28) = 0.486, p > 0.05. As shown in Table 2, both groups satisfied the condition of homogeneity of variances (F = 0.498, p = 0.486).
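A minimal sketch of this comparison, using hypothetical engagement times in minutes rather than the study's data, might look as follows: Levene's test first checks homogeneity of variances, and its outcome selects the appropriate form of the independent samples t-test.

# Illustrative sketch (hypothetical data): Levene's test for homogeneity of
# variances followed by an independent samples t-test comparing group means.
from scipy import stats

experienced = [46, 52, 38, 61, 44, 49, 35, 55, 40, 48]    # hypothetical minutes
inexperienced = [50, 58, 41, 72, 39, 63, 45, 52, 47, 60]  # hypothetical minutes

lev_F, lev_p = stats.levene(experienced, inexperienced)
equal_var = lev_p > 0.05  # variances treated as equal if Levene's p > .05

t_stat, p_value = stats.ttest_ind(experienced, inexperienced, equal_var=equal_var)
print(f"Levene F = {lev_F:.3f} (p = {lev_p:.3f}); t = {t_stat:.3f}, p = {p_value:.3f}")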
Student engagement time with the separate individual activities across the learning modules was explored further: a chi-square test of independence was conducted to ascertain whether there was any significant association between engagement time and the types of activities. A range of scaffolding strategies and activities was included in each module, described in depth elsewhere (Al Mamun et al., 2020; Al Mamun, 2022).
The chi-square test of independence, in Table 3, revealed a significant association between engagement time and the types of activities, chi-square (4, N = 150) = 27.551; p < 0.05. Post hoc analysis revealed that, among the types of activities, engagement time in open response, feedback and videos differed significantly from the expected count under the null hypothesis, indicating that videos and feedback attracted significantly higher engagement time while open response activities attracted less. It should be noted that the simulations were presented as the central activities in each of the learning modules, so it was hypothesised that they would attract longer engagement time, but the data suggest otherwise. During the interviews, students expressed why they had preferred the videos that were also included in the modules and had engaged for a longer time with the video mode compared to the simulations. The data suggest that the videos were perceived as easier to understand and did not require any physical interaction by the students, i.e., no active S-C interaction was required. Students appeared happy, and probably intrinsically motivated, to engage with the videos as they could act receptively during the activities. The interviews also revealed that students had spent time engaging with feedback because they were intrinsically motivated to know whether their answers were correct or incorrect.
I like feedback. I think it makes understanding clear. [p207] It was good to have that feedback and the little video afterwards. Now I know why I got it wrong, and I will not get it wrong again. [h101] If I did not get the feedback and if I did not know the answer, I would just carry on without really understanding the concept. But because it allows you to answer and then give feedback on it, yeah, I think that is really helpful. [p103]
The above comments support the effectiveness of the feedback mechanism as scaffolding to engage students more deeply in activities, an outcome similar to that noted in a previous study (Mount et al., 2009).
Student effort applied to the task in different instructional settings
Persistence and systematic investigation were examined to identify students' behavioural efforts during the S-C interaction process in three different instructional settings.
In Table 4, the chi-square result shows a statistically significant association between instructional settings and student persistence, chi-square (2, N = 68) = 15.579, p < 0.05. Post hoc analysis confirmed that students showed significantly high persistence in moderately guided activities and significantly low persistence in the minimal or open-ended instructional settings. Similarly, students showed a tendency to demonstrate more systematic investigation in guided activities compared to unguided activities. However, the chi-square test shows that the association between instructional settings and students' systematic investigation was not statistically significant, chi-square (2, N = 68) = 5.608, p > 0.05. So, students' systematic investigation was not directly influenced by the instructional guidance.
In brief, activities without instructional guidance were perceived to be less effective by students. The original intention of open and minimally guided activities was to support students' independent exploration and learning. It was found from observation of behaviour in this study that this strategy did not work well for students; this finding is further supported by the data from the student interviews shown below:
It is not clear about the objective of this simulation. There should be clear instructions for the activities in the simulation (activity). [h206]
There are some parts (in simulation), need to do some activities but there are not enough instructions for me. So, I am struggling there. [h204] The simulation was pretty hard to understand. Because I had to play around with the things myself. It will be better if somebody was voicing over or explaining it to me. [p205] Additional specific insights into why the open exploration of simulations might have hindered students are provided in the more extended example of a student's open exploration process below.
The simulation activity considered here was taken from the Heat topic module in which minimal guidance was strategically and deliberately offered. It represents the concept of thermal expansion at the molecular level (Fig. 6). The simulation has two important interactive tabs (functions) labelled 'Heat' and 'Cool' that enable the student to change the heat in the system. A student can initiate their independent exploration by clicking on either of these tabs.
One student [H103], during the interaction, was observed to continually attempt to increase the system heat by clicking on the 'Heat' tab, disregarding the 'Cool' tab which could have been used to reduce the system heat for comparison. In the interview, the student explained their behaviour: I just heated it all the way to see how to get it to overflow (with the system heat).
Because that was my intention. I did not think to cool down the system heat. [h103].
Students' exploration of the simulation model proved to be both beneficial and unproductive. For example, the above student sought to find out what might happen to an object when extreme heat was applied. Intuitively, freedom to explore a simulation seems appealing, and this autonomy in learning led them to have a new experience with the simulation model, perhaps supporting the construction of new knowledge about molecular behaviour. In contrast, such freedom in exploration might be interpreted as reaping unproductive results. In particular, overlooking the 'Cool' tab deprived the student of experiencing the molecular behaviour at a low temperature, and consequently probably left them with an incomplete understanding of the thermal expansion process; that is, it was observed that the student had missed the opportunity to experience the effect of a low temperature on the behaviour of molecules.
This study also found that, despite the known benefits of guided activities, some students preferred the open nature of the activity. There was evidence of a belief that the simulation and its affordances were enough to support their self-exploration. A student in this category clarified their view in the post-module interview:
I think simulation itself can guide. The whole idea is kind of like making your way through … and playing around with all the concepts. Manipulate all these things and answer the questions, do what you want... you can do most things you like, kind of get yourself involved and learn at a deep level sometimes. [p207]
The ability to 'do what you want' was captivating for this type of student who appeared keen to embark on self-exploration. This infers that the implicit guidance instigated from the learning environment coupled with the consequences of the exploration met their requirements adequately.
The influence of prior simulation experience
The dichotomy in experience with exploring a simulation such as the one described above was investigated further in terms of whether the association between instructional settings and student persistence was influenced by prior simulation experience.
Prior simulation experience was added as a control variable in the statistical analysis to ascertain its effect on students' level of persistence and systematic investigation in different instructional settings. Fisher's Exact test is appropriate here, as the expected frequency was lower than 5 counts in cells of the contingency table for the chi-square test. Therefore, a 2 × 2 contingency table was formed by combining moderate and strong guidance under the 'guided' category, with open/minimal guidance placed under the 'unguided' category. Table 5 indicates that the Fisher's Exact test for the experienced student group showed a statistically significant association between instructional settings and student persistence (Exact Sig. 2-sided = 0.000; p < 0.05), and between instructional settings and systematic investigation (Exact Sig. 2-sided = 0.023; p < 0.05). The strength of the associations, measured by the Phi value, showed strong association (0.589 and 0.389) for both persistence and systematic investigation for the experienced group. In contrast, for the inexperienced student group, the Fisher's Exact test showed that instructional settings were not significantly associated with persistence or systematic investigation. This result indicates that experienced students are more capable of utilising instructional guidance to engage meaningfully with the learning content in the self-directed environment. Overall, guided activities tended (as the % values indicate) to support higher student persistence and systematic investigation than activities that provided minimal support for the students.
Students' task completion rate
Based on the number of S-C interactions, the student task completion rate was found to be higher for videos (93.6%) compared to simulations (55.9%) and open response questions (51.3%), as illustrated in Table 6. Table 6 shows that the students exhibited reluctance to respond to open-ended questions, with a response rate of 51.3% (around half) for the inquiry questions asked in the learning modules. Interview data indicated that several students left their responses incomplete; this suggested that students struggled in interpreting and reformulating their thoughts and ideas into precise explanations and therefore left these answers unfinished. The findings in Table 6, also supported by the interview data, further confirm those observed in the "Study context and participants" and "Data collection" sections, where students generally revealed a positive attitude towards the video activities (completion rate 93.6%). Altogether, these data suggest that the video format attracted higher student engagement, albeit receptively. The simpler and less technically demanding videos required lower manipulative effort, which in turn enabled students to participate visually and, perhaps, supported their receptive understanding of the concepts (Al Mamun et al., 2020). As simulation activities are the central component of the learning modules, students' task completion rate in simulation activities was explored further across the three different instructional settings.
The chi-square test of independence in Table 7 reveals a statistically significant association between instructional settings and students' task accomplishment, chi-square (2, N = 68) = 11.274, p < 0.05. The post hoc analysis confirmed that it was the open-ended/minimally guided activities that caused the statistically significantly low task accomplishment rate. In contrast, the analysis clearly suggests that the guided activities provided support and motivation for students to complete the tasks. This finding supports the earlier finding, discussed in detail in the "Data collection" section, that students' degree of effort was lower in open-ended exploratory tasks. Further, students' prior simulation experience was added as a control variable and a Fisher's Exact test was conducted to understand how prior simulation experience impacted students' task accomplishment rate.
In Table 8, the Fisher's Exact test reports a statistically significant association (sig. 2-sided = 0.000; p < 0.05) between the instructional settings and a higher task completion rate for the experienced student group. This indicates again that experienced students can best utilise the instructional settings in a self-directed environment.
Discussion
Behavioural, cognitive, and emotional engagement are all important multidimensional constructs that are highly influenced by the learning context (Reeve et al., 2019; Schmidt et al., 2018). In this study, we have focussed on the behavioural engagement of students as they interacted autonomously with guided-IBL science modules that were designed through the application of a POEE instructional model. The instructional design (Al Mamun et al., 2020) and findings related to student cognitive and emotional engagement as a function of the design have been described elsewhere (Al Mamun, 2022; Al Mamun et al., 2022).
Several factors that affect student behavioural engagement, focusing upon S-C interactions, have been explored in this study in the context of an online learning environment. Another study reported elsewhere (Al Mamun et al., 2022) also explores behavioural components such as task completion and persistence in relation to student cognitive engagement and learning approaches. Based on the measures of different behavioural constructs reported in the literature (such as time on task and quality time spent) and the factors derived from the current study related to students and content, a relationship model is proposed (Fig. 7).
In this model, student behavioural engagement was conceptualised based on the relationship between different engagement measures and engagement factors linked to the S-C interaction process. Measures of different behavioural engagement were distilled from research literature while engagement factors were conceptualised from the data originating in the S-C interaction process. The underlying factors relating to both students and content are illustrated in Fig. 7.
Factors affecting students' engagement time and task completion rate
Previous studies report that in an online learning context, students may lack the motivation to engage with the content in the absence of teacher guidance (Fryer & Bovee, 2016). In this study, the students' total engagement time with the two online learning modules was found to be satisfactory. This is likely due to the underlying POEE instructional design supporting students to regulate their learning through a series of inquiry phases (Al Mamun et al., 2020). Also, as students worked independently in the absence of direct teacher support, a sense of autonomy during their interactions might facilitate intrinsic motivation (Deci & Ryan, 1987). Higher engagement time has been shown to improve student performance in a range of learning environments, including online learning environments (Baragash & Al-Samarraie, 2018), blended learning environments (Raspopovic et al., 2014), and traditional classroom settings (Gromada & Shewbridge, 2016). The mean engagement time of the more experienced student group was lower than that of the student group who had no prior simulation experience. This observation appears to contradict previously published results in which experienced students tend to engage longer in utilising the available technology resources and therefore were able to engage more meaningfully in the learning processes (Bates & Khasawneh, 2007). However, in the current study, interactive simulations were provided as dynamic, interactive representations of science concepts, and it was observed that inexperienced students initially invested more time in becoming familiar with the functions and orienting themselves in the online environment before cognitively engaging in activities. Experienced learners, in contrast, spent less time familiarizing themselves with the environment and were observed to spend a greater amount of time actively processing their understanding of the intended science concepts. It has been reported previously that when students utilise most of their cognitive capacity on something extraneous, they often fail to engage meaningfully with the intended learning concepts using their remaining cognitive capacity (Mayer, 2019).
This study contributes further evidence that students who are inexperienced with simulations demonstrate lower behavioural effort in persistence and task accomplishment, likely due to their inappropriate use of cognitive capacity in learning the functions of the representations. They also failed to effectively utilise the incentives of instructional guidance provided in the self-directed environment in other activities. It may therefore initially appear that the strategy of providing multimedia representations is flawed; however, recent evidence suggests that the provision of multiple representations can be successful in reducing extraneous cognitive load while supporting conceptual knowledge gains (Becker et al., 2020). Therefore, future modifications of the instructional guidance should aim to reduce the extraneous processing involved in familiarisation with the environment by increasing signalling and applying the contiguity principle (Mayer, 2017). One strategy that can be applied is the provision of a brief narrated 'tour' of highlighted interactive functions with modelling of how to notice changes using simulations; further research is required to evaluate this form of intervention. When considering individual forms of activity within the S-C interaction process, videos and feedback activities secured the highest time on task compared to the interactive simulations and open response activities. This aligns with a recent finding, set in an open online course undertaken by a large cohort, that a major proportion of the students (67%) focused almost exclusively on video lectures amongst all of the courses' components and activities (Kovanović et al., 2019). The findings in the current study similarly provided an explicit understanding that students were more engaged with video activities and self-reported that they did not need to engage in manipulative effort and active participation compared to the simulation activities and open responses, where greater effort was perceived to be required. Thus, when students are engaged in video activities as part of the learning process, it might increase student satisfaction (Bhadani et al., 2017) and reap improved learning performance (Shen, 2014).
The greater task completion that was observed when videos were the focus, in comparison to the simulations and open response activities, can be explained by the nature of the interactions that are required: videos typically engage students receptively rather than interactively. Previous studies support the notion that a key reason students are willing to dedicate their time to a task and persist to complete it is the level of motivation that is aroused (Dev, 1997). In the online context, the psychological motivation factors accord with learners' interests, motivation, and positive attitudes toward learning (J. Lee et al., 2019). According to Mayer's dual processing theory, watching videos can contribute to the reduction of cognitive load due to the simultaneous use of auditory and visual channels (Mayer, 2005, 2017). In contrast to a video as a mode of content interaction, the simulation models used in this study only engage visual channels to process the information. Research shows that attention can be increased, and cognition promoted, if auditory media are successfully employed (Hughes et al., 2019). Thus, the lack of narration, sounds or music in simulation models might hinder students from completing the task. In contrast, some studies also raise concerns about potential cognitive overload when a variety of media types are utilized in instruction. Limited capacity theory cautions that information processing channels have a limited capacity, and an overload of these channels can impair cognition (Chandler & Sweller, 1991; Mayer & Moreno, 2003). This would suggest that learning content employing a variety of media could lead to cognitive overload (Hughes et al., 2019). The simulation format already requires manipulative interactions and demands active engagement with the activity. This 'high element interactivity' can cause working memory overload (Kehrwald & Bentley, 2020), thus inducing students to become psychologically demotivated from completing the task (Lee et al., 2019). This form of intrinsic load is inherent in simulations because of their complexity, and research confirms that increased complexity creates increased intrinsic load (Sweller, 1999). Thus, this area of study requires ongoing investigation to understand whether the integration of auditory media will have a negative impact on student learning or promote student cognition.
The current study did not offer any extrinsic motivation in the form of summative marks or certification, hence the absence of external motivators might also contribute to the students' low task accomplishment rate when a higher cognitive load is involved. In combination with intrinsic motivation, the rewards anticipated from task completion may stimulate a desire in students to engage highly with the task. Research shows that extrinsic motivation alone, no matter how powerful, cannot ensure maximal learning (Payne, 2019). In fact, attempting to maximize learning outcomes directly through extrinsic rewards often leads to lower-quality motivation and performance (Ryan & Deci, 2000).
One strategy to reduce the extraneous cognitive load is to introduce explicit instructions that improve the value of the simulation, such as a narrated interactive video to orient students to the simulation functions (Mayer, 2017). This is supported by the temporal contiguity principle, which recommends presenting graphical movement and the background narration describing it simultaneously (Mayer, 2019). However, a balance needs to be achieved between the freedom to explore, which makes students cognitively active, and the guidance that is required to support cognitive activity and the meaningful construction of knowledge (Mayer, 2004). Mayer (2019), in his review of thirty years of research in online learning, favours guided activities and passive media, arguing that they can help keep students cognitively active during the learning process.
Further findings in this study reveal that students sustained their engagement for a longer period due to the provision of immediate feedback following their responses to concept questions. The feedback system employed helped students to link the discrete knowledge they had constructed of a concept towards a more comprehensive understanding. In fact, during the interviews most students were in favour of receiving immediate feedback while studying online. Studies show that when students are motivated, they spend quality time undertaking online learning tasks (Romero & Barberà, 2011). Feedback can therefore contribute significantly to motivating students, allowing them to ascertain whether their responses were right or wrong, adjust their understanding and continue. As a result, student engagement time was rated as high for the feedback activities.
In contrast, students were observed to engage less in activities that involved submitting an open-response explanation of a concept. This activity required students to cognitively process their understanding and translate it into words when entering a response. They needed to utilise their working memory in synchronising both the manipulative and cognitive processes involved in writing their responses. This might create high cognitive stress through the imposition of a higher cognitive workload, eventually leading to low engagement time with the open response activities and low task accomplishment. Apart from the demand for physical input, there were a few other factors that militated against students completing their answers, for example, shallow understanding of the concepts and their cognitive inability to respond to the questions correctly. Research shows that cognitive ability is an important element in the completion of a learning task (Sweller, 1988; Sweller et al., 1998). As the findings of this study revealed, students were presumed to know the associated science concepts but failed to respond with adequate explanations; as a result, they most frequently left the answer incomplete. Therefore, there is a need for module designers to tailor the open response activities by providing 'hints' to facilitate students' thinking in translating their ideas into scientifically correct explanations.
The role of affective factors in behavioural engagement in guided IBL online is attracting increasing attention. A recent quantitative study applying a predict, observe, explain inquiry-based model within an online learning environment (Hong et al., 2021) reports that student self-confidence increased, as did their critical thinking attitude. The affordances of a guided IBL approach appear to outweigh the limitations; the latter can be addressed to some extent by careful scaffolding and orientation in the learning environment. This emphasises the multidimensionality of engagement constructs, which requires further exploration.
Student persistence and systematic investigation in the guided activities
The other important factors affecting students' quality of time are persistence and systematic investigation. Students were more likely to demonstrate high persistence and systematic investigation in guided activities than in minimally guided or open-ended instructional settings. Previous studies support the notion that guided activities attract higher student engagement (Fisher, 2010; Mason, 2011). Significantly, a recent study in inquiry-based STEM education confirmed that the higher the provision of guidance in an online environment, the higher the commitment students demonstrated in engaging with an activity (Sergis et al., 2019).
This study, to some extent, found that open exploration often reaps some positive results in the long run, as illustrated in the example described in the "Data collection" section. In such a study space, an independent learner is intrinsically motivated to explore a simulation (Deci & Ryan, 1987). Students might find such open exploration appealing as it allows a satisfying experiential learning experience. When such an open environment is created, many students engage in productive exploration (Podolefsky et al., 2009).
Nonetheless, in the open exploration context, students were often observed to be unsuccessful in learning the underlying science concepts. In the example provided in the "Data collection" section, the student only raised the heat to observe a change; they could have lowered the temperature to zero to experience how molecules stop vibrating and completely freeze, an opportunity that is impossible to view in the real world. So herein resides a pedagogical conundrum. Open exploration can lead students to acquire new information and construct new knowledge, yet they may not achieve the intended learning if they miss an opportunity. In offering a degree of latitude, only partial success may be realized. In fact, most previous studies reveal that inquiry learning without guidance is less successful (Alfieri et al., 2011; Clark et al., 2012; Kirschner et al., 2006; Lazonder, 2014; Luo, 2015). Additionally, open exploration in a technology-rich environment can create a high cognitive load which can disadvantage the learner (Paas et al., 2003; Sweller, 1999). Moreover, students are often led to incorrect conclusions when they are left on their own to explore and use the technology resources (Podolefsky et al., 2009). Therefore, a guided scaffolded design is recommended to support students' effective learning in the IBL environment.
While set in a STEM discipline context, the findings of this study can be translated to a wider range of disciplinary contexts. Guided IBL online, informed by the POEE framework, involves a sequence of inquiry phases that can apply to any stimulus context in learning. For example, case studies offer authentic inquiry contexts and are popular in nursing and clinical education, social sciences, business, law, pre-service teacher education and languages. Instructors should tailor the level of guidance and scaffolding tools required to their learning contexts; for example, the role of scaffolding and the reduction of cognitive load has been addressed in inquiry-based mobile learning in the context of a 5th grade social science field trip (Shih et al., 2010).
The influence of prior experience on student persistence and systematic investigation
The findings of this qualitative study revealed that prior simulation experience significantly improved students' level of persistence and systematic investigation in guided instructional settings. Students' observed behaviours in demonstrating high persistence and systematic investigation support the idea that, in the guided environment, students who have prior experience can better utilise the educational resources compared to their non-experienced peers. Previous studies show that experienced students are more successful in their use of a technology-mediated IBL environment (Lee et al., 2010; Pallant & Tinker, 2004). Moos and Azevedo (2008) further added that experienced students can engage with exploration meaningfully through a more discriminating selection of new resources from the technology-mediated environment. Therefore, it is unsurprising that experienced students demonstrate higher self-efficacy in a technology-rich environment (Cheng & Tsai, 2011) and commit to spending more time with the learning content (Bates & Khasawneh, 2007).
In contrast, Meyer (2014) argued that inexperienced learners were prone to a lack of engagement due to insufficient skills in this environment. In the technology environment, inexperienced students' cognitive capacity becomes depleted because they have already used a significant portion of their working memory getting to know and exploring the rich contents prevalent in this environment (Kehrwald & Bentley, 2020).
Limitations and further research implications
The conceptual and empirical work cited above did not consider the multiple dimensions of student engagement; rather, it focused only on students' behavioural aspects. Studies show that there are situations in which a student can demonstrate high cognitive engagement yet be committed affectively and behaviourally at lower levels. Similarly, a student can find a task to be important for learning, yet not capitalise on this understanding during interaction because the activity itself might not be personally enjoyable and interesting (Schmidt et al., 2018). Also, a student can demonstrate strong behavioural engagement but invest less cognitive and affective effort, implying that the student completed the task but very likely did not learn much from the exercise. Studies also show that students with low cognitive engagement usually struggle to understand the concept and therefore adopt a surface-level approach, focusing on completing the task as a means to end the activity instead of striving to understand the concept at a deep level (Fredricks et al., 2004). Many scenarios are thus possible, but this study did not consider the multidimensional engagement context, which would allow for a coordinated result regarding student engagement and learning. A future study may investigate students' emotional attributes, either separately or in combination with other engagement dimensions, to examine how their interests affect the interaction process.
This study used the POEE design framework to encourage students to become independent learners in an inquiry-based, self-directed learning environment. As revealed, the absence of any guidance potentially resulted in less productive learning for some students. Nevertheless, strong guidance does not necessarily ensure the best learning experience either. A possible disadvantage of strongly guided support is that it might limit a student's autonomy in the learning process and reduce the chance of their becoming independent learners, a phenomenon which was explored in this study. This dilemma of balancing no guidance against over-guidance needs to be explored further, so future studies might experiment with various pedagogical designs to address this issue.
This study focused only on the elements of intrinsic motivation to examine students' engagement with learning tasks. Research shows that when extrinsic motivational components are appropriately combined with the learning process in parallel with intrinsic motivation, student engagement and learning achievement improve (Ryan & Deci, 2020). Thus, future studies can integrate extrinsic motivational factors to examine student engagement further during the S-C interaction process.
Previous studies show that sound effects (i.e., audio narration, music, etc.) can increase student attention to and cognition of the learning materials when they are effectively incorporated within the activity (Hughes et al., 2019). However, the science simulations used in this study lacked auditory media altogether, and thus the effect of auditory media during the S-C interaction process was not examined. We recommend that future studies incorporate auditory media into the science simulation models to examine its effect on the S-C interaction process.
Another potential direction for future studies resides in using the POEE design to employ a gradual fading of the degree of guidance in the learning activities, encouraging students to adopt more responsibility in the process of becoming independent learners. This design could provide novice learners with greater continuity in learning and lead them to develop coping mechanisms for interacting more productively with more complex online learning environments (Arbaugh, 2014). Research shows that when students experience similar activities repeatedly, they become more familiar with the technology resources and achieve a certain degree of control over the environment, therefore becoming more independent of instructional guidance (Li et al., 2019).
Finally, this study involves the application of design principles with a small sample of undergraduate participants in a single context; hence, the findings contribute to the collective body of research evidence that combines to inform practice rather than being claimed as generalisable outcomes. Mayer (2017) proposes a research agenda that supports the inclusion of studies that explore the application of design principles to advance understanding of student engagement, behaviour, and learning achievement using multimedia. Thus, the exploration of learning achievement and consideration of prior academic ability could form the basis of a larger-scale quantitative study applying the framework and modules to formal courses. This would help examine student learning achievement (high or low) while controlling for the effects of gender and ESCS (economic, social, and cultural status).
Conclusions
The underlying POEE scaffolding strategy implemented in the multimedia learning modules highlights the student-content interaction process within the paradigm of individual cognitive constructivism. As no teacher or peer support was included in this study context, the findings of this study revealed several salient factors implicated in understanding the student-content interaction process in the self-directed inquiry-based learning context. These factors are conceptualised to explain student behavioural engagement in this novel context that can support educators in creating learning environments conducive to supporting students in becoming independent learners. The relationship between the different measurable engagement criteria and student-content factors can further support educators in designing their instructional strategies applicable to an effective self-directed learning environment.
"year": 2023,
"sha1": "a454d03b871814ec77a8faa7785643f3e7025e23",
"oa_license": "CCBY",
"oa_url": "https://slejournal.springeropen.com/counter/pdf/10.1186/s40561-022-00221-x",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "abeb21a83b71903c0ccb8e7743d0e0f17221354b",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
How personalised medicine will transform healthcare by 2030: the ICPerMed vision
This commentary presents the vision of the International Consortium for Personalised Medicine (ICPerMed) on how personalised medicine (PM) will lead to the next generation of healthcare by 2030. This vision focuses on five perspectives: individual and public engagement, involvement of health professionals, implementation within healthcare systems, health-related data, and the development of sustainable economic models that allow improved therapy, diagnostic and preventive approaches as new healthcare concepts for the benefit of the public. We further identify four pillars representing transversal issues that are crucial for the successful implementation of PM in all perspectives. The implementation of PM will result in more efficient and equitable healthcare, access to modern healthcare methods, and improved control by individuals of their own health data, as well as economic development in the health sector.
Personalised medicine (PM) represents an exciting opportunity to improve the future of individualised healthcare for all citizens (here, "citizen" is equivalent to any individual in society, reflecting the inclusive and fair nature of PM approaches), holding much promise for disease treatment and prevention. There are high expectations for the future, but will PM and its accompanying tools and approaches change healthcare and be widely implemented for the benefit of society and its citizens by 2030? Will scientists, innovators, healthcare providers, and others be able to provide the most suitable medicine, at the right dose, for the right person, at the right time, at a reasonable cost? Will the healthcare sector be able to find the incentives and create appropriate financial models to implement PM in daily clinical practice? These questions require immediate attention and coordinated action to achieve the goal of comprehensive PM implementation by 2030.
The International Consortium for Personalised Medicine (ICPerMed) [1] believes that advancement of the biomedical, social, and economic sciences, together with technological development, is the driving force for PM. Strong investment in research and innovation is therefore a prerequisite for its successful implementation. Here, we present our vision of how PM will lead to the next generation of healthcare by 2030. Through five main perspectives, our vision affirms PM as a medical practice centred on the individual's characteristics, leading to improved effectiveness of diagnostics, treatment and prevention, added economic value, and equitable access for all citizens.
ICPerMed envisages healthcare within the five core perspectives, further delineated in our white paper [2], to be implemented by 2030 as follows:

Perspective 1: Informed, empowered, engaged, and responsible citizens
• Easily accessible, reliable, and understandable sources of medical information are available.

Perspective 2: Informed, empowered, engaged, and responsible health providers
• The safe, responsible, and optimal use of health information and research results required for PM is routine in clinical settings.
• Clinical decisions require multidisciplinary teams, integrating novel health-related professions.
• The education of healthcare professionals has adopted the interdisciplinary aspects of PM.
• Clinicians and researchers work closely to support the rapid development and implementation of PM solutions.

Perspective 3: Healthcare systems enable personally tailored, optimised health promotion and disease prevention, diagnosis, and treatment for the benefit of patients
• Equitable access to PM services for all citizens is a reality.
• PM services are optimised in terms of effectiveness and equity.
• The allocation of resources within healthcare systems is consistent with societal values.
• Secure health data flow from citizens and healthcare systems to regulatory authorities and research is in place.

Perspective 4: Available health-related information for optimised treatment, care, prevention, and research
• Personal data in electronic health records (EHRs) are used by healthcare providers and researchers for more efficient PM.
• Harmonised solutions to ensure data privacy, safety, and security are applied in health-data management.
• Optimised treatment and prevention based on personal data benefit citizens, while minimising costs and risks.

Perspective 5: Economic value by establishing the next generation of medicine
• A reasonable balance between investment, profit, and shared benefit for the citizen is a reality for PM.
• Appropriate business concepts and models are in place for PM.
• Telemedicine and mobile solutions promote PM and are of economic value.
• New jobs in healthcare systems are created.
The ICPerMed vision for 2030 is aligned with the goals of the United Nations 2030 Agenda for Sustainable Development, which sets out a vision for good health and well-being, promoting healthy lifestyles, preventive measures, and modern, efficient healthcare for everyone. To support these goals and sustain the five perspectives of the ICPerMed vision, four pillars representing transversal issues are crucial for the successful implementation of PM (Fig. 1):

• Data and technology.
By 2030, digital technology is a ubiquitous enabler of all aspects of society, including the health and well-being of citizens. Attitudes towards digital technology and data sharing have changed, driven by a new generation for whom digital technology and social networks are fully integrated in daily life. These citizens are more empowered to control their health data than those of previous generations, and thus more engaged in healthcare decisions and data sharing for research. Adequate regulatory frameworks and data management protocols for the protection of personal rights are compliant with international state-of-the-art standards addressing data security, accessibility, storage, and curation.
Comprehensive personal health data is, in 2030, available through EHRs. The widespread use of wearable devices and apps allows continuous and real-time tracking of health parameters and behaviours, which is complemented by biomarker technology. The global efforts to understand genomic variation in millions of individuals allow the definition of individual genomic risk profiles associated with common diseases, placing greater emphasis on prevention. Other levels of biological information, including epigenomics, proteomics, and metabolomics complement genomic-risk estimates and provide monitoring tools for individuals at risk for disease. Data generation is continuously evolving, requiring innovative and flexible information and communication technology (ICT) solutions to address the needs of PM models for data storage, management, access, safety, and sharing. Interoperability and harmonisation concepts are embedded in healthcare and research systems through more homogeneous data collection tools. Significant investments in artificial intelligence methods by 2030 lead to novel and efficient integration and interpretation of multilevel data from a wide range of sources. Finally, creative and trustworthy ICT solutions are available to support clinical decisions by healthcare providers at the point-of-care.
In the ICPerMed vision for 2030, strong synergies between healthcare and research are crucial for the application of PM. Large volumes of routine healthcare data provide a rich source of material for research, allowing patient stratification and the definition of individual profiles and supporting adapted clinical trials. A close alignment between healthcare providers, researchers, and patients, together with improved flexibility of healthcare systems, enables end user-driven biomedical and clinical research and supports the rapid assimilation of research results by the clinic. The healthcare systems of 2030 support research to strengthen the evidence base of novel PM strategies, effectiveness, and value.
Other parameters influencing health outcomes, including lifestyle and behaviour, socio-economic status, employment, and environmental exposure are integrated with personal health and biomarker data. Acknowledging the impact of policies from other sectors enables valuable inter-sectoral synergies, particularly for health promotion and disease prevention.
In 2030, synergies with the private sector are driven by the need for rapid technological progress, along with novel business opportunities and models. PM drives innovation, particularly in areas such as digital technology, biomarker detection, and the development of molecular-targeted drugs. Through close cooperation with the pharma industry, data from clinical trials is available to the medical community, improving patient access to innovative medicines. Health technology assessment clarifies the true value of technologies, incentivizing PM.
By 2030, the primary focus of healthcare has shifted from treatment to risk definition, patient stratification, and personalised health promotion and disease prevention strategies of particular value for ageing societies. Optimisation of healthcare systems until 2030 reflects this change. The economic sustainability and societal benefits of PM are clear and integrate a societal perspective. Economic analysis is at a systemic level, integrating unemployment, social-care systems, new risk-sharing methods, and the entire life cycle of PM approaches. This broader societal perspective is underpinned by shared ethical values and equity of access for all, including marginalised sectors and under-served populations. In 2030, adequate reimbursement models are in place to support this more equitable approach and consider the long-term value of innovative technology-based approaches.
Significant investments in technological infrastructure and digital platforms until 2030 maximize the enormous economic value of public ownership of data and create the need for new skills and novel professional profiles. Health professionals trained in digital technologies, biomarker examination, and data analysis are members of multidisciplinary teams that make shared clinical decisions. Healthcare systems use flexible working models to accommodate individual needs and incorporate the rapid turnover of technological and scientific innovations streaming from research, and bidirectional data accessibility is facilitated by networking and data-sharing platforms.
• Education and literacy.
Major changes in medical and other healthcare provider curricula (e.g. for pharmacists, nurses, and therapists) result in a new generation of informed, empowered, engaged, and responsible healthcare providers by 2030. There is a strong focus on digital literacy and the skills needed to interpret biomarker information. Multidisciplinarity in clinical and healthcare decisions is routinely applied in practice. Given the fast turnover of technologies and their potential impact on healthcare, lifelong education and training are essential for healthcare providers. Conversely, professionals with non-clinical backgrounds have a better understanding of healthcare and clinical issues, facilitating interactions amongst clinical teams.
For the citizen, health data education and literacy in PM, including ethical, regulatory, and data-control issues, is provided through schools and specific literacy programs. Improved PM literacy is complemented by interfaces capable of providing required rigorous information on demand while preserving the patient-clinician interaction.
In 2030, healthcare managers and policy makers have ample evidence of the benefits of PM to citizens and healthcare systems. This enables the establishment of political frameworks to tackle effectiveness, efficiency, equity, and ethical issues underlying the development and implementation of PM approaches.
Conclusions
PM is not so much a paradigm change as the evolution of medicine in a biotechnology- and data-rich era. This development requires extensive adjustments in the way healthcare is provided, including new skills for healthcare professionals and novel tools for delivery. The ICPerMed vision reflects such an evolution. It was developed in consultation with European and international experts covering key sectors, who provided feedback on the opportunities and challenges of PM and highlighted specific concerns and possible solutions [2].
ICPerMed supports coordinated research directed towards the progressive implementation of PM and has previously developed an Action Plan [3], defining research activities to stimulate the adoption of PM in healthcare. Leveraging the Action Plan, ICPerMed members have been successful in establishing PM research and healthcare programs and actions in their own countries and regions [4]. The European Commission already supports many initiatives consistent with the presented vision and, together with ICPerMed, is committed to expanding its efforts globally. The core perspectives of the ICPerMed vision and transversal issues presented herein can further orient policy makers and guide the healthcare community in their planning of future programs and activities for PM implementation. ICPerMed will continue to act as a communication platform for existing and future initiatives and organisations related to PM, paving the way towards this vision of PM in 2030.
Abbreviations PM: Personalised medicine; ICPerMed: International consortium for personalised medicine; EHR: Electronic health records; ICT: Information and communication technology.
"year": 2020,
"sha1": "b03f36498b0a75ba34e86bb20c9ba0e4f767434d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12967-020-02316-w",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "b03f36498b0a75ba34e86bb20c9ba0e4f767434d",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
} |
The Participation of Latin America and the Caribbean in International Supply Chains
As mentioned in the previous chapter, the concept of international supply chain is typically understood as a group of firms in different countries that work together—from the design to the distribution of a product—under the coordination of a lead firm that seeks to minimize total system costs. Unfortunately, very few existing trade or foreign direct investment databases provide enough information to verify that the cross-border transactions that economists observe conform to this definition. Therefore, short of working with case studies, empirical research in this area has relied primarily on proxies to measure value-chain participation.
Evidence from Intra-industry Trade Indexes
The first measure relies on intra-industry trade indexes (see Fukao, Ishido, & Ito, 2003; Jones, Kierzkowski, & Leonard, 2002; Kimura, 2006). This measure is based on the premise that global supply chains are associated with sequential production links in which countries may import intermediate goods, add value, and export them to another country. As such, production linkages involve trading related goods at different stages of production. In this way, intra-industry trade can be a proxy for these processes, provided that this trade is measured at sufficiently high levels of aggregation. For this reason, the measures of intra-industry trade constructed here are based on four-digit SITC data; in particular, we use the Grubel-Lloyd index. The use of intra-industry trade measures does not come without limitations, however, since they also capture horizontal trade in the same goods, which does not necessarily reflect participation in global supply chains. Nevertheless, it is reassuring that in our results, the countries that have experienced the largest increases in intra-industry trade between 1985 and 2010 are China, Indonesia, Malaysia, Mexico, the Philippines, and Thailand, all of which are highly integrated in global supply chains. Figure 2.1 (intra-industry trade indexes, regional averages; source: authors' calculations based on data from Comtrade) depicts the evolution of average intra-industry trade for countries in the Asia-Pacific region and for LAC (see appendix A "Trade in Value Added and Set of Countries" for the list of countries in each region). The figure shows how intra-industry trade boomed in the Asia-Pacific region in the period 1985-2010 while increasing relatively slowly in Latin America, particularly in manufactures. This is the case whether we use all goods or only manufactures. The overall levels are also very different, with an average measure of intra-industry trade in the Asian region twice that of Latin America. The result is in line with the general notion that the Asian countries are far more engaged in vertical specialization and cross-border production sharing than the countries in LAC.
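To make the index concrete, the following is a minimal sketch of the trade-weighted Grubel-Lloyd computation described above, GL = 1 - Σ|X_i - M_i| / Σ(X_i + M_i) over 4-digit product codes. The SITC codes and trade values are hypothetical placeholders, not actual Comtrade data:

```python
# Minimal sketch of the trade-weighted Grubel-Lloyd index.
# GL = 1 - sum_i |X_i - M_i| / sum_i (X_i + M_i), computed over 4-digit codes.
# The SITC codes and values below are hypothetical, not actual Comtrade data.

def grubel_lloyd(exports: dict, imports: dict) -> float:
    """Return the aggregate Grubel-Lloyd index (1 = pure intra-industry trade)."""
    codes = set(exports) | set(imports)
    total = sum(exports.get(c, 0.0) + imports.get(c, 0.0) for c in codes)
    imbalance = sum(abs(exports.get(c, 0.0) - imports.get(c, 0.0)) for c in codes)
    return 1.0 - imbalance / total if total > 0 else 0.0

# Hypothetical 4-digit SITC flows (USD millions) for one country-year
X = {"7810": 120.0, "7849": 80.0, "0611": 40.0}
M = {"7810": 100.0, "7849": 95.0, "0611": 5.0}
print(f"Grubel-Lloyd index: {grubel_lloyd(X, M):.3f}")
```

A value near 1 signals two-way trade within the same product lines, which is why high and rising values in Asia are read as evidence of cross-border production sharing.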
Evidence from Trade in Value Added
Another way to measure the participation of countries in global supply chains is to trace the value added of each source country in a globally integrated production network. Studies have applied this approach to specific goods, such as the iPod and iPhone (Dedrick, Kraemer, & Linden, 2008) and the Barbie doll (Tempest, 1996). The information in these case studies is very rich, showing which countries participate in the supply chain of a particular good and how much value they add to its production. The studies have revealed, for example, that even though China exports the iPod, and the trade statistics report the full value of this product, the country contributes only 3.8% of the value added, because many other countries also participate in the production. This case-by-case examination of specific international supply chains is very revealing, but the approach is so data-demanding that it would be impossible to examine every such supply chain in which a country participates. For this reason the technique is impractical for measuring the participation of countries in GVCs.
A new group of analyses takes a different, more practical approach to tracing the value added in a country's trade flows: combining input-output tables with bilateral trade statistics (e.g., De La Cruz, Koopman, & Wang, 2011; Hummels et al., 2001; Johnson & Noguera, 2012a, 2012b; Koopman, Wang, & Wei, 2008, 2014; Miroudot & Ragoussis, 2009). The literature has evolved rapidly and has produced an array of indicators that help quantify the extent to which countries participate in cross-border production sharing.

The advantage of using intra-industry trade indices is their simplicity: they only require data on international trade flows. Other approaches that only require trade data use the description of trade line classifications to pick up terms like "parts and components" as proxies for trade in intermediates. The main idea is to measure the percentage of trade in intermediates in total trade. These methods have been used, for instance, in Yeats (2001), Ng and Yeats (1999), and Fung, Garcia-Herrero, and Siu (2009). A related approach is to employ a United Nations classification that separates goods according to their use, called the Broad Economic Categories (BEC), http://unstats.un.org/unsd/tradekb/Knowledgebase/Intermediate-Goods-in-Trade-Statistics. This method has been employed, for instance, in Baldwin and Taglioni (2011). A shortcoming of these two methods is that they tend to rely on subjective criteria on what constitutes an intermediate good (see Hummels, Ishii, & Yi, 2001). We nevertheless compare Asia and Latin America in terms of the share of intermediate inputs in total trade as measured by Fung et al. (2009). The results are in line with the findings from the intra-industry trade indexes. For instance, in 1990, exports of parts and components as a share of total manufacturing exports averaged around 31% for Asia and 16% for Latin America. Two decades later, in 2010, this share had increased to 40% in Asia and declined slightly to 14% in Latin America.
In this literature, the insertion of countries in GVCs is measured with indicators that seek to capture the extent to which countries participate in a sequential chain of production activities that crosses many borders. The fi rst indicator, called import content of exports, introduced by Hummels et al. ( 2001 ), is based on the notion of vertical specialization. Vertical specialization refers to the use of imported inputs to produce goods that are later exported, a notion that precisely captures the idea of various countries linked sequentially to produce a fi nal good. More recently, the concept of foreign value added in exports is being used to measure vertical specialization by emphasizing value added from other countries embodied in a country's exports (Koopman et al., 2014 ). Foreign value added of exports is nowadays a common measure of the participation of countries in vertically fragmented production through upstream linkages. Figure 2.2 depicts the foreign value added of exports for various Latin American countries. The measure refl ects the share of foreign value added in each country's total exports. Appendix A "Trade in Value Added and Set of Countries" explains in detail the methodology and data used to develop this measure. 3 The fi gure also shows simple averages for two comparator groups: the Asian countries and the EU-27. 3 There are publicly available datasets in which similar measures of trade in value added have already been constructed for many countries in the world. These include the World Input-Output Table, funded by the European Commission and developed by the University of Groningen, and the "Trade in Value Added (TiVA) indicators," a joint OECD-WTO initiative. The coverage of Latin American countries in these databases, however, is very limited, making them unsuitable for this report. We can see that in general, the participation of Latin America in GVCs averages less than the participation of the comparator regions. The exports originating in Asia and in the EU use more intensively imported intermediate inputs than Latin America's exports. In particular, the exports of Asia and the EU use 12 and 15 % points more foreign value added, respectively, than the exports of Latin America; this suggests that the countries from these two regions are more involved in sequentially linked production processes than the countries in the LAC region. 4 At fi rst it might seem surprising that a small, low-income country such as Honduras exhibits a measure of foreign value added that is higher than that of Mexico, given the latter country's extensive production linkages with North American fi rms in motor vehicles, electronics, aeronautics, and other industries. Clearing up this apparent anomaly provides a good opportunity to further explain what Fig. 2.2 is measuring. A foreign value added of, say, 45 % indicates that this portion of the value of a country's exports comes from other nations. This value is independent of the number and/or type of industries participating in global value chains. In the case of Honduras, for example, more than a third of the total exports of the country are in textiles, predominantly T-shirts. Eighty percent of the value added in these exports are yarns, fi bers, and other inputs that originate in other countries, which include the US, Mexico, China, and South Korea. This explains the high value of foreign value added for Honduras.
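For intuition about how foreign value added is extracted from input-output tables, the sketch below works through a toy two-country, one-sector inter-country system using the standard Leontief-inverse decomposition; all coefficients are illustrative inventions, whereas the chapter's actual figures come from full multi-country tables (see appendix A):

```python
# Toy two-country, one-sector inter-country input-output system.
# All coefficients are illustrative; the report's actual numbers come from
# full multi-country tables (see appendix A).
import numpy as np

# A[i, j]: intermediate inputs sourced from country i per unit of gross
# output in country j. Column sums plus value-added shares equal 1.
A = np.array([[0.20, 0.15],
              [0.10, 0.25]])
v = np.array([0.70, 0.60])        # value added per unit of gross output
e = np.array([100.0, 80.0])       # gross exports of countries 0 and 1

B = np.linalg.inv(np.eye(2) - A)  # Leontief inverse, (I - A)^-1
VA = (v[:, None] * B) * e[None, :]  # VA[i, j]: country i's VA in j's exports

for j in range(2):
    fva = VA[:, j].sum() - VA[j, j]          # value added from abroad
    print(f"country {j}: foreign value added = {fva / e[j]:.1%} of exports")
```

Because value-added shares and input coefficients sum to one by construction, each country's export value decomposes exactly into domestic and foreign value added, which is the accounting identity behind Fig. 2.2.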
The example of Honduras clearly shows that global supply chains should not be associated exclusively with high-tech industries. Some countries participate in value chains of high technological content, while others, due to their comparative advantage, participate in value chains of low technological content. The issue of technological content becomes clearer when we separate the foreign value added embodied in countries' exports by the sectors generating such value added. The results, which are presented in Fig. 2.3 , were calculated on the basis of the OECD classifi cation of manufacturing sectors according to their technological content, 5 which is depicted by the two graphs on the top of the fi gure. We complete the picture by including foreign value added generated by the primary sector (bottom left fi gure) and from services (bottom right fi gure). Note that for each country, the sum of the numbers in the four fi gures equals the value in Fig. 2
.2 .
Through this analytical lens, Mexico has much higher foreign value added generated by high- and medium-technology sectors than does Honduras, while the reverse remains true for low- and medium-low-technology sectors. This further supports what we mentioned earlier: Honduras' exports, mainly of textiles and apparel, use mostly foreign inputs of low technological content, that is, fibers and yarns, with very few inputs from high-technology sectors, while the exports of Mexico largely depend on high- and medium-technology intermediate inputs. Figure 2.3 can also be used to compare the status of Latin America as a whole with that of our comparator regions. For instance, the average values for the EU and Asia are higher than for LAC in the manufacturing sectors and in services, but the reverse is true for the primary sector. In other words, Europe and the Asia-Pacific countries are more involved than Latin America in the co-production of goods that largely utilize manufacturing-sector inputs, as well as those from services; Latin America, on average, is more involved in the co-production of goods in which the main inputs come from the primary sector.
Returning to Fig. 2.2, another interesting finding is the high degree of heterogeneity that exists within Latin America, with Mexico and the countries in Central America showing the largest shares of foreign value added of exports and the countries in South America showing the smallest. This heterogeneity is in part related to differences in the patterns of specialization across the LAC region. The production of primary goods and related products tends to require fewer imported inputs than the production of many manufactures. As production processes in South American countries are typically biased toward primary products, the foreign value added of these countries' exports is particularly low.
Countries specializing in primary products are most likely to participate in the early stages of supply chains, providing inputs to other countries downstream rather than receiving inputs from abroad. To examine the extent to which the exports of a country are linked to vertically fragmented production downstream in the chain, we calculate what is known as indirect value added. This is a measure of the degree to which a country provides value added by exporting intermediate inputs that are later utilized in the exports of other countries. (Technically, indirect value added is measured as the country's value added embodied as intermediate inputs in third countries' gross exports, as a percentage of the country's gross exports; see Koopman, Wang, & Wei, 2010.) This measure, which is shown in Fig. 2.4, indicates the percentage of a country's exports used as inputs in the exports of third countries. Note now that the countries in South America tend to have higher values of this measure than the countries in Central America. Note also that the average for the Latin American region is higher than for the EU and Asia. This suggests that the LAC region, on average, participates more than the EU or Asia as a supplier of value added downstream in the chain. But this is only true for the value added generated from the primary sector, as shown in Fig. 2.5, which decomposes the measure by value added generating sectors. (Note that the sum of the four values for each individual country in Fig. 2.5 is equal to the value in Fig. 2.4.) This figure clearly shows that the average for Latin America is higher than the average for the EU and Asia in the primary sector (bottom left figure), while the reverse is true in the manufacturing sectors (top figures). In other words, on average, Latin American countries participate more than Europe and Asia in international value chains as suppliers of primary inputs, while Europe and Asia participate more than Latin America as suppliers of manufacturing inputs with high, medium, or low technological content.

One way to present a combined measure of value chain participation is to add the measure of foreign value added of exports and the measure of value added used in the exports of third countries (see Koopman et al., 2014). This reflects participation through linkages both upstream and downstream. The measure, calculated by value added generating sector, is shown in Fig. 2.6. The comparison of Latin America with the comparator regions clearly shows that our region in general participates less than the EU and Asia in the manufacturing (and service) segments of global value chains, while it tends to participate more in the segments associated with the primary sector. It is also possible to see once again how countries in Latin America differ in their participation. Costa Rica, Mexico, and Honduras, for example, participate more as recipients of foreign value added (blue segments tend to be longer than green segments), while Chile, Peru, and Bolivia participate more as providers of value added downstream in the chain than as recipients (green segments tend to be longer than blue segments). Therefore, beyond the general comparison of Latin America with Europe and Asia, Latin America emerges as a region with large heterogeneity in value chain participation.
On the one hand, we have countries, primarily Mexico and Central America, that process large amounts of foreign inputs that are incorporated in the export of goods close to their final production stages, so these countries tend to be positioned closer to the end of the supply chain. Meanwhile, the South American countries are more specialized in natural resources; they provide inputs to other countries' exports and thus are positioned more at the beginning of the supply chain.
We can construct a general measure of the position of a country in the chain by dividing the indirect value added measure by the foreign value added measure (see Koopman et al., 2014). The higher this value, the more upstream the country's position in the chain. Figure 2.7 shows, for example, that the value added from Peru used as inputs in third countries' exports is four times greater than the value added from other countries employed in Peruvian exports. Figure 2.7 also shows clearly the heterogeneity within the region that we mentioned before, with Mexico and Central America more at the end of supply chains and South America more at the beginning. Latin America as a whole is positioned more upstream in global supply chains than the comparator groups due to the average specialization of the region towards natural resource-intensive sectors. Summarizing the results, there is considerable heterogeneity within Latin America: Central American countries and Mexico participate more in downstream segments of global value chains, while South American countries are relatively more active in upstream segments, mainly due to their specialization in primary sectors. Even within the group of countries participating in downstream supply chain segments, some economies specialize in value chains of low technological content while others focus more on high-technology segments. In general, however, the various indicators confirm the general perception that Latin America tends to participate less than other regions in global value chains, particularly in value chain segments related to the manufacturing sector.
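Building on the toy input-output sketch above, the combined participation index and the position index just described reduce to two simple ratios; the numbers below are again hypothetical:

```python
# Participation adds upstream (foreign VA) and downstream (indirect VA)
# linkages; position divides them, following the Koopman et al. (2014)
# definitions cited in the text. The inputs below are made-up numbers.

def gvc_indices(foreign_va: float, indirect_va: float, gross_exports: float):
    participation = (foreign_va + indirect_va) / gross_exports
    position = indirect_va / foreign_va  # > 1 suggests an upstream position
    return participation, position

# A hypothetical primary-exporting country, in USD millions
part, pos = gvc_indices(foreign_va=12.0, indirect_va=48.0, gross_exports=100.0)
print(f"participation index: {part:.2f}, position index: {pos:.1f}")
```

A position index of 4, as in this made-up example, mirrors the Peru case described above: the country supplies four times as much value added to others' exports as it absorbs from abroad.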
Two obvious questions arise from these findings: Can countries in the region increase their participation in global value chains? And can they participate in segments of higher value added? Note that these questions do not necessarily imply that the countries should target industries of high technological content, such as electronics. Instead, the questions point to the potential, even for countries with comparative advantages within certain industries, to identify segments of high value that have not been exploited. For instance, Honduras has traditionally been linked to the low-technology global value chain in which the production of T-shirts is one of the main staples. Today, Honduras can use knowledge developed through the supply chains of exporting T-shirts to enter new export segments of the textile industry, such as parachutes. The same can be said for the primary sector. Abundance of natural endowments and specialization in primary goods does not preclude countries from adding value in natural resource-related supply chains. These are without doubt important issues for the Latin American region that we will address in later chapters of this report.
We can also use this methodology to examine the contribution of the different world regions to global value chain participation. The idea is to see how much participation in value chains occurs among countries of the same region and how much takes place with countries in other regions. For instance, do countries in Europe engage in international supply chains mostly with other European countries? Or are their production networks spread evenly across the globe? Figure 2.8 shows that participation in international production networks is more intense among countries of the same region than with other regions. The within-region participation in the EU, Asia-Pacific, and LAC is 51%, 47%, and 29%, respectively. In each case, the within-region participation is always the highest. This result suggests that global value chains do not cope well with vast distances, an issue that will recur in the rest of this report.

Evidence from FDI Data
An alternative way to examine the participation of countries in global supply chains is to look at data on FDI. True, many companies offshore part of their production processes through independent suppliers and not through FDI. Nevertheless, multinationals still play an important role in many global production networks, and looking at their locations gives us an additional opportunity to analyze the extent to which Latin American countries take part in cross-border production sharing.
We employ the Dun & Bradstreet (D&B) Worldbase dataset, which covers more than 200 countries and territories and has been used in academic studies for various purposes, for instance, the comparison of size and diversification patterns of foreign investment in North America (Caves, 1975), the development of microdata sets on enterprises (Lipsey, 1978), and the effect of bank credit availability on business creation (Black & Strahan, 2002). For each firm in this dataset there is information on an array of variables, including location (city/country), industry of production, and family tree (the firm's parent and other related parties). We follow Alfaro and Charlton (2009) in identifying whether the relationship between a parent company and its subsidiary is horizontal (the parent and the subsidiary produce the same good), vertical (the subsidiary produces an input for the parent), or complex (the relationship is both horizontal and vertical). The methodology compares the industry codes (at the four-digit SIC level) of both parents and affiliates to examine whether they produce the same good and/or whether the affiliate is a supplier to its parent. The latter is determined by using the industry codes in combination with an input-output table to identify whether the industry of the subsidiary corresponds to an upstream industry of the parent's output. One potential shortcoming of this approach could be uneven coverage of a worldwide company dataset, particularly in developing countries where information is harder to obtain. Appendix A "FDI Dataset", however, provides details about the extensive checks and quality controls used by D&B to gather information and presents a test that appears to validate the coverage of the data. Figure 2.9 shows the network of parents and their vertically linked subsidiaries around the world. The size of the circles in each country indicates the total number of parent companies located in that country that own vertically linked subsidiaries in other countries. The thickness and color intensity of the lines represent the number of bilateral vertical subsidiaries between each parent country and a corresponding host country. Several interesting insights emerge from this figure. First, most multinational parent companies are located in industrialized countries, and a very large number of their foreign affiliates are also located in the industrialized world. This is consistent with the general finding in the literature that most FDI is of the North-North type. This is also consistent with recent evidence indicating that what had been thought to be horizontal FDI flows among developed nations are actually vertical FDI flows (Alfaro & Charlton, 2009). Our evidence is also consistent with results from a US survey: data from Fortune 1,000 companies show that more than 60% of all the offshoring of these companies is conducted in industrialized economies (Sturgeon, Nielsen, Linden, Gereffi, & Brown, 2012). On a regional level, well-defined supply chain networks in Europe are led by Germany, those in Asia are led by Japan, and networks in North America are led by the US, which also has very strong links with the EU and Asia. With the exception of Mexico and possibly Brazil, LAC, like Africa, remains pretty much on the sidelines when it comes to participating in production networks led by multinationals.
Similar to Alfaro and Charlton (2009), we use the Bureau of Economic Analysis 1987 benchmark input-output table and employ alternative thresholds of the input-output total requirements coefficient. It has also been noted that supply chains have been prevalent among nearby high-wage countries, such as the US and Canada, or Germany and France. The trade in these supply chains is typically based on exploiting scale economies rather than on wage gaps. For instance, a firm in a developed country dominates the market of a particular input through continuous learning-by-doing and scale economies. This has been referred to as "horizontal specialization" (Baldwin, 2012).

Figure 2.9 only provides crude evidence on the location of vertical FDI and does not control for factors such as differences in the level of development. One could expect, for instance, that more developed countries would host more foreign subsidiaries than less developed countries. In controlling for differences in per capita income, Fig. 2.10 indeed shows that there is a clear positive relationship between the level of income of the country and the number of vertical subsidiaries that it hosts. However, most countries in Latin America fall below the trend line, indicating that the number of foreign subsidiaries is lower than what should be expected from their level of development. In other words, even after accounting for differences in income per capita, the participation of most countries in the region seems to be low.
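To illustrate the parent-affiliate classification logic described above, here is a hedged sketch. The SIC codes, the total requirements coefficients, and the 0.05 threshold are all hypothetical placeholders standing in for the BEA benchmark table and the alternative thresholds actually used:

```python
# Hedged sketch of the Alfaro-Charlton parent-affiliate classification.
# io_req[(input_sic, output_sic)] stands in for a total requirements
# coefficient from a benchmark input-output table; codes, coefficients,
# and the 0.05 threshold are hypothetical placeholders.

io_req = {("3312", "3711"): 0.08,   # e.g., steel into motor vehicles
          ("2821", "3711"): 0.03}   # e.g., plastics into motor vehicles

THRESHOLD = 0.05  # assumed cutoff on the total requirements coefficient

def classify(parent_sic: str, affiliate_sic: str) -> str:
    """Label a parent-affiliate pair as horizontal, vertical, complex, or unrelated."""
    horizontal = parent_sic == affiliate_sic
    vertical = io_req.get((affiliate_sic, parent_sic), 0.0) >= THRESHOLD
    if horizontal and vertical:
        return "complex"
    if vertical:
        return "vertical"
    if horizontal:
        return "horizontal"
    return "unrelated"

print(classify("3711", "3312"))  # steel-making affiliate of a carmaker -> vertical
print(classify("3711", "3711"))  # same 4-digit industry -> horizontal
```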
Evidence from Trade in Services
International trade in services is a growing trend in global commerce. In particular, the offshoring of business functions such as accounting or IT services is part of the same phenomenon of international fragmentation in which firms decide to locate part of their production of components and/or services in different countries. (Note that the offshoring of services does not involve all trade in services; some trade in services might not be related to the fragmentation of production.) We compare Latin America with other regions through an analysis of two service categories that are intrinsically related to global supply chains: "computer and information services" and "miscellaneous business, professional, and technical services." These categories are part of the Extended Balance of Payments Classification, which is commonly used in the service trade databases of the UN, OECD, and IMF. The second category includes services related to business process outsourcing and knowledge process outsourcing, namely legal services; accounting, auditing, bookkeeping, and tax-consulting services; business and management consultancy and public relations services; advertising, market research, and public opinion polling; research and development; architectural, engineering, and other technical services; and other business services. The data are taken from the UN's Service Trade Database. Figure 2.11 shows the positive relationships between exports of these services and the countries' GDP per capita: more developed countries tend to export more of these services. Also clear from the figure is that most countries in the region underperform the respective trend lines, suggesting that Latin American countries tend to export less of these services than would be expected given their level of economic development. In the next chapter, we present a model that indicates the potential factors behind this subpar performance.
Recapitulating
Most of the indicators we used to examine the participation of LAC in global value chains present a similar picture: LAC's participation generally tends to be low relative to other regions. However, there is also significant heterogeneity within the region. For instance, Mexico and countries in Central America are more engaged in production networks, particularly with North America, and tend to participate in the final stages of production networks. For their part, countries in South America typically enter supply chains in the early stages. A set of clear factors explains at least some of these differences. For instance, proximity to the US makes Mexico an ideal recipient of offshoring activities. Likewise, the sheer abundance of natural resources in South America biases countries to participate in more upstream stages of supply chains. Proximity, endowments of natural resources, and the relative abundance of different classes of labor are obvious drivers behind the levels and types of participation in supply chains. But they are not the only drivers. The next chapter uses a more rigorous analysis to identify a more complete spectrum of factors behind the region's relatively subpar participation in international supply chains.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
"year": 2014,
"sha1": "5b3251ebe1bdadf30abcbb15fbd4bcda265440c2",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-319-09991-0_2.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "d68f06dba7ea7634f24114f0fa438004a25d5a40",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Investigating the Effects of Nitric Acid Treatments on the Properties of Recycled Carbon Fiber
In this study, the change in the chemical state of recycled carbon fiber (rCF) surfaces and the formation mechanism of oxygen functional groups under nitric acid treatment at various times and temperatures were investigated to upcycle carbon fiber recovered from used carbon composites. When treated with nitric acid at 25 °C, the carbon fiber retained the same tensile properties as untreated carbon fiber (CF) for up to 5 h, and the oxygen functional groups C–O (hydroxyl group) and C=O (carbonyl group) and the polar surface energy increased slightly compared to the untreated CF up to 5 h. On the other hand, at 100 °C, the tensile properties slightly decreased compared to untreated CF up to 5 h; the amounts of C–O and C=O decreased and the amount of O=C–O (lactone group) started to increase until 1 h. After 1 h, the amounts of C–O and C=O decreased significantly, and the amount of O=C–O increased rapidly. At 5 h, the amount of oxygen functional groups had increased by 92%, and the polar surface energy had increased by 200%, compared to desized CF. The interfacial bonding force was determined to increase the most at 100 °C and 5 h, because the oxygen functional group O=C–O increased the most under these conditions.
Introduction
Carbon fiber (CF) is a lightweight material that has high specific strength, stiffness, thermal conductivity, and electrical conductivity compared to other materials, as well as high corrosion resistance and chemical resistance [1-5]. However, since CF is an expensive material, it is applied only to expensive components such as those associated with aviation, space, wind power, sports cars, and sporting goods, and environmental problems, such as the disposal of carbon composites in landfills after use, remain a challenge that must be solved [6]. To expand application to all industries, including automobiles, in the future, it is necessary to lower CF prices, regulate parts recycling, and develop upcycling technology for recovering and recycling waste carbon composites [7-11]. Currently, some research on how to recover recycled CFs from waste carbon composites is being conducted, but the recovered CFs deteriorate by about 20% to 30% compared to virgin CFs, so their reuse is limited [9,12]. In particular, the recovered CF has a deteriorated surface, and the interfacial bonding strength between the CF and the resin decreases, so a surface treatment to improve the interfacial bonding strength is necessary to reuse the recovered CF [13,14].
Surface treatment changes the physical/chemical properties of CF depending on the type of treatment solution/gas, the treatment temperature, and the treatment time.

Materials and Methods

The CF used in the experiment was desized and then surface treated with nitric acid. Nitric acid treatment was performed at temperatures of 25 °C to 100 °C and for times of 0.5 to 5 h. Surface changes of the CF treated with nitric acid were observed at an acceleration voltage of 20 kV using FE-SEM. To evaluate the mechanical properties of the CF, a single fiber tensile test was conducted at a tensile speed of 5 mm/min according to the ASTM D3822 standard, and the average values from 25 tests for each condition were used. X-ray photoelectron spectroscopy (XPS) with the Nexsa XPS system (Jeonjusi, Republic of Korea, Korea Basic Science Institute) was used to analyze the change in chemical functional groups on the surface of the CFs according to the acid treatment conditions. The specimens were irradiated with monochromatic Al Kα radiation (1486.6 eV), and high-resolution spectra were obtained with a 400 µm beam and a pass energy of 50 eV. Dynamic contact angle measurement is required to analyze changes in surface energy. In this study, hydrophilic water and hydrophobic diiodomethane were used to measure the dynamic contact angle, and the dynamic contact angle and surface free energy were calculated from the values measured according to the Wilhelmy plate method for each condition. The values were calculated after a second evaluation.
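The paper does not state which surface-energy model was applied to the two-liquid contact angle data, so as a hedged illustration, the sketch below uses the widely used Owens-Wendt method to split the fiber surface energy into dispersive and polar components from water and diiodomethane contact angles; the probe-liquid constants are standard literature values, and the contact angles are made-up placeholders:

```python
# Owens-Wendt two-liquid decomposition of surface energy (assumed model;
# the paper does not specify one). Probe-liquid constants are standard
# literature values in mJ/m^2; the contact angles are hypothetical.
import numpy as np

LIQUIDS = {  # name: (total, dispersive, polar) surface tension
    "water": (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}

def owens_wendt(theta_deg: dict) -> tuple:
    """Solve gL(1 + cos t) = 2*sqrt(gS_d*gL_d) + 2*sqrt(gS_p*gL_p) for gS_d, gS_p."""
    rows, rhs = [], []
    for name, (g, gd, gp) in LIQUIDS.items():
        t = np.radians(theta_deg[name])
        rows.append([2.0 * np.sqrt(gd), 2.0 * np.sqrt(gp)])
        rhs.append(g * (1.0 + np.cos(t)))
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x[0] ** 2, x[1] ** 2  # unknowns are sqrt(gS_d) and sqrt(gS_p)

gd, gp = owens_wendt({"water": 75.0, "diiodomethane": 40.0})
print(f"dispersive: {gd:.1f}, polar: {gp:.1f}, total: {gd + gp:.1f} mJ/m^2")
```

In this framework, a larger polar component is what the paper's reported 200% increase in polar surface energy would reflect, since oxygen functional groups raise the polar term rather than the dispersive one.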
Surface Topography of Carbon Fibers
Figure 1 shows the results of comparing and observing the surface of the CF after nitric acid treatments of various durations and temperatures. When treated with nitric acid at 25 °C to 100 °C for 1 to 5 h, no changes in the surface of the CF could be observed; even at higher magnification, the SEM photographs of the carbon fibers showed no change. According to another study, surface treatment with sulfuric acid/nitric acid at 60 °C removed sizing within 15 min, producing vertical stripes without damaging the surface; after 15 min, the surface became grooved and damaged [20]. It has been reported that there was almost no surface reaction after heat treatment at 600 °C to 700 °C in an H2/Ar atmosphere [27] (an inert gas) and that defects appear on the surface of a CF at 600 °C or higher in cases of heat treatment in a nitrogen atmosphere [28]. In addition, in the case of plasma treatment in an oxygen atmosphere, the surface of the CF was seriously damaged after 5 min, the diameter of the fiber was rapidly reduced, and some of it was lost at 7 min, resulting in loss of the function of the CF [38]. The degree of surface damage is believed to depend on the extent of exposure to the treatment liquid or gas, which increases the number and size of the pores formed by chemical reactions on the CF surface. In addition, even when treated with nitric acid at 100 °C for 5 h, the most severe condition of this study, no change in the CF surface was observed.
Tensile Properties of Carbon Fibers
The tensile properties were evaluated as a function of nitric acid temperature and treatment time, and the results are shown in Figure 2. The tensile strength, modulus, and elongation showed similar trends after nitric acid treatment. Specifically, the tensile properties after treatment at 25 °C for up to 5 h and at 100 °C for up to 3 h were almost identical to those of untreated CF, but decreased slightly after more than 3 h at 100 °C. Changes in tensile strength at temperatures between 25 °C and 100 °C were similar, within the margin of error, to those at 25 °C. Ibarra et al. showed that for surfaces treated with nitric acid at 80 °C, tensile strength did not differ significantly from untreated CF for up to 7 h but fell to 80% of the untreated value at 12 h [19]. Rong et al. reported that heat treatment in an oxygen atmosphere at 420 °C produced pitting on the fiber surface and increased the surface area without a change in tensile strength for up to 1 h; after 2 h, the tensile strength decreased, the pits agglomerated, and the surface area gradually decreased [29]. Lee et al. showed that plasma treatment in an oxygen atmosphere left the CF almost unchanged for up to 1 min but caused a rapid decrease to approximately 52% of the untreated strength at 5 min [38]. In general, when CF is surface treated, increasing temperature, treatment time, and treatment energy intensify surface erosion and the vigorous reaction between the carbon inside the CF and oxygen, degrading the carbon fiber and reducing its mechanical strength. During surface treatment with nitric acid, temperature appears to have a greater effect on the CF than treatment time.
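As a rough illustration of how the averages reported above can be reduced from raw single-fiber data, the sketch below aggregates 25 hypothetical tensile results per condition (as prescribed by ASTM D3822) and reports strength retention relative to the untreated fiber; the numerical values are invented placeholders, not the data behind Figure 2.

```python
import numpy as np

# Hypothetical single-fiber tensile strengths (GPa), 25 specimens per
# condition as prescribed by ASTM D3822; NOT the actual data of Figure 2.
rng = np.random.default_rng(0)
conditions = {
    "untreated":    rng.normal(4.90, 0.45, 25),
    "HNO3 25C 5h":  rng.normal(4.85, 0.45, 25),
    "HNO3 100C 1h": rng.normal(4.80, 0.50, 25),
    "HNO3 100C 5h": rng.normal(4.40, 0.50, 25),
}

ref = conditions["untreated"].mean()
for name, s in conditions.items():
    print(f"{name:12s} mean = {s.mean():.2f} GPa, "
          f"std = {s.std(ddof=1):.2f}, retention = {100*s.mean()/ref:.0f}%")
```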
Surface Composition of Carbon Fibers
Figure 3 shows the XPS spectra used to analyze the chemical changes on the CF surface according to the nitric acid treatment conditions; the composition changes and O/C ratios are summarized in Table 2. For surface treatment at 25 °C, as the treatment time increased from 1 to 5 h, the amount of carbon decreased, while the amounts of oxygen, nitrogen, and silicon increased significantly compared to the desized CF. By contrast, for treatment at 100 °C, as the time increased to 5 h, the amount of carbon decreased, the amounts of oxygen and nitrogen increased, and the amount of silicon decreased significantly compared to the desized CF. The increase in oxygen functional groups at intermediate temperatures between 25 °C and 100 °C was smaller than that at 100 °C and larger than that at 25 °C.
For the O/C ratio, an indicator of interfacial shear strength, increasing the temperature and time from 25 °C to 100 °C and from 1 to 5 h raised the ratio to 0.33 at 100 °C and 5 h, compared to untreated CF [17]. Observation of the C1s spectra revealed that desizing greatly reduced the C-O (hydroxyl) and C=O (carbonyl) groups that were abundant on the CF surface. After treatment in nitric acid at 25 °C for up to 1 h, the amounts of C-O and C=O decreased slightly compared to the desized CF; as the time increased from 1 to 5 h, C-O and C=O increased slightly relative to 1 h, and O=C-O (lactone) groups were newly created (Figure 3). In addition, as the time increased from 1 to 5 h, bonding between the carbon fiber surface and the nitrogen in the nitric acid expanded rapidly, significantly increasing the amount of C-N compared to the desized CF; the amount of silicon also increased as the interior of the CF became exposed. However, as treatment at 100 °C progressed from 1 to 5 h, the amounts of C-O and C=O decreased slightly compared to the desized CF, while O=C-O formed and expanded rapidly. The amount of C-N increased for up to 1 h at 100 °C through the reaction between surface carbon and the nitrogen of the nitric acid, but after 1 h it decreased slightly as the oxygen on the CF combined with the nitrogen of the nitric acid and was removed. Furthermore, the silicon inside the CF was removed through a vigorous surface reaction as the time increased from 1 to 5 h, so the amount of silicon fell below that of the desized CF. These trends can be explained as follows. During treatment at 25 °C, the carbon exposed by desizing combines with the oxygen of the nitric acid and is removed as CO2, slightly reducing the carbon content and exposing the silicon inside the CF, which raises the measured silicon amount; C-O, C=O, and C-N increase, and O=C-O is produced, because oxygen and nitrogen from the nitric acid bind to the CF surface. At 100 °C, by contrast, strong bonds form between the surface carbon and the oxygen of the nitric acid, and oxygen is introduced into the C-O and C=O bonds, resulting in the formation and rapid increase of O=C-O bonds. It is believed that the silicon content dropped rapidly because carbon fiber material was removed by the strong energy at the CF surface.
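A minimal sketch of how the O/C ratio follows from the XPS survey compositions summarized in Table 2. The atomic percentages below are illustrative placeholders; only the final O/C value of 0.33 at 100 °C/5 h comes from the text.

```python
# Hypothetical atomic concentrations (at.%) from XPS survey scans.
# Only the O/C = 0.33 endpoint at 100 C / 5 h is quoted in the text;
# the remaining numbers are placeholders standing in for Table 2.
samples = {
    "desized CF":   {"C": 85.0, "O": 10.0, "N": 2.0, "Si": 3.0},
    "HNO3 25C 5h":  {"C": 80.0, "O": 13.5, "N": 2.8, "Si": 3.7},
    "HNO3 100C 5h": {"C": 71.0, "O": 23.4, "N": 4.0, "Si": 1.6},
}

for name, at in samples.items():
    print(f"{name:13s} O/C = {at['O'] / at['C']:.2f}")
# 23.4 / 71.0 = 0.33, matching the ratio reported for 100 C / 5 h
```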
To examine the changes in the peaks according to the functional groups formed on the CF surface, the C1s spectra were deconvoluted, as shown in Figure 4 and Table 3. Consistent with the results above, when the treatment time in nitric acid at 25 °C increased from 1 to 5 h, C-O, C=O, C-N, and O=C-O increased slightly compared to 1 h. At 100 °C, on the other hand, C-O and C=O decreased sharply compared to untreated CF, and O=C-O increased significantly. After surface treatment with nitric acid, the reaction between the CF and the acid intensified as the temperature increased from 25 °C to 100 °C and the treatment time from 1 to 5 h, increasing the O=C-O content, which, among the oxygen functional groups, most strongly affects the interfacial shear strength with the resin. In a previous study, immersion in a 3:1 sulfuric acid/nitric acid mixture at 60 °C for 15 min increased oxygen 16-fold compared to untreated CF; nitrogen appeared and increased 1.3-fold, and Si by 1.0-fold. The increase in oxygen was most likely due to the growth of carboxyl groups after surface treatment, and the nitrogen and sulfur species were reported to be present in oxidized form [20]. It has also been shown that during heat treatment in an oxygen atmosphere, oxygen from the atmosphere combines with carbon on the CF surface, breaking N-H bonds and increasing the amounts of N-O, C-O, and C=O [23]. After heat treatment in an H2/Ar atmosphere, the surface change caused by hydrogen was slight, and there was almost no chemical change [27]. For plasma treatment in an argon atmosphere, no new oxygen functional groups were generated on the CF surface; because an inert gas was used, the treatment had only a desizing effect [39]. In this study, as the temperature and time of the nitric acid treatment increased, the oxygen functional groups introduced on the CF surface and the surface defects both increased, which is expected to raise the interfacial shear strength.
Surface Energy Analysis
To examine the changes in the surface free energy of the CFs after nitric acid treatment, the dynamic contact angle was measured using a hydrophilic and a hydrophobic wetting liquid, and the measured values were substituted into the following geometric-mean relation (reconstructed here in its standard Owens-Wendt form, which is consistent with the slope/intercept analysis described below) to obtain the polar and nonpolar components of the surface free energy:

γ_L(1 + cos θ) = 2(γ_S^D γ_L^D)^(1/2) + 2(γ_S^P γ_L^P)^(1/2), (1)

where γ_L is the total surface energy of the wetting liquid, γ_L^D is the nonpolar surface energy of the wetting liquid, γ_L^P is the polar surface energy of the wetting liquid, γ_S is the total surface energy of the specimen, γ_S^D is the nonpolar surface energy of the specimen, γ_S^P is the polar surface energy of the specimen, and θ is the contact angle. The polar surface energy of the specimen (i.e., the slope) and the nonpolar surface energy (i.e., the Y-intercept) were obtained from the two coordinates calculated by substituting the advancing angle, the angle measured as the specimen enters the hydrophilic or hydrophobic wetting liquid, into Equation (1). The change in the dynamic contact angle of the CF with nitric acid temperature and treatment time is shown in Figure 5. At 25 °C, the contact angle decreased relative to untreated CF as the treatment time increased from 30 min to 5 h. At 100 °C, the contact angle decreased relative to untreated CF from 30 min to 1 h, but beyond 1 h it gradually increased again. Figure 6 summarizes the surface free energy and polar free energy derived from the dynamic contact angle results as a function of the temperature and time of the nitric acid treatment. At 25 °C, the surface free energy and polar free energy gradually increased as the time increased from 1 to 5 h. At 100 °C, the surface energy and polar free energy increased from 0.5 h up to 1 h, and then decreased after 1 h. It is judged that the contact angle decreases and the polar surface energy increases significantly because oxygen functional groups are actively introduced at 100 °C for 1 h; after 1 h, the CF surface erodes, increasing the contact angle and reducing the polar surface energy. The change in surface energy at intermediate temperatures between 25 °C and 100 °C was larger than that at 25 °C and smaller than that at 100 °C, and the corresponding polar surface energy was greater than the maximum polar surface energy at 25 °C but smaller than the maximum at 100 °C.
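For concreteness, here is a minimal sketch of the two-liquid calculation implied by Equation (1), using literature surface-tension components for water and diiodomethane and invented contact angles; none of the numbers below are measurements from this study.

```python
import numpy as np

# Literature surface-tension components of the wetting liquids (mJ/m^2);
# diiodomethane is treated as purely dispersive.
water = {"g": 72.8, "gD": 21.8, "gP": 51.0}
diiodo = {"g": 50.8, "gD": 50.8, "gP": 0.0}

# Hypothetical advancing contact angles (degrees); placeholders,
# not the values measured in Figure 5.
theta_w, theta_d = np.radians(60.0), np.radians(35.0)

# Owens-Wendt: gL(1 + cos t) = 2*sqrt(gSD*gLD) + 2*sqrt(gSP*gLP).
# The diiodomethane equation (gLP = 0) yields the dispersive part directly.
gSD = (diiodo["g"] * (1 + np.cos(theta_d)))**2 / (4 * diiodo["gD"])
# The water equation then yields the polar part.
rhs = water["g"] * (1 + np.cos(theta_w)) - 2 * np.sqrt(gSD * water["gD"])
gSP = (rhs / 2)**2 / water["gP"]

print(f"dispersive = {gSD:.1f}, polar = {gSP:.1f}, total = {gSD+gSP:.1f} mJ/m^2")
```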
Looking at the polar/total surface free energy ratio in Figure 7, the effect of nitric acid treatment time was small, but the effect of temperature was large. At 25 °C, the ratio increased slightly as the time increased from 30 min to 5 h; at 100 °C, it increased from 30 min to 1 h but decreased after 1 h. From this, it was determined that the oxygen functional groups increased greatly at 100 °C for 1 h in nitric acid, decreasing the contact angle and significantly increasing the polar surface energy.
According to one report, when the surface is treated with nitric acid at 80 °C for 4 h, the oxygen functional groups increase 17-fold, the contact angle of the CF decreases, and the surface energy increases, thereby strengthening the interfacial bonding between the CF and the resin [14]. For heat treatment at 300 °C in a nitrogen atmosphere, the polar free energy has been reported to decrease owing to the loss of oxygen functional groups caused by curing of the sizing [26]. In addition, after electrochemical oxidation treatment, the polar free energy was confirmed to increase because etching of the CF surface increased the surface area and the amounts of C-O, C=O, and O=C-O [3]. These results indicate that the interfacial shear strength between the CF and the resin increases with the polar free energy gained by introducing oxygen functional groups onto the CF surface during surface treatment.
Functional Group Change Mechanism by Nitric Acid Treatment
Based on the analysis of the mechanical and chemical properties of the CFs as a function of nitric acid temperature and treatment time, the surface changes and the oxygen functional group mechanism are schematized in Figure 8. During desizing with acetone, the sizing is removed, and the oxygen atoms of the C-O (hydroxyl) and C=O (carbonyl) groups on the CF surface combine with the oxygen atoms in acetone and are removed as O2 and CO2, as shown in Equation (2). As the nitric acid treatment at 25 °C lengthened from 1 to 5 h, C-C and C=C on the CF surface continued to bond with the oxygen of the nitric acid (see Equation (2)), releasing O2 and CO2, while oxygen bonded to the newly exposed surface; this regenerated and increased the C-O and C=O groups that had been removed during desizing and newly created O=C-O (lactone) groups. In addition, carbon on the CF surface combined with the nitrogen in the nitric acid to increase C-N, and the silicon inside the CF was exposed, raising the measured silicon amount. At 100 °C, on the other hand, the strong energy progressively exposed the CF surface as the time increased from 1 to 5 h, and C-C and C=C bonding was significantly reduced. The oxygen of the nitric acid bonded strongly to the C-O and C=O on the exposed surface, generating and rapidly increasing O=C-O, while Si decreased as the carbon fibers were attacked by the nitric acid and the Si was removed. It is judged that O=C-O would continue to increase beyond 5 h at 100 °C in nitric acid, but the carbon inside the CF reacts violently with the oxygen atoms of the nitric acid, and the CF properties deteriorate rapidly.
Conclusions
In this study, the mechanical and chemical properties of CFs were analyzed as a function of nitric acid temperature and treatment time, and the chemical state changes and oxygen functional group mechanism of the CF surface were characterized for each treatment condition. At 25 °C, the tensile properties did not differ significantly from untreated CF for 1 to 5 h; the carbon of the C-C and C=C bonds on the fiber surface combined with the oxygen in the nitric acid, reducing C-C and C=C, while oxygen was introduced into C-O and C=O, forming O=C-O over up to 5 h. In addition, the carbon on the CF surface combined with the nitrogen of the nitric acid, increasing C-N. At 100 °C, the tensile properties did not differ significantly from untreated CF for up to 1 h; after 1 h, however, they gradually decreased, C-O and C=O reacted violently in the heated nitric acid, and O=C-O was rapidly produced and expanded. Furthermore, the oxygen functional groups first increased greatly and then decreased, while the Si exposed on the surface combined with oxygen and was removed. As a result, the polar free energy increased significantly at 100 °C compared to 25 °C, and the optimal contact angle and polar free energy were obtained at 100 °C for 1 h. Therefore, within the scope of this study, surface treatment in nitric acid at 100 °C for 1 h produced the optimal oxygen functional groups, polar surface energy, and interfacial bonding force without a decrease in tensile strength.
Future experiments should focus on evaluating whether the mechanical characteristics of composites manufactured from CF surface-treated with nitric acid under the optimal conditions are equivalent to those of commercial carbon composites. This can contribute to the commercialization of automobile parts manufactured by this method.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-02-10T16:14:25.984Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "94d8fe6cd21aefb70fecbf66b5cd4f565cda7597",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/15/4/824/pdf?version=1675760415",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "546819248afa3096f6d02f58b02d4a726b2e9722",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44954687 | pes2o/s2orc | v3-fos-license | A Large Atom Number Metastable Helium Bose-Einstein Condensate
We have produced a Bose-Einstein condensate of metastable helium (4He*) containing over 1.5x10^7 atoms, which is a factor of 25 higher than previously achieved. The improved starting conditions for evaporative cooling are obtained by applying one-dimensional Doppler cooling inside a magnetic trap. The same technique is successfully used to cool the spin-polarized fermionic isotope (3He*), for which thermalizing collisions are highly suppressed. Our detection techniques include absorption imaging, time-of-flight measurements on a microchannel plate detector and ion counting to monitor the formation and decay of the condensate.
Ten years after the first experimental realization of Bose-Einstein condensation (BEC) in dilute, weakly interacting atomic systems [1], the field of degenerate quantum gases has developed into a major area of research. For most elements it has not yet been possible to produce Bose-Einstein condensates containing large numbers of atoms. Only for hydrogen, sodium and rubidium have condensates with more than 10^6 atoms been realized [2]. Large condensates provide a better signal-to-noise ratio, allow a study of both the collisionless and the hydrodynamic regime and are especially useful for sympathetic cooling and atom laser applications. In this realm metastable atoms are of particular interest, offering alternative detection methods due to their high internal energy. Absorption imaging is the technique most frequently applied to detect and measure a BEC. Metastable helium (2 3S1 state) is the only species for which detection of the condensate has been performed using a microchannel plate (MCP) detector [3]; the same detector was also used to measure ions produced by Penning ionization (while absorption imaging was not used). In a recent experiment, after pioneering work with metastable neon [4], Schellekens et al. used a position-sensitive MCP detector to observe the Hanbury Brown and Twiss effect both above and below the BEC threshold [5]. In the second experiment in which BEC was realized with He* [6], optical detection was used and up to 6 × 10^5 atoms could be condensed. Recently, this group reported a high precision measurement of the metastable helium scattering length a, which was performed using a two-photon dark resonance [7]. The value a = 7.512 ± 0.005 nm is favorable for experiments with the fermionic isotope 3He*. It ensures a stable ultracold 4He*/3He* boson-fermion mixture, as the inter-isotope scattering length will be large and positive [8]. Large numbers of 4He* atoms all the way down to the critical temperature provide an efficient reservoir for sympathetic cooling and will facilitate the production of degenerate 3He* clouds with large numbers of atoms.
In this letter we present an experiment that combines the various detection methods used previously [3,6] and describe the realization of a BEC of 4He* containing more than 1.5 × 10^7 atoms. This large improvement is primarily due to the application of one-dimensional Doppler cooling inside the magnetic trap rather than three-dimensional Doppler cooling prior to magnetic trapping. Doppler cooling of polarized atoms was originally proposed for atomic hydrogen [9], and has recently been demonstrated for optically dense samples of magnetically trapped chromium atoms [10]. Compared to the laser cooling methods we investigated previously [11,12], this configuration is more efficient and simple.
The experimental setup is an extended and improved version of our previous setup [11]. In short, we start with a beam of metastable atoms produced by a liquid-nitrogen cooled dc-discharge source. The atomic beam is collimated, deflected and slowed by applying laser beams resonant with the 2 3S1 → 2 3P2 transition at 1083 nm. Typically 2 × 10^9 atoms are loaded into a magneto-optical trap (MOT) at a temperature of 1 mK. Since our previous experiments [11] we have installed a new ultrahigh vacuum (UHV) chamber and magnetic trap. The coils for our cloverleaf magnetic trap are placed in water-cooled plastic boxes and positioned in re-entrant windows. Inside the UHV chamber two MCP detectors (Hamamatsu F4655) and an rf-coil are mounted. The first MCP detector is positioned ∼10 cm from the trap center and attracts positively charged ions produced in Penning ionizing collisions: He* + He* → He^+ + He(1 1S) + e^- (or He* + He* → He2^+ + e^-). These ionization processes are the primary loss mechanisms in cold clouds of He*. A second (identical) MCP detector shielded by a grounded grid is positioned 17 cm below the trap center and detects neutral He* atoms that fall upon it. This detector is mounted on a translation stage and can be displaced horizontally to allow a vertical laser beam to pass through the trap center for absorption imaging. Absorption imaging of the MOT cloud determines the number of atoms in the MOT with an accuracy of about 20% [11]; this is used to calibrate the He* MCP detector. When the MOT is loaded we switch off all currents and laser beams. In an applied weak magnetic field we spin-polarize the cloud (atoms are pumped into the m = +1 magnetic sublevel) and switch on the currents of the cloverleaf magnetic trap. Typically ∼60% of the atoms is transferred from the MOT into the magnetic trap in this procedure. We operate the cloverleaf trap at a bias magnetic field B0 = 24 G to suppress excitation of depolarizing transitions in the subsequent one-dimensional Doppler cooling stage. One-dimensional Doppler cooling starts at the same time as the cloverleaf trap is switched on. It is implemented by retroreflecting a weak circularly polarized laser beam along the (horizontal) symmetry axis of the magnetic field. During the cooling pulse the temperature decreases, reducing the size and increasing the optical thickness of the cloud. Cooling in the radial directions relies on reabsorption of spontaneously emitted red-detuned photons by the optically thick cloud. Other possible energy redistribution mechanisms are collisional thermalization and anharmonic mixing. While the collision rate increases from 1.5 s^-1 to 20 s^-1 during Doppler cooling, anharmonic mixing is negligible in our trap. With a laser detuning of one natural linewidth below the resonance frequency for an atom at rest in the center of the trap and an intensity of 10^-3 Isat (Isat = 0.17 mW/cm^2), optimum cooling is realized in 2 seconds. In a separate experiment 10^8 3He* atoms were loaded into the magnetic trap and cooled using the same technique. For identical fermions s-wave collisions are forbidden, while the contribution of higher-order partial waves is highly suppressed in this temperature range for He*. We observed a temperature decrease from 1 mK to 0.15 mK, which suggests that reabsorption of red-detuned photons scattered by atoms in the cloud is the main cooling mechanism in the radial direction.
Figure 1 shows two TOF traces, illustrating the effect of one-dimensional Doppler cooling in our cloverleaf magnetic trap. We typically trap N = 1.5 × 10^9 atoms, which are cooled to a temperature T = 0.13 mK, three times the Doppler limit. This implies a temperature reduction by a factor of 8 and an increase in phase-space density by a factor of ∼600, while practically no atoms are lost from the trap. For comparison, reaching this temperature by means of rf-induced evaporative cooling would result in the loss of ∼90% of the atoms from the trap. In previous experiments [11,12] we applied three-dimensional Doppler cooling or a two-color magneto-optical trap to improve the starting conditions for evaporative cooling. One-dimensional Doppler cooling provides lower temperatures, higher phase-space density and is easier to implement. At this point the lifetime of the atoms in the magnetic trap is about 3 minutes, limited by collisions with background gas. To compress the cloud we reduce the bias field in the trap center to 3 G in 2.5 seconds, which increases the temperature to 0.2 mK. The parameters of our magnetic trap then are modest: the axial and radial trap frequencies are ωz/2π = 47 ± 1 Hz and ω⊥/2π = 237 ± 4 Hz, respectively. Higher frequencies are possible but are not required to achieve BEC. Our procedure is similar to previous experiments on evaporative cooling in our group [13]. After compression we cool the gas by rf-induced evaporative cooling in 12 s to BEC, which is achieved at a temperature of ∼2 µK. We apply a single exponential rf ramp, starting at 50 MHz. The frequency decreases toward zero but the ramp is terminated at around 8.4 MHz. Shorter ramps, down to 2 s, also produce a condensate, albeit with fewer atoms.
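As a quick cross-check of the quoted phase-space-density gain, the sketch below evaluates the harmonic-trap relation ρ ≈ N(ħω̄/k_BT)^3 before and after Doppler cooling. At fixed atom number and trap frequencies the gain is simply the cube of the temperature ratio; the nominal 1 mK → 0.13 mK values give a factor of ≈450, so the quoted ∼600 presumably reflects the unrounded measured temperatures.

```python
import numpy as np

kB, hbar = 1.380649e-23, 1.0546e-34
N = 1.5e9
wbar = 2*np.pi*(47*237**2)**(1/3)   # geometric-mean trap frequency, rad/s
                                    # (wbar cancels in the gain ratio anyway)

def psd(T):
    """Peak phase-space density of an ideal gas in a harmonic trap."""
    return N * (hbar*wbar/(kB*T))**3

T_mot, T_doppler = 1.0e-3, 0.13e-3  # K, nominal values from the text
print(f"gain = {psd(T_doppler)/psd(T_mot):.0f}")   # ~455, i.e. (1/0.13)^3
```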
The most sensitive method to detect BEC is TOF analysis of the expanding cloud on the He* MCP detector. A typical TOF signal, obtained in a single shot, is shown in Fig. 2a. The double structure is a combination of the inverted parabola indicating the presence of a BEC released from the harmonic trap and a broad thermal distribution. This signal is used to determine the number of atoms in the condensate as well as in the thermal cloud. In contrast to the Orsay experiments [3,5], all atoms stay in the m = +1 sublevel during the trap switch-off. Applying the MCP calibration, the area under the fitted curve determines the number of atoms that have hit the detector. When we consider a thermal cloud, this number is only a small fraction of the total number of thermal atoms Nth. Therefore the determination of Nth relies upon the measured temperature and the MCP calibration. The condensate expansion, determined by the mean-field interaction energy, is much slower. Thus the condensate will fall completely on the detector sensitive area (diameter of 1.45 cm), allowing us to measure the number of condensed atoms N0 using the MCP calibration alone. The maximum number of atoms in the condensate deduced in this way is 1 × 10^7. This number is an underestimate due to MCP saturation effects, which will be discussed below. By applying an additional magnetic field pulse we compensate small residual field gradients, which otherwise lead to a slight deviation from free-fall expansion. With stronger field pulses we can also push the cloud towards the detector and in this way realize shorter expansion times. The model used to fit the time-of-flight signals of the partly condensed clouds (Fig. 2a) is a combination of a thermal part (Bose distribution) and a condensate part (parabolic distribution). The chemical potential µ, the number of atoms and the temperature T are the free parameters of the fit; effects of interactions are not included in the function used for the thermal part. In the Thomas-Fermi limit [14], the chemical potential is given by µ^(5/2) = 15 ħ^2 m^(1/2) 2^(-5/2) N0 ω̄^3 a, where ħ is Planck's constant divided by 2π, ω̄ is the geometric mean of the trap frequencies, m is the mass of the helium atom and a = 7.512(5) nm [7] is the scattering length. A maximum value of µ extracted from the fit of the TOF signal is ∼1.3 × 10^-29 J, which corresponds to 5.1 × 10^7 atoms in the condensate. A possible cause for the discrepancy between the number of atoms determined from the integrated signal and from the measurement of the chemical potential may be saturation of the MCP when detecting a falling condensate (peak flux of ∼10^9 atoms/second); this leads to an underestimation of N0 as well as µ. Another possible cause is distortion of the velocity distribution during the trap switch-off and the influence of remaining magnetic field gradients on the expansion of the cloud. This may lead to an overestimation of µ, and therefore also of N0.
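A minimal numerical check of the Thomas-Fermi relation above, using only constants and values quoted in the text; inverting it for the quoted µ ≈ 1.3 × 10^-29 J indeed returns ≈5 × 10^7 condensed atoms.

```python
import numpy as np

hbar = 1.0546e-34            # J s
m    = 6.6465e-27            # kg, mass of 4He
a    = 7.512e-9              # m, s-wave scattering length [7]
wz   = 2*np.pi*47            # axial trap frequency, rad/s
wr   = 2*np.pi*237           # radial trap frequency, rad/s
wbar = (wz*wr**2)**(1/3)     # geometric mean

# Invert mu^(5/2) = 15 hbar^2 m^(1/2) 2^(-5/2) N0 wbar^3 a for N0.
mu = 1.3e-29                 # J, maximum fitted chemical potential
N0 = mu**2.5 / (15 * hbar**2 * np.sqrt(m) * 2**-2.5 * wbar**3 * a)
print(f"N0 = {N0:.1e}")      # ~5.1e7, as quoted in the text
```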
When the MCP detector is shifted horizontally, we can detect the condensate on a CCD camera. A weak (I = 10^-1 Isat), 50 µs long on-resonance laser pulse is applied to image the shadow of the atoms on the CCD camera, for which a quantum efficiency of ∼1.6% is measured at 1083 nm. As expected, the condensate expands anisotropically while the thermal cloud shows an isotropic expansion (Fig. 2b). Absolute calibration of the number of atoms at ∼µK temperatures could not be performed by optical means. The analysis of the absorption images, taken between 1 ms and 70 ms after the trap was switched off, shows that the condensate expansion deviates from the theoretical predictions [15]: it expands faster than expected in the radial direction and slower in the axial. From these observations we conclude that the expansion of the cloud is influenced by magnetic field gradients during the switch-off of the trap. A difference in the switch-off times of the axial and radial confinement could cause an additional imbalance in the redistribution of the condensate interaction energy between the two directions. This may influence the measurements of both the chemical potential and the temperature. In order to check if the interaction energy is conserved, we extract the asymptotic kinetic energy gained during the expansion from absorption images of the cloud [16]. In the Thomas-Fermi approximation this so-called release energy should equal the interaction energy in the trap. We obtain N0 = 4 × 10^7 from this analysis, assuming that no extra energy is added to (or taken from) the system during the trap switch-off. This is not exactly fulfilled in our case as switching is not fast enough to ensure diabaticity.
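For reference, the theoretical expansion against which such images are usually compared is the Thomas-Fermi scaling solution (Castin-Dum); a minimal sketch is given below. Note that it is the generic scaling model, not necessarily the exact calculation of ref. [15], and it ignores the switch-off effects discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

wz, wr = 2*np.pi*47, 2*np.pi*237   # trap frequencies before release, rad/s

# Castin-Dum scaling equations for free expansion of a Thomas-Fermi
# condensate released from an axially symmetric harmonic trap:
#   d2(lr)/dt2 = wr^2 / (lr^3 * lz),  d2(lz)/dt2 = wz^2 / (lr^2 * lz^2)
def rhs(t, y):
    lr, lz, vr, vz = y
    return [vr, vz, wr**2/(lr**3*lz), wz**2/(lr**2*lz**2)]

sol = solve_ivp(rhs, [0, 19e-3], [1, 1, 0, 0], rtol=1e-8)   # 19 ms TOF
lr, lz = sol.y[0, -1], sol.y[1, -1]
print(f"radial scale x{lr:.1f}, axial scale x{lz:.2f}")  # strongly anisotropic
```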
To verify our TOF signal analysis, we plot the chemical potential as a function of N0^(2/5) using data obtained from the MCP measurements (here N0 is the number of condensed atoms measured by integrating the MCP current). The data points lie on a straight line which goes through zero with a slope larger than expected, meaning that either µ is overestimated or N0 is underestimated. The former is supported by the analysis of the absorption images, so we correct µ. The corrected data points as well as the theoretical line are presented in the inset of Fig. 3. The plot also shows that the MCP detector saturates when the number of atoms in the condensate exceeds ∼10^6. When we now extract the number of atoms from the measured chemical potential, after the correction for the distortion during the trap switch-off, we find N0 = 1.5 × 10^7 in our largest condensates. This number is still a lower limit, as the analysis assumes that µ is not affected by saturation of the MCP detector. We, however, measure a reduction in µ when we push the BEC towards the detector, thus increasing the saturation problem.
With a second MCP detector we observed the growth and decay of our condensate by counting the ions produced during evaporative cooling. Due to the increase in density the ion signal increases, although the number of trapped atoms decreases. When BEC sets in, a sharp increase is expected, indicating the formation of a dense cloud in the centre of the trap [3]. This is demonstrated in Fig. 2c, which shows the growth of the condensate as well as its decay.
The dynamics of formation and decay of the condensate is an interesting aspect that was discussed and investigated earlier to some extent [18]. In our group a model was developed describing the decay of the condensate in the presence of a thermal fraction [19]. The model assumes thermalization to be fast compared to the rate of change in thermodynamic variables, so the system remains in thermal equilibrium during the decay. It was shown that under this assumption a transfer of atoms should occur from the condensate to the thermal cloud, enhancing the condensate decay rate. To verify this, we performed measurements of the BEC lifetime using the TOF signal. Due to the high detection efficiency it was possible to detect a condensate up to 75 s after it was produced. Results of these measurements are summarised in Fig. 3. We fit the model to the experimental data for a quasi-pure BEC decay; two- and three-body loss rate constants are used as free parameters. Good agreement with the experiment is found for two- and three-body loss rate constants β = 2 × 10^-14 cm^3 s^-1 and L = 9 × 10^-27 cm^6 s^-1, respectively, which compares well with theory [17]. When we use the extracted values of β and L in our model for the decay of the condensate in the presence of a thermal fraction, the dashed curve included in Fig. 3 is obtained. The agreement with the experiment is good, so we can conclude that the model reproduces the data.
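To illustrate the structure of such a fit, here is a minimal sketch of a quasi-pure Thomas-Fermi condensate decaying through the quoted two- and three-body channels, plus the ∼3 minute background-gas lifetime mentioned earlier. The Thomas-Fermi density averages ⟨n⟩ = (4/7)n0 and ⟨n^2⟩ = (8/21)n0^2 are standard results; this is only a schematic stand-in for the full model of ref. [19], which also couples in the thermal cloud.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar, m, a = 1.0546e-34, 6.6465e-27, 7.512e-9
wbar = 2*np.pi*(47*237**2)**(1/3)
g = 4*np.pi*hbar**2*a/m                    # interaction strength
beta = 2e-14 * 1e-6                        # two-body loss, m^3/s
L3   = 9e-27 * 1e-12                       # three-body loss, m^6/s
tau  = 180.0                               # s, background-gas lifetime

def n0(N):                                 # Thomas-Fermi peak density
    mu = (15*hbar**2*np.sqrt(m)*2**-2.5*N*wbar**3*a)**0.4
    return mu/g

def dNdt(t, y):
    N = max(y[0], 1.0)
    return [-N/tau - (4/7)*beta*n0(N)*N - (8/21)*L3*n0(N)**2*N]

sol = solve_ivp(dNdt, [0, 75], [1.5e7], dense_output=True)
for t in (0, 10, 75):                      # illustrative only, not a fit
    print(f"t = {t:4.0f} s: N0 ~ {sol.sol(t)[0]:.2e}")
```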
To summarize, we have realized a condensate of 4He* containing more than 1.5 × 10^7 atoms and studied its growth and decay by measuring the ion production rate in situ, observing its ballistic expansion by absorption imaging and by recording the time-of-flight signal on an MCP detector. The main ingredient that made this large atom number possible is one-dimensional Doppler cooling in the magnetic trap. We demonstrated that this technique can also be applied to cool spin-polarized helium fermions, where the Pauli principle forbids s-wave collisions. Combining both isotopes in one setup may allow the observation of Fermi degeneracy in boson-fermion mixtures of metastable atoms.
FIG. 1: Time-of-flight signals of 4He* atoms released from the magnetic trap, with and without one-dimensional Doppler cooling. The apparent signal increase after Doppler cooling is due to the increased fraction of atoms that is detected at lower temperature. The line is a fit assuming a Maxwell-Boltzmann velocity distribution.
FIG. 2: Observation of BEC, (a) on the He* MCP detector; the dashed fit shows the condensed fraction and the dash-dotted fit the broader thermal distribution, (b) on a CCD camera; after an expansion time of 19 ms a round thermal cloud surrounding a cigar-shaped condensate is visible, (c) on the ion MCP detector; the condensate starts to grow at t = -0.95 s, the rf ramp ends at t = -0.45 s, and at t = 0 the trap is switched off.
FIG. 3: Decay of a quasi-pure BEC (circles) and a BEC in the presence of a large (Nth = N0) thermal fraction (squares). The dashed curve represents the atomic transfer model [19], with two- and three-body loss rate constants obtained from a fit (full curve) to the decay of the quasi-pure BEC. Data points that lie above N0 = 10^6 are corrected for the saturation effects. Inset: chemical potential (µ) as a function of N0^(2/5). The same data as for the quasi-pure BEC decay are used in the plot. The value of µ is multiplied by a constant (0.61) to bring the data points onto the theoretical line (see text).
"year": 2005,
"sha1": "db01d6fde4dbadf021b3cd4981710699b276e767",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0510006",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "db01d6fde4dbadf021b3cd4981710699b276e767",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14956392 | pes2o/s2orc | v3-fos-license | Successful Treatment of Ascites using a Denver® Peritoneovenous Shunt in a Patient with Paroxysmal Nocturnal Hemoglobinuria and Budd-Chiari syndrome.
A 56-year-old man was diagnosed with aplastic anemia and paroxysmal nocturnal hemoglobinuria at 43 years of age and treatment with cyclosporin A was started. Liver cirrhosis, ascites, and thrombus in the hepatic veins were found at 56 years of age and Budd-Chiari syndrome (BCS) was diagnosed according to angiography findings. He was treated with diuretics and paracentesis was performed several times, but with limited efficacy. A Denver® peritoneovenous shunt (PVS) was inserted into the right jugular vein; his ascites and renal function improved immediately and his general condition has remained good for 12 months since starting the above treatment regimen. A PVS is a treatment option for ascites due to BCS.
Introduction
A case of aplastic anemia-paroxysmal nocturnal hemoglobinuria (AA-PNH) syndrome was initially reported as AA presenting with symptoms characteristic of PNH during the course of the disease (1)(2)(3). PNH is a rare acquired disorder of pluripotent hematopoietic stem cells that is characterized by intravascular hemolysis and venous thrombosis (4,5). It is caused by a somatic mutation in the X-linked phosphatidylinositol glycan class A (PIG-A) gene, which is required for the synthesis of the glycosyl phosphatidylinositol (GPI) anchor. The mutation results in the absence of key complement regulatory proteins CD55 and CD59 (6)(7)(8)(9). On erythrocytes, CD55 and CD59 deficiency leads to intravascular hemolysis upon complement activation. Intravascular complement-mediated lysis results in anemia, hemoglobinuria, and venous thrombosis (4). Thrombotic events occur in 40% of PNH patients and hepatic vein thrombosis (Budd-Chiari syndrome; BCS) is the most frequent manifestation (40.7%) (10)(11)(12).
Thrombosis in a hepatic vein results in abdominal pain, hepatomegaly, and ascites, which are the characteristic findings in BCS (11,13). The management of ascites centers on treating the thrombosis, for example with thrombolysis (14)(15)(16). Percutaneous hepatic vein balloon angioplasty (17,18) should be considered in early thrombosis. Anticoagulants are selected for long-term management (5,19). Patients with thrombosis have a high risk of recurrence and anticoagulant treatment is necessary, although the duration of treatment is controversial because these patients also have a high risk of bleeding (5).
The accumulation of ascites is a common complication in BCS and medically intractable ascites can be treated with a peritoneovenous shunt (PVS) (18,20), surgical portosystemic shunt (13,21), transjugular intrahepatic portosystemic shunt (TIPS) (22)(23)(24), liver transplantation (13,21), or paracentesis (20). Since Leveen et al. first reported the treatment of refractory ascites using a PVS (25), several modifications have been made. The Denver PVS transfers fluid from the peritoneal space to the circulatory system and can be used to treat ascites (20). We herein report the successful treatment of ascites using a Denver PVS in a patient with PNH and BCS.
Case Report
A 56-year-old man was diagnosed with AA at 34 years of age and treated with cyclosporin A. He developed PNH at 43 years of age and was thought to have AA-PNH syndrome. At 56 years of age, his abdomen became distended and his weight increased from 66 to 72 kg in 2 weeks. He was admitted to our hospital to evaluate the weight increase (Fig. 1a). Biochemical data showed liver dysfunction and pancytopenia (Table 1). Computed tomography (CT) revealed liver cirrhosis, ascites, and thrombus formation in the left hepatic vein (LHV) and hepatic inferior vena cava with stenosis (Fig. 1b and c). The cause of thrombosis was thought to be AA-PNH. No other risk factors for BCS were found, such as the JAK2 V617F mutation (26) (Table 1).
Angiography showed stenosis of the hepatic inferior vena cava and hepatic veins due to thrombosis, with collateral formation (Fig. 1d and e). According to these findings, the patient was diagnosed with BCS. The right and middle hepatic veins were obstructed by thromboses. The thrombosis in the inferior vena cava was extremely large. Urokinase injection and balloon dilation of the LHV were performed several times. Heparin was administered for 2 weeks and then switched to warfarin at day 40. Although there was no change in the thrombosis, his weight and the ascites decreased and he was discharged on day 97.
He was re-admitted to our hospital for variceal bleeding on day 165 and endoscopic variceal ligation (EVL) was performed (Fig. 2a). After EVL, the ascites increased and was not controllable with diuretics. The increased ascites led to compartment syndrome and an altered renal function. In addition to paracentesis, cell-free and concentrated ascites reinfusion therapy (CART) was performed on days 200, 218, and 234. CART therapy was temporarily effective; however, the ascites re-accumulated rapidly a few days after drainage. The ascites was transudative, caused by cirrhosis, and there was no evidence of infection. Because the right and middle hepatic veins and the hepatic inferior vena cava were obstructed by thrombosis, TIPS would have been ineffective. Therefore, we considered inserting a Denver PVS. His heart function was normal according to echocardiography and he could tolerate the intravenous return of ascites. The day before the procedure (day 248), he weighed 67.8 kg and 3 L of ascites were drained to reduce the returned volume of ascites. On day 249, a percutaneous Denver PVS was inserted by a surgeon from the right upper quadrant of the abdomen, subcutaneously through the thorax, and placed via the right internal jugular vein (Fig. 2b). During PVS insertion, warfarin was replaced by heparin. After the procedure was completed, warfarin was re-started. The urine flow increased to 4 L/day soon after transferring the ascites intravascularly. His weight decreased from 65.4 to 63.1 kg the next day. His renal function improved and the use of diuretics could be reduced (Table 1). His weight did not increase after inserting the PVS and he was discharged on day 259. His weight had decreased to 50 kg 1 month after discharge and his distended abdomen was markedly improved with reduced ascites (Fig. 2c). Although his weight had increased slightly 4 months after shunt insertion because his appetite had improved, the ascites was controlled according to abdominal and pelvic CT 4 months after discharge (Fig. 2d and e). While the thrombosis remained, it was found to have slightly diminished in size. His general condition was good, his quality of life had improved and he could finally return to work. Six months after inserting the Denver PVS, the renal and liver function tests improved markedly (serum creatinine, 0.69 mg/dL; estimated glomerular filtration rate (eGFR), 92.7 mL/min/1.73 m^2; total bilirubin (T-BIL), 2.0 mg/dL; aspartate aminotransferase (AST), 21 U/L; and alanine aminotransferase (ALT), 15 U/L). Although his weight had increased, an estimation of body composition using an impedance assay showed that this was because his muscle mass had increased (Table 2). His esophageal varices became slightly enlarged at 10 months after PVS insertion.
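As a small consistency check on the quoted renal recovery, the sketch below recomputes the eGFR from the reported creatinine using the Japanese eGFR equation (194 × Cr^-1.094 × age^-0.287 for men); that this was the equation actually used is an assumption, and the patient's exact age at follow-up is taken as 57.

```python
def egfr_jp(creatinine_mg_dl: float, age_years: float, female: bool = False) -> float:
    """Japanese eGFR equation (Matsuo et al.), mL/min/1.73 m^2."""
    egfr = 194 * creatinine_mg_dl**-1.094 * age_years**-0.287
    return egfr * 0.739 if female else egfr

# Cr = 0.69 mg/dL six months after PVS insertion; age ~57 is an assumption.
print(f"eGFR ~ {egfr_jp(0.69, 57):.0f}")  # ~91, close to the reported 92.7
```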
Discussion
Our patient with AA-PNH syndrome who developed BCS and ascites was successfully treated using a Denver PVS. Prior to treatment, urokinase injection, percutaneous transluminal balloon angioplasty (PTA) of the LHV, fibrinolytic therapy, and anticoagulation all failed to resolve thrombosis caused by AA-PNH. The PVS dramatically improved the patient's quality of life and his general condition was good at the 12-month follow-up.
Thrombosis results in a high morbidity and mortality. Overall, 40-67% of PNH patients will die of thrombotic complications, and an initial thrombotic event increases the relative risk of death 5- to 10-fold (27). Differences have been reported in the incidence of thrombosis in PNH (28). Thrombosis was observed at the diagnosis/follow-up in 19.3/31.8% of Western patients versus only 6.2/4.3% of Japanese patients (28). Significantly more Western patients died from thrombosis. Retrospective studies have suggested that the risk of thrombosis is correlated with the size of the PNH granulocyte clone (29,30). A lower risk was reported in Chinese and Japanese patients, which is likely explained by a significantly smaller PNH granulocyte clone size in these patients compared with Western patients (28). A survival analysis revealed that a poor survival was associated with an age over 50 years, severe leukopenia/neutropenia at diagnosis, and a severe complicating infection, in addition to complicating thrombosis at the diagnosis or follow-up in Western patients and renal failure in Japanese patients (28). Our case manifested predominantly as thrombosis in PNH. While this is rare in Japan, no other thrombosis risk factors, including the JAK2 mutation, were observed. In the treatment of thrombosis, in addition to antithymocyte globulin plus cyclosporin (31,32) and primary prophylaxis with vitamin K antagonists, such as warfarin (5,19), the usefulness of eculizumab, a monoclonal antibody against complement factor C5, has been reported (33,34). These agents effectively reduce intravascular hemolysis and thrombotic risk and have dramatically improved the prognosis of PNH (27). In our patient, fibrinolysis and anticoagulation therapy failed to resolve the thrombus. We suspected that the thrombus was mature, and we could not increase the dosages of these agents because of the high risk of bleeding. Moreover, variceal bleeding altered the hemostatic system. Eculizumab was the next option, and we needed to consider its indications carefully.
The accumulation of ascites is the most worrisome complication of BCS, and medically intractable ascites has been treated with shunting and drainage. Ascites control is achieved sooner after PVS insertion than after TIPS (73% vs. 46% after 1 month), although TIPS is favored for long-term efficacy (85% vs. 40% at 3 years) (23). PVS and paracentesis are reported to be equally effective at relieving refractory ascites (20). Liver transplantation has been performed to treat liver cirrhosis; however, the prognosis is poor, and thrombosis can recur (35)(36)(37). In our patient, thrombolysis, percutaneous hepatic vein balloon angioplasty, and warfarin treatment were partially effective, but not sufficient. We performed paracentesis several times, but its effect was temporary. Finally, the ascites was treated successfully for a longer period using a Denver PVS.
Reported complications after Denver shunt insertion include variceal bleeding, heart failure, shunt obstruction, disseminated intravascular coagulation (DIC), and pulmonary edema (38). In one study, DIC occurred in 37% of patients and was fatal in 78% (39). To reduce the risk of DIC, ascites should be drained prior to PVS insertion, which will reduce the intravenously returned volume (39). Minimizing the returned volume of ascites may also contribute to reducing the risk of heart failure and pulmonary edema (38). To prevent sepsis-induced DIC, ascites-induced infection should be ruled out. In addition, heparin treatment will inhibit thrombus formation, bleeding, and the development of DIC. Although we could not control the speed at which the ascites returned, no severe complications of ascites were observed in our patient.
Variceal rupture was observed during the second hospitalization. A slight enlargement of the varices was seen 10 months after the insertion. As the PVS procedure is not a radical treatment of cirrhosis, the patient's remaining liver function was preserved after insertion and was not further exhausted thereafter. Thus, PVS-treated patients are able to eat well, resulting in a better nutritional status. Following insertion, our patient's anticoagulant therapy was switched from warfarin to heparin; warfarin was re-started after the procedure. Warfarin administration was responsible for the decrease in the PT%; however, there was no reduction in his liver function. Major prognostic factors for BCS are the prothrombin time, serum bilirubin level, creatinine, and presence of hepatic encephalopathy and ascites (18,40). Control of ascites might be important for improving the prognosis. In our institution, five cases of BCS have been seen in the last 15 years, including the present case. The cause of BCS was unknown in the other cases. One case was treated with PTA and remained alive for 14 years. In three cases, PTA and thrombolysis were performed; however, two of these cases needed liver transplantation. In our patient, liver transplantation was contraindicated due to the complication of PNH, the future medical treatment of which is currently under consideration. In the interim, the Denver PVS has been a useful treatment.
In conclusion, ascites control is important to improve the patient's quality of life and the prognosis of BCS. Although an improvement in the prognosis following PVS insertion remains to be confirmed formally, prior to treatment our patient was dying, whereas afterwards his nutrition improved and he was able to return to work. The Denver PVS is one treatment option when paracentesis is effective but must be repeated multiple times for intractable ascites.
"year": 2016,
"sha1": "e551844d059a89a77511866d1127bd67e519bc62",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/55/20/55_55.7087/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e551844d059a89a77511866d1127bd67e519bc62",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Remdesivir therapy in patients with COVID-19: A systematic review and meta-analysis of randomized controlled trials
Purpose To perform a systematic review and meta-analysis of randomized controlled trials that examined remdesivir treatment for COVID-19. Materials and methods A systematic literature search was performed using Pubmed, Embase, and ClinicalTrials.gov to identify studies published up to October 25, 2020 that examined COVID-19 treatment with remdesivir. A total of 3 randomized controlled trials that consisted of 1691 patients were included in the meta-analysis. Results The odds for mechanical ventilation (MV) or extracorporeal membrane oxygenation (ECMO) following treatment was significantly lower in the remdesivir group compared to the control group (OR = 0.48 [95% CI: 0.34; 0.69], p < 0.001). The odds of early (at day 14/15; OR = 1.42 [95% CI: 1.16; 1.74], p < 0.001) and late (at day 28/29; OR = 1.44 [95% CI: 1.16; 1.79], p = 0.001) hospital discharge were significantly higher in the remdesivir group compared to the control group. There was no difference in the odds for mortality in patients treated with remdesivir (OR = 0.77 [95% CI: 0.56; 1.06], p = 0.108). Conclusions Remdesivir attenuates disease progression, leading to lower odds of MV/ECMO and greater odds of hospital discharge for COVID-19 patients. However, remdesivir does not affect odds of mortality.
Introduction
There have been approximately 65.8 million confirmed cases of coronavirus disease (COVID-19) as of December 7, 2020, which has led to approximately 1.5 million deaths worldwide [1]. COVID-19, the disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), can lead to acute respiratory distress syndrome (ARDS), primarily in immunocompromised patients, the elderly, and individuals with comorbidities (e.g., obesity, hypertension, chronic obstructive pulmonary disease) [2-4]. While current therapies aim to prevent respiratory complications, effective pharmacological therapies that target the virus are lacking.
Remdesivir, an adenosine nucleotide analog that inhibits SARS-CoV-2 RNA-dependent RNA polymerase [5], was recently approved for COVID-19 treatment in adults and pediatric patients (≥12 years) by the U.S. Food & Drug Administration [6]. However, the generalized treatment effects of remdesivir across multiple clinical outcomes and populations are poorly understood due to the small number of available studies, which includes studies that were terminated early due to a lack of COVID-19 patients [7]. Here, we performed a systematic review and meta-analysis in an effort to pool results from randomized controlled trials (RCTs) to better characterize the efficacy of remdesivir for the treatment of COVID-19.
Methods
A systematic literature search was performed to identify studies that examined COVID-19 treatment with remdesivir. Search terms included the following: "(remdesivir OR GS-5734) AND (COVID-19 OR SARS-CoV-2)". Literature searches were performed in PubMed, Embase, and ClinicalTrials.gov up to October 25, 2020. The following article types were excluded: meta-analysis or review, editorial, opinion article, correspondence, letter to the editor, technical note, in vitro or in vivo study, methods article, protocol, case report, recommendations, or guidelines.
Studies were also excluded if they failed to report remdesivir as a COVID-19 treatment, if they did not report patient outcomes, or if they possessed only one arm (no comparison group). Risk of bias and levels of evidence for each study were assessed as described in the Supplemental Methods. Primary outcomes were the need for mechanical ventilation (MV) or extracorporeal membrane oxygenation (ECMO), hospital discharge (early and late), and mortality.
Data analysis
All data were entered into a Microsoft Excel sheet and imported into R for analysis using the metafor package [8]. The 'digitize' package was used to extract data directly from figures in some cases [9]. We used Higgins' I² statistic to estimate the percentage of variability in effect estimates that is due to heterogeneity rather than sampling error [10]. Effect sizes were computed as log-transformed odds ratios (ORs) using the exact Mantel-Haenszel method [11]. To aid interpretation, log-transformed effect sizes were converted to a probability scale. A separate random-effects model was fit for each outcome measure. Accordingly, the between-study variance component was estimated using a restricted maximum likelihood (REML) estimator, with 95% CIs computed using the Q-profile method [12]. All statistical analyses were performed in RStudio (Version 1.3.959, RStudio, PBC).
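As an illustration of the pooling steps described above, the following R sketch uses the metafor package on hypothetical 2×2 event counts (the counts are placeholders, not the values extracted from the included trials). The inverse-variance log OR is shown for brevity; metafor's rma.mh() offers a Mantel-Haenszel alternative closer to the method cited here.

```r
# Minimal random-effects meta-analysis sketch (R, metafor).
# Event counts below are hypothetical placeholders, not trial data.
library(metafor)

dat <- data.frame(
  study  = c("Trial A", "Trial B", "Trial C"),
  ev_rdv = c(52, 30, 14),  # events, remdesivir arm (placeholder)
  n_rdv  = c(538, 158, 196),
  ev_ctl = c(82, 28, 21),  # events, control arm (placeholder)
  n_ctl  = c(521, 78, 200)
)

# Log odds ratio and sampling variance for each study
es <- escalc(measure = "OR",
             ai = ev_rdv, n1i = n_rdv,
             ci = ev_ctl, n2i = n_ctl, data = dat)

# Random-effects model; between-study variance estimated by REML
res <- rma(yi, vi, data = es, method = "REML")
summary(res)                # pooled log OR, Q test, I^2
predict(res, transf = exp)  # pooled OR with 95% CI and prediction interval
confint(res)                # Q-profile CI for the heterogeneity estimates
```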
Results
A total of 655 articles were screened that fulfilled search criteria, of which 17 articles were selected for full-text review (Fig. 1). Three randomized controlled trials (RCTs) with a total study population of 1691 patients were included in the quantitative meta-analysis [7,13,14]. Among this patient population, 892 (52.7%) patients received remdesivir for 10 days and 799 (47.3%) patients received control therapies. Remdesivir was administered as a 200 mg loading dose on day 1, followed by 100 mg doses daily until the end of treatment. While remdesivir was administered over 10 days for all studies, Spinner et al. included a 5-day regimen in addition to the 10-day regimen. Baseline characteristics of the studies included in the meta-analysis are provided in Table 1.
Need for mechanical ventilation or extracorporeal membrane oxygenation
Cumulative rates of MV or ECMO over the 28-day or 29-day duration of the studies were collected. At final follow-up, the proportion of patients requiring MV or ECMO was 0.036 (95% CI: 0.006; 0.181) in the remdesivir group and 0.088 (95% CI: 0.024; 0.277) in the control group. The need for MV or ECMO was significantly lower in the remdesivir group compared to the control group (OR = 0.48 [95% CI: 0.34; 0.69], p < 0.001; Fig. 2). The estimated between-study variability unattributable to sampling error ranged from low to high (I² = 0.0% [95% CI: 0.0%; 79.6%]).
Hospital discharge
Cumulative rates of hospital discharge at 14 or 15 days (early discharge) as well as at 28 or 29 days (final follow-up) were collected. The odds of early (OR = 1.42 [95% CI: 1.16; 1.74], p < 0.001) and late (OR = 1.44 [95% CI: 1.16; 1.79], p = 0.001) hospital discharge were significantly higher in the remdesivir group compared to the control group (Fig. 4). The estimated between-study variability unattributable to sampling error ranged from low to high (I² = 0.0% [95% CI: 0.0%; 88.3%]).

Mortality

There was no difference in the odds for mortality between patients treated with remdesivir and controls (OR = 0.77 [95% CI: 0.56; 1.06], p = 0.108; Fig. 5).
Risk of bias
Of the RCTs included in the quantitative meta-analysis, 2 studies were considered high-quality (++) and 1 study was considered acceptable (+) according to the SIGN methodology for controlled trials. All studies demonstrated sufficient congruity between the research methodology, methods of data collection, study methodology, and interpretation of results and conclusions. As such, no studies were excluded based on quality. The results of our quality appraisal are summarized in Supplementary File 1.
Discussion
Here, we performed a systematic review and meta-analysis of studies that examined remdesivir treatment for COVID-19. COVID-19 patients that received remdesivir had lower odds for MV or ECMO following treatment as compared to patients that received control therapy. Remdesivir increased the odds for hospital discharge; however, remdesivir treatment did not reduce the odds for mortality in COVID-19 patients.
Remdesivir appeared to attenuate the progression of COVID-19, as evidenced by lower odds of MV or ECMO and greater odds for patient recovery (discharge). In the Adaptive COVID-19 Treatment Trial (ACTT-1), which consisted of 1062 patients with confirmed COVID-19 and evidence of lower respiratory tract involvement, 13% of patients treated with remdesivir (10 days) required MV or ECMO after treatment, while 23% of control patients required that same level of support [14]. Fewer remdesivir patients (17%) required noninvasive ventilation (NIV) or high-flow oxygen [14]. Similarly, patients requiring any oxygen support at enrollment required fewer days of support with remdesivir (13 vs. 21 days) than with placebo. These data indicate that remdesivir reduces the need for MV or ECMO and may provide some benefit in mitigating overall oxygen requirements in COVID-19 patients. The odds for hospital discharge were greater with remdesivir treatment at the 14/15- and 28/29-day time points. In the ACTT-1 trial, the median time to recovery was 50% longer in the control group (15 days) as compared to patients that received remdesivir (10 days) [14], and Wang et al. reported a similar treatment effect (control: 23 days; remdesivir: 18 days) [7]. Moreover, patients who received remdesivir were more likely to experience improvement in clinical status as compared to placebo [14] or standard therapy [13]. Time to recovery favored remdesivir in patients who required supplementary oxygen but were not critically ill [14]. Indeed, remdesivir treatment was less effective at expediting recovery in patients with greater disease severity. Beigel et al. noted that the risk ratios for time to recovery did not favor remdesivir in patients that required NIV/high-flow oxygen or MV/ECMO at baseline, although follow-up times may have been insufficient for definitive conclusions [14]. The median time from onset of symptoms to start of treatment ranged from 9 to 10 days [7,13,14]; however, Beigel et al. noted that the rate ratio for recovery decreased from 1.37 (1.14-1.64) to 1.20 (0.94-1.52) in patients treated after 10 days from symptom onset. In a patient with COVID-19, the viremic phase lasts for a few days and can then be followed by a hyper-inflammatory response. Early treatment may attenuate viremia and therefore blunt the hyper-inflammatory response as well. This may explain why remdesivir works in patients with early illness but not in those in whom the hyper-inflammatory response has already set in. These data suggest that remdesivir is effective at mitigating disease progression and can expedite recovery, especially when administered early in the disease course. However, remdesivir may be less effective at enhancing recovery times in critically ill patients.

(Table 1 legend: Age is expressed as mean ± standard deviation or median (interquartile range). RDV = remdesivir; NIV = noninvasive ventilation; HF = high-flow oxygen; MV = mechanical ventilation; ECMO = extracorporeal membrane oxygenation; 5D = 5-day; 10D = 10-day.)

Fig. 2. Forest plot of subgroup comparisons of need for mechanical ventilation or ECMO at 28/29 days. Pooled results were computed using restricted maximum likelihood, with 95% confidence intervals computed using the Q-profile method. A 95% prediction interval was also computed (black bar).
Remdesivir did not lower the odds for mortality in COVID-19 patients. However, disease severity and age differed across studies (Table 1), which could have influenced these results. Total mortality data from Spinner et al. were the lowest observed in the present analysis, which was consistent with the moderate level of disease reported in this study [13]. Indeed, <1% of COVID-19 patients in this study required NIV/high-flow oxygen at baseline and no patients required MV/ECMO. Thus, the moderate severity of disease overall would be associated with relatively lower odds for mortality, which would make it difficult to detect a meaningful reduction in mortality odds with remdesivir. Wang et al. detected similar rates of mortality between remdesivir (14%) and placebo (13%) [7], while Beigel et al. noted lower, albeit nonsignificant, mortality rates with remdesivir (7% vs. 12%) [14]. Wang et al. had fewer total patients (16%) that required NIV/high-flow oxygen or MV/ECMO as compared to Beigel et al. (45%); however, the patient population was older (Table 1), which could have contributed to similar overall mortality despite the lesser disease severity than observed in Beigel et al. As discussed previously, the absence of a mortality benefit could be due, in part, to remdesivir's inability to rescue clinical deterioration in critically ill patients. In Beigel et al., mortality analysis by clinical status favored remdesivir in patients with a category 5 status (hospitalized, requiring supplemental oxygen; HR: 0.30 [0.14-0.64]), while no benefit was observed in patients with category 6 (hospitalized, requiring NIV or high-flow oxygen; HR: 1.02 [0.54-1.91]) or category 7 status (hospitalized, requiring MV or ECMO; HR: 1.13 [0.67-1.89]). Although Wang et al. failed to detect a significant effect of remdesivir on mortality, the authors did note a statistically insignificant shift in mortality related to early remdesivir treatment. Patients that received remdesivir within 10 days from symptom onset exhibited lower rates of mortality (11% vs. 15%). In contrast, patients that received remdesivir >10 days from symptom onset exhibited higher rates of mortality (14% vs. 10%) as compared to placebo. In all, these data suggest that remdesivir does not lower the odds of mortality in COVID-19 patients, potentially due to its inability to rescue clinical deterioration in critically ill patients.
Fig. 5. Forest plot of subgroup comparisons of mortality at 28/29 days. Pooled results were computed using restricted maximum likelihood, with 95% confidence intervals computed using the Q-profile method. A 95% prediction interval was also computed (black bar).

We did not include clinical improvement in our meta-analysis due to the heterogeneity of methods used to determine clinical improvement in the included studies. Clinical status scales, while possessing similar qualities, differed in the number of clinical categories (6-8), the directionality of the scale (category 1: discharged [7,14] vs. dead [13]), and the methods used to determine clinical improvement, which could vary according to the magnitude of improvement achieved per patient (+1, +2). Nevertheless, 2 of the 3 randomized controlled trials detected significant improvements in clinical status [13,14], while Wang et al. was underpowered due to early termination of the study [7]. Patients that received remdesivir experienced clinical improvement, with odds ratios that ranged from 1.60 to 1.65 in favor of remdesivir [13,14]. Clinical improvement was achieved with 5- and 10-day regimens and was assessed 4-5 days following the end of treatment. Spinner et al. did not report statistically significant clinical improvement with their 10-day remdesivir arm (assessed at day 11, p = 0.18), although this study was open-label and it was believed that this design had some effect on clinical outcomes [13]. It was noted that by day 14, patients that received either 5- or 10-day remdesivir treatments exhibited improvements in clinical status as compared to standard therapy. While Spinner et al. noted clinical improvement, the meaningfulness of the improvement was uncertain (e.g., category 7 [not hospitalized]: 76% remdesivir vs. 67% standard therapy).
Several one- and two-arm studies (multiple remdesivir dosage regimens, no comparator) have noted improvements in clinical status with remdesivir, especially in patients with non-critical forms of disease [15,16]. Antinori et al. (2020) noted better clinical outcomes (7-point ordinal scale) with remdesivir in non-ICU patients [16]. At 28 days, only 33% of ICU patients treated with remdesivir had been discharged, as compared to 82% of non-ICU patients. Gilead Sciences conducted a randomized, open-label trial of 397 severe COVID-19 patients treated with remdesivir, of whom 200 patients were treated for 5 days and 197 patients were treated for 10 days [17]. It is important to note that only 2% of patients in the 5-day group and 5% of patients in the 10-day group required MV or ECMO at baseline. The study demonstrated similar improvement (2 points or more on an ordinal scale) in clinical status in both groups on Day 14 (5-day: 64%, 10-day: 54%), after adjusting for differences in baseline clinical status. Taken together, these data suggest that remdesivir treatment improves the clinical status of COVID-19 patients, especially those with non-critical forms of disease. However, the magnitude of improvement in clinical status is moderate.
Limitations
There were a limited number of randomized controlled studies available to assess the efficacy of remdesivir treatment for COVID-19. Given the moderate but significant treatment effect, we excluded small observational studies in order to better characterize the treatment effects associated with remdesivir, thereby reducing the statistical noise and underlying bias that such studies can potentially contribute to the analysis.
Conclusions
Remdesivir treatment reduced the need for MV and improved hospital discharge rates. However, a mortality benefit with remdesivir is unclear. Ongoing clinical trials will further elucidate remdesivir's role as a COVID-19 therapy.
Declaration of competing interest
The authors declare no competing interests related to the subject of this manuscript. J.M.P. is employed by Nested Knowledge, Superior Medical Experts, and Marblehead Medical. K.M.K. works for and holds equity in Nested Knowledge, Superior Medical Experts, and Marblehead Medical. A.R.D. and K.W.E. are employed by Superior Medical Experts.
"year": 2021,
"sha1": "6b36f18bb89a7e014f97cf7c6f26238c90abd218",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.amsu.2020.12.051",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3720b5a9d28511c5a93161d011e3d0311001d242",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Slurry Preparation Effects on the Cemented Phosphogypsum Backfill through an Orthogonal Experiment
The cemented phosphogypsum (PG) backfill technique provides a new method for the massive consumption of PG, thereby alleviating the environmental pollution caused by PG. This study considered the effects of slurry preparation on the performance of cemented PG backfill. An L16(4^4) orthogonal experiment was designed to analyze four factors, namely the solid content, phosphogypsum-to-binder ratio (PG/B ratio), stirring time and stirring speed, with each factor having four levels. According to the range analysis, the solid content played the dominant role in controlling the bleeding rate, while the setting times strongly depended on the PG/B ratio. In terms of strength development of the backfill, the PG/B ratio was shown to be the most significant factor determining the unconfined compressive strength (UCS), followed by the solid content, stirring time and stirring speed. Furthermore, the results showed that the slurry preparation affected the environmental behavior of the impurities originating in the PG. Analysis of the concentrations of impurities in the bleeding water of the slurry as well as in the leachates of the tank leaching test showed that the release of F− and SO4 2− clearly increased with the increase in the PG/B ratio, while the release of PO4 3− always remained at relatively low levels.
Introduction
In the phosphate industry, phosphoric acid is usually extracted from phosphate ore by using concentrated sulfuric acid, leaving a by-product of phosphogypsum (PG) which is mainly composed of CaSO4·2H2O. Approximately 4~5 t of PG is generated when 1 t of phosphoric acid is produced, resulting in a global production of PG of about 100 to 280 Mt every year [1-3]. Besides CaSO4·2H2O (>90%), PG also contains impurities such as phosphate (PO4 3−), fluoride (F−), organic matter, heavy metals and radioactive components [4-7]. These minor compounds have not only caused serious environmental pollution at storage sites [8,9], but have also hindered the reuse of PG, such that only 15% of PG is recycled worldwide [10,11]. The local government of Guizhou, China recently launched a new policy stipulating that no new surface land would be allocated for PG storage. Therefore, it is extremely urgent to find an effective way to consume such a large amount of phosphogypsum. Alternatively, it has been estimated that about 60% of the PG generated could be consumed by using PG as the aggregate in the backfill process [12]. In the cemented PG backfill process, the PG, together with a hydraulic binder, water and some additives, is mixed homogeneously to form a slurry on the land surface, and the prepared slurry is then transported to the underground stopes by gravity or pumping. In the stopes, strength develops through the hydration reaction of the binder, ensuring the stability of underground mine stopes and enabling the maximum recovery of ores. In addition, the hardened backfill can effectively stabilize/solidify the minor compounds in PG, which is also a significant advantage for environmental protection.

In the cemented PG backfill process, the first step is to prepare a qualified backfill slurry that can develop the desired strength after placement. The effects of slurry preparation on the strength of cemented backfill have been investigated in several studies. For instance, Fall and Benzaazoua [13] demonstrated that an increase in the binder dosage could effectively improve the strength of cemented backfill samples. Ercikdi et al. [14] found that the solid content used for the backfill process should be determined by balancing the maximization of strength against the effectiveness of pumping. Cao et al. studied the relations between the strength performance of cemented tailings backfill and the solid content, cement-to-tailings ratio and curing time [15]. Furthermore, stirring is also a crucial factor in backfill preparation. Inhomogeneous stirring would lead to an inconsistent distribution of solids and water, and thus reduce the strength of the backfill [16]. Although research on cemented PG backfill has been carried out for several years, the majority of studies considered only a single factor for the optimization of backfill properties. Indeed, slurry preparation involves multiple factors (such as solid content, aggregate-to-binder ratio and stirring), so it is important to determine which one is the most significant factor determining the slurry properties and the strength of the hardened backfill.

Furthermore, previous studies focused mainly on the fluidity and/or strength of the backfill process, while relatively few studies paid attention to the environmental pollution related to the cemented backfill process. The potential pollution from backfill mainly stems from two aspects. First, when the slurry is placed into the stopes, the excess water used to improve the slurry fluidity is secreted. Second, the backfill suffers from the percolation of groundwater after forming the hardened backfill. Therefore, impurities in the bleeding water and the leachates can transfer into the groundwater, causing environmental pollution. Li et al. [12] showed that impurities in PG could be well solidified, by comparing the concentrations of phosphate, sulfate and fluoride before and after cementation. However, the question of whether the slurry preparation can affect the environmental behavior of cemented PG backfill has not yet been studied. This study therefore considers the influence of different preparation conditions on the bleeding water and the leachates in the backfill process.
An orthogonal experiment is usually designed to study multi-factor conditions and to quantify the degree of effect of each single factor on the results [17-20]. Therefore, the current study aims to optimize the slurry preparation by considering both the properties and the environmental behaviors of cemented PG backfill. After discussions with the operators in mines, four factors related to the slurry preparation are considered in this study, namely the phosphogypsum/binder (PG/B) ratio, solid content, stirring speed and stirring time. According to the orthogonal experimental design, the cemented PG backfill samples were reconstituted in the laboratory. The slurry properties were measured, including the bleeding rate and setting times. The strength of the hardened backfill was analyzed by measuring the unconfined compressive strength (UCS). The concentrations of phosphate (PO4 3−), fluoride (F−) and sulfate (SO4 2−) in the bleeding water and in the leachates of the tank leaching test (TLT) were also determined.

Then, range analysis was applied to calculate the degree of effect of each single factor, thereby determining the effects of the preparation conditions on the cemented PG backfill process.
Raw Materials
Phosphogypsum and a composite binder (the mix proportion of the binder is yellow phosphorus slag : fly ash : cement clinker = 4:1:1, with lime added at 16-20% of the yellow phosphorus slag mass) [21] were collected from Guizhou Kailin (Group) Co., Ltd., Guiyang, China. The particle sizes of the PG and binder were measured with a Malvern Mastersizer 2000 particle size analyzer, as shown in Figure 1 and Table 1. The main chemical compositions and physical characteristics of the PG and binder used in this study are shown in Table 1. The coefficient of uniformity (Cu) and coefficient of curvature (Cc) are used to reflect the particle size distribution.
Orthogonal Experiment
In this study, the cemented PG backfill slurry was prepared based on an orthogonal array [L16(4^4) matrix]. The following four factors were studied: solid content (factor A), phosphogypsum-to-binder ratio (PG/B ratio, factor B), stirring time (factor C) and stirring speed (factor D). The solid content refers to the mass percentage of PG and binder in the slurry. A total of 16 formulations of cemented PG backfill were prepared in this study. The designed levels and factors are listed in Table 2. In order to determine which factor is the most significant, range analysis is essential. Two parameters, K_ij and R_j, were used for the evaluation. K_ij is defined as the sum of the evaluation indexes of the runs at level j (j = 1, 2, 3, 4) of each factor i (i = A, B, C, D), and k_ij (the mean value of K_ij) is used to determine the optimal level and combination of factors. When k_ij is the largest, that level can be considered the optimal one for the factor. R_j is defined as the range between the maximum and minimum values of k_ij, and it is used to evaluate the importance of each factor to each evaluation index: the larger the R_j value, the greater the importance of the factor [22,23]. M denotes the evaluation index. For this L16(4^4) matrix, the relevant calculations (taking factor B as an example) are:

k_Bj = (1/4) Σ M, where the sum runs over the four runs in which factor B is at level j, and

R_B = max(k_B1, k_B2, k_B3, k_B4) − min(k_B1, k_B2, k_B3, k_B4).

The range analyses then applied the above method, with M replaced by the index to be evaluated, such as the bleeding rate, setting times, strength or concentrations of impurities.
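A minimal R sketch of this range analysis is given below; the design matrix and response are hypothetical placeholders, and a real analysis would substitute the run layout of Table 2 and a measured evaluation index such as the UCS.

```r
# Range analysis for an L16(4^4) orthogonal design (R sketch).
# 'design' and 'M' are placeholders standing in for Table 2 and a measured index.
design <- data.frame(
  A = rep(1:4, each = 4),                      # e.g., solid content level
  B = rep(1:4, times = 4),                     # e.g., PG/B ratio level
  C = c(1,2,3,4, 2,1,4,3, 3,4,1,2, 4,3,2,1),   # e.g., stirring time level
  D = c(1,2,3,4, 3,4,1,2, 4,3,2,1, 2,1,4,3)    # e.g., stirring speed level
)
set.seed(1)
M <- runif(16, 0.7, 2.3)  # placeholder evaluation index (e.g., UCS in MPa)

range_analysis <- function(level, M) {
  k_bar <- tapply(M, level, mean)        # k_ij: mean index at each level
  c(k_bar, R = max(k_bar) - min(k_bar))  # R_j: range across the four levels
}

round(sapply(design, range_analysis, M = M), 3)
# The factor whose column has the largest "R" row is the most significant.
```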
Sample Preparation
The fresh slurry was prepared by mixing PG, binder and deionized water. After mixing for the times specified in the orthogonal design, the slurry was collected for the analysis of bleeding rate, initial setting time and final setting time.

Then, the prepared slurry was poured into a plastic mold with internal dimensions of 40 mm × 40 mm × 40 mm. There was a 2 mm hole at the bottom of each mold to allow the drainage of excess water. The bleeding water draining from the mold was collected and filtered through a 0.45 µm filter for further analysis to evaluate the environmental behavior. After the initial solidification, the samples were demolded and cured in a chamber at a constant temperature of 20 ± 2 °C and a humidity of 90 ± 5%.
Tank Leaching Test
The tank leaching test (TLT) is commonly used to study the release properties of minor compounds and to predict the quantity of released ions in the hardened backfill [24-26]. The potential hazards related to the leachable impurities from the hardened backfill samples were measured by conducting a compliance leaching test according to EA NEN 7375. The backfill samples cured for 28 days were placed in 500 mL plastic bottles and suspended in the bottles with thin plastic threads. Deionized water was used as the leachant, and the liquid/solid ratio was 5 cm³ of solution per cm² of exposed solid. The leachate was replaced with the same volume of deionized water after cumulative leaching times of 0.25, 1, 2.25, 4, 9, 16, 36 and 64 days. The leachate at each period was then collected and filtered (0.45 µm) for subsequent analysis. The measured leaching per fraction was calculated separately for each fraction using Equation (1), and the cumulative leaching quantity was calculated by Equation (2), as specified in the EA NEN 7375:2004 standard:

E_i = (C_i × V) / (f × A)   (1)

T_n = Σ_{i=1..n} E_i   (2)

where E_i is the measured leaching of a component in fraction i in mg/m²; C_i is the concentration of the component in fraction i in µg/L; V is the volume of the leachate in L; A is the surface area of the test sample in m²; f is a conversion factor of 1000 µg/mg; and T_n is the cumulative leaching quantity of the component up to fraction n.
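The two equations translate directly into a few lines of R; in the sketch below the per-fraction concentrations are hypothetical placeholders, while V and A follow from a 40 mm cube immersed at the stated liquid/solid ratio.

```r
# Per-fraction and cumulative leaching for the tank leaching test (R sketch).
# C_i values are hypothetical placeholders for the measured concentrations.
C_i <- c(420, 380, 300, 260, 210, 180, 120, 90)  # ug/L in fractions 1..8
A   <- 6 * 0.04^2            # exposed surface of a 40 mm cube: 0.0096 m^2
V   <- 5 * (A * 1e4) / 1000  # 5 cm^3 per cm^2 of solid -> 0.48 L of leachant
f   <- 1000                  # conversion factor, ug/mg

E_i <- C_i * V / (f * A)  # Equation (1): leaching per fraction, mg/m^2
T_n <- cumsum(E_i)        # Equation (2): cumulative leaching quantity
data.frame(fraction = 1:8, E_i = round(E_i, 1), T_n = round(T_n, 1))
```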
Bleeding Rate and Setting Times
The bleeding rate was measured in accordance with the Chinese standard GB/T 50080-2016. The slurry was poured into a container with a lid, which was then shaken for 20 s on a shaking table. The bleeding water was drawn off with a syringe at intervals until no more water was secreted. The initial slurry and the slurry after bleeding were weighed in their containers, and the bleeding rate was then calculated.

The initial setting time (IST) and final setting time (FST) were determined according to the Chinese standard GB/T 1346-2001 using a Vicat apparatus. The slurry was poured into the Vicat round mold, and bleeding water was secreted over a period of time. The IST was taken as the time when the initial Vicat needle penetrated the sample to 5 ± 1 mm from the bottom of the mold. The FST was defined as the time when no visible mark was left by the final Vicat needle on the surface of the sample [27]. Tests were carried out in triplicate, and the reported values are the averages of these three tests.
Unconfined Compressive Strength of Cemented PG Backfill Samples
The backfill placed underground must meet minimum strength requirements in order to ensure the stability of the underground mine stopes. Therefore, strength development is one of the most important mechanical properties in backfill system design [28]. According to the Chinese standard JGJ/T 70-2009 [29], cemented backfill cured for 28 days was used for testing the UCS. UCS tests were conducted using a servo-hydraulic machine with a 200 kN loading capacity at a constant displacement rate of 0.1 mm/min. In order to avoid randomness and contingency in the test data, three samples were prepared for each test, and the average UCS values were calculated.
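For reference, converting a recorded peak load to UCS for the 40 mm cubes is a one-line computation; the peak loads below are hypothetical.

```r
# UCS from peak load for 40 mm x 40 mm loading faces (R sketch).
peak_kN  <- c(2.35, 2.28, 2.42)        # hypothetical peak loads, three replicates
area_mm2 <- 40 * 40                    # loaded cross-section in mm^2
ucs_MPa  <- peak_kN * 1e3 / area_mm2   # N/mm^2 is numerically equal to MPa
mean(ucs_MPa)                          # reported UCS is the triplicate average
```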
Microstructural Analysis
After the UCS tests, the broken samples were immediately soaked in an ethanol solution to stop the hydration reaction. The samples were then dried at 50 °C in an oven. SEM analysis with a Helios NanoLab 600i (FEI, Lake Oswego, OR, USA) was used to examine the micromorphological development inside the cemented backfill samples [30]. Owing to the poor conductivity of the backfill, it was necessary to apply a gold coating to the samples to enhance their conductive properties in order to meet the inspection requirements.
Chemical Measurements
The pH of the bleeding waters and of the leachates from the TLT tests was measured with an Ohaus TSARTR 300 pH meter (Ohaus, Parsippany, NJ, USA). The concentrations of F− were measured with a Leici PF-1-01 fluoride ion-selective electrode (Leici, Shanghai, China). The concentrations of SO4 2− and PO4 3− in the bleeding waters and TLT leachates were measured using the baryta yellow spectrophotometric method and the ammonium molybdate tetrahydrate spectrophotometric method, respectively, with a Shimadzu UV1800 spectrophotometer (Shimadzu, Kyoto, Japan).
Differences of the Slurry Properties and Strength of Cemented PG Backfill
According to the L16(4^4) matrix, 16 batches were carried out, and the results are shown in Table 3. It can be seen that the bleeding rate ranged from 26.34% to 46.40%. The initial setting time (IST) and final setting time (FST) were in the ranges of 72 h to 103 h and 85 h to 125 h, respectively. The unconfined compressive strength (UCS) of samples cured for 28 d ranged from 0.74 MPa to 2.26 MPa. These results indicate that the slurry preparation conditions have significant effects on the cemented PG backfill, including both the properties of the slurry and the strength of the hardened backfill. The mean value indexes k_ij at the different levels of each factor are listed in Table 4. By comparing the R_j values of the different factors for the IST, FST and UCS, the order of factor significance is as follows: PG/B ratio > solid content > stirring time > stirring speed. However, the order of significant factors for the bleeding rate is: solid content > stirring time > PG/B ratio > stirring speed. The setting times (IST and FST), which are related to the time available for slurry transportation, are important slurry properties for evaluating the performance of the backfill [31]. As shown in Figure 2a,b, the PG/B ratio was the most significant factor, and the IST and FST increased with increasing PG/B ratio. It is well known that residual acid and other minor compounds in PG can retard the cementation process [32], leading to the extension of the IST and FST at high PG/B ratios. Furthermore, the IST and FST both decreased markedly with increasing solid content. An increase in solid content means a relatively higher binder-to-water ratio in the slurry, which accelerates the hydration reactions, resulting in a reduction of the IST and FST. In previous studies using cement as the binder, the setting times of the slurry were usually less than 1 day [27,33]. However, the setting times in this study ranged from 2 days to 5 days, which can be attributed to the retarding effect of PG. Some studies have found that sulfate significantly affects the setting time of cement [34]. As such, the sulfate in phosphogypsum might be one of the reasons for the increased setting times. In this regard, the smaller the IST and FST of the cemented PG backfill, the better the filling efficiency. Hence, according to the above analysis, the optimal combination of slurry preparation conditions for shortening the IST and FST is a solid content of 60% and a PG/B ratio of 2:1. As shown in Figure 2c, the bleeding rate is also affected by the slurry preparation. The bleeding rate decreased sharply with increasing solid content, illustrating that the solid content was the key factor determining how much water would be secreted into the stopes after placement of the slurry. With an increase in the solid content, more water is consumed by the hydration reactions, thus lowering the bleeding rate. Compared to the solid content, the most significant factor, the variations in bleeding rate related to the other factors (PG/B ratio, stirring time and stirring speed) were not significant. In the actual filling process, the excess water usually needs to be discharged from the mine by pumping, which increases the cost. Meanwhile, impurities in the bleeding water can transfer into the groundwater. Therefore, considering these aspects, a relatively low bleeding rate is recommended.
Effects of Slurry Preparation on Unconfined Compressive Strength
The strength of the backfill is one of the most important parameters directly affecting the safety performance of the backfill [35]. According to the range analysis shown in Table 4, all four factors affected the strength of the cemented PG backfill samples. For the PG/B ratio, the UCS decreased from 1.68 MPa to 0.88 MPa (a decrease of 48%) as the PG/B ratio increased from 2:1 to 5:1. However, as the solid content increased from 45% to 60%, the UCS increased from 0.79 MPa to 1.52 MPa (an increase of 92%). As the stirring time increased from 5 min to 120 min, the UCS increased from 0.93 MPa to 1.27 MPa (an increase of 37%). For the stirring speed, the UCS increased from 1.01 MPa to 1.28 MPa (an increase of 27%) as the stirring speed increased from 300 rpm to 600 rpm.

It is clear that the PG/B ratio is the most significant factor affecting the strength of the backfill, as indicated by R_j in Table 4 and the trend charts in Figure 2d. Previous studies have shown that the strength development of backfill is mainly ascribed to the cementitious products (such as C-S-H gel and ettringite) formed by hydration reactions [36,37]. A high PG/B ratio means a low binder proportion in the mixture, resulting in fewer hydration products and therefore lower strength. In this study, two backfill samples, with the lowest PG/B ratio (13#, at a ratio of 2:1) and the highest PG/B ratio (16#, at a ratio of 5:1), were taken for SEM-EDS analysis. As shown in Figure 3a,b, the SEM images showed that there was much more ettringite and C-S-H gel in sample 13# than in sample 16#, corresponding to UCS values of 2.26 MPa and 1.02 MPa, respectively. EDS analyses were performed to identify CaSO4, ettringite and C-S-H gel in the backfill samples, as shown in Figure 3c-e. According to the EDS analysis, in sample 13#, with a PG/B ratio of 2:1, a great amount of hydration products (C-S-H gel and ettringite) was wrapped around the PG, providing strength development for the cemented PG backfill. In addition, it is notable that the PG/B ratio can also affect the physical properties of the backfill samples, which in turn influence their mechanical properties. In this study, the wet unit weights of the backfill samples were measured, as shown in Table 3. The orthogonal range analysis and the trend charts of the wet unit weight (provided in the Annex File) show that the PG/B ratio has the most significant effect on the wet unit weight. Kitazume et al. [38] demonstrated that the strength ratio increased with the wet unit weight ratio in a soil and Portland cement mixture. Therefore, it appears that the PG/B ratio influences the mechanical performance of the backfill samples by affecting both the physical properties and the cementitious products. More binder can effectively improve the strength, which benefits the stability of the underground mine stopes. However, it has been suggested that the binder accounts for over 75% of the total cost of the backfill process [31], so a high binder dosage might burden the backfill process [39]. Thus, considering both cost-effectiveness and strength requirements, a reasonable PG/B ratio must be decided according to the actual conditions of the mine.
Impurities in Bleeding Water
The concentrations of impurities in the bleeding water were measured, as listed in Table 3. The concentrations of F−, SO4 2− and PO4 3− were in the ranges of 257-615 mg/L, 2149-3483 mg/L and 1.75-5.00 mg/L, respectively. Compared to the impurities in the PG leachate (where the concentrations of F−, SO4 2− and PO4 3− were 1355 mg/L, 23,842 mg/L and 315 mg/L, respectively), the majority of these ions were solidified/stabilized when the PG was cemented in the backfill. The average solidification rates were 73% for F−, 89% for SO4 2− and 99% for PO4 3−. These results indicate that cementation is an effective way to control PG pollution.

To understand the degree to which the slurry preparation affects the quality of the bleeding water, the k_ij and R_j values of the four factors are listed in Table 5. As shown in Table 5 and Figure 4a,b, the concentrations of SO4 2− and F− in the bleeding water rose clearly with the increase in solid content and PG/B ratio, but dropped slightly with the increase in stirring time and stirring speed. To be specific, a higher solid content means more solids in the slurry system, leading to more dissolved ions in the bleeding water. The PG/B ratio was the most significant factor influencing the quality of the bleeding water, as shown in Figure 4a,b: the higher the PG/B ratio, the higher the concentrations of F− and SO4 2− detected in the bleeding water. The reason for this might be that the F− and SO4 2− were mainly sourced from the PG, so a higher PG dosage led to higher concentrations of F− and SO4 2− in the bleeding water. For stirring time, the impurity concentrations decreased within a small range as the stirring time increased. This can be explained by the fact that the longer the stirring time, the longer the reaction time for hydration, resulting in more ions consolidated in the backfill and thus fewer free ions dissolved in the bleeding water [40]. Similarly, with an increase in the stirring speed of the slurry, the contact frequency between the impurities and the cementitious materials increased, enhancing the cementation reactions, which reduced the concentrations of F− and SO4 2− in the bleeding water. As opposed to SO4 2− and F−, which varied with the slurry preparation conditions, the concentrations of PO4 3− always remained at very low levels in all experimental batches, which is likely due to the different solidification mechanisms of these ions. Previous studies have reported that PO4 3− reacts rapidly with the large amounts of calcium in the alkaline environment, leading to the precipitation of dissolved PO4 3− and thus a low concentration of PO4 3− in the bleeding water [41,42]. At the same time, the EDS analysis revealed peaks of Ca, O and P, inferring the precipitation of calcium phosphate. Although the cemented PG backfill technique has a strong capacity to consolidate impurities in the backfill, a certain amount of F− and SO4 2− remains and might transfer into the groundwater.

Therefore, the concentrations of impurities in the bleeding water should be kept as low as possible. On this basis, the slurry preparation conditions of 45% solid content, a PG/B ratio of 2:1, a stirring time of 120 min and a stirring speed of 600 rpm were considered the optimal combination with respect to impurities in the bleeding water.
Impurities in the Leachates of Tank Leaching Test (TLT)
After the initial secretion of bleeding water, the backfill hardens and is subjected to the passage of groundwater, so it is important to understand the leaching behavior of the impurities over long time periods. The tank leaching test (TLT) is commonly used to study the dynamic and static properties of impurities in cementitious materials [43]. Therefore, in this study, a TLT with eight leaching periods was adopted to study whether the slurry preparation affects the behavior of the impurities during long-term water immersion. The concentrations of impurities after the TLT were measured, as listed in Table 3.
pH Variation
The pH values during the eight leaching periods are listed in Table 6. It can be seen that, with the replacement of the leachate, the pH increased from the first leaching period to the sixth, which was likely due to the constant release of the hydroxide ions generated in the hydration process. The pH values reached their peak at the sixth leaching period (16 d), and then decreased from the sixth leaching period to the last. This indicates that the hydration process finished at 16 d, after which fewer hydroxide ions could be released into the leachates [44]. To investigate whether the slurry preparation conditions affect the pH of the leachates in the long run, the pH values at the last leaching period were used to approximate the pH of the groundwater after long-term immersion. The different evaluation indexes of the four factors are shown in Table 7 and Figure 5a. The pH decreased sharply with the increase in PG/B ratio, and gradually increased with the increase in solid content, stirring time and stirring speed. Apparently, the PG/B ratio was the most significant factor influencing the pH of the leachate. It is well known that PG is usually acidic, while the binder is alkaline [45]. Therefore, a high PG/B ratio means a high dosage of acid and a low dosage of alkali, leading to low pH values in the leachates. At the same time, the solid content is also a significant factor influencing the pH in the TLT. As shown in Figure 5a, the pH rose with increasing solid content, which might be explained by more hydroxide ions being produced at higher solid contents.
Cumulative Effects of Impurities on the Environment
In order to evaluate the long-term environmental behavior of the impurities in the backfill samples, the cumulative leaching quantities of impurities from the PG and from the cemented PG backfill (taking sample 13# as an example) were compared, as shown in Figure 6. It is clear that the total quantities of impurities in the leachates of the backfill samples were much less than those in the leachates of PG. The cumulative leaching quantity of SO4 2− from PG is 5 times that from the backfill, the cumulative leaching quantity of F− from PG is 81 times that from the backfill, and the cumulative leaching quantity of PO4 3− from PG is 1678 times that from the backfill. The reason for this is that, when the PG formed a dense structure, the impurities were precipitated and/or incorporated in the hydration products, and thus the impurities were less likely to escape from the cemented PG backfill [46].

Effects of Slurry Preparation on Leaching Behavior of Impurities

According to the tank leaching test, the ranges of the cumulative leaching quantities of F−, SO4 2− and PO4 3− were 25~60 mg, 876~2054 mg and 0.25~0.41 mg, respectively. The variation trends of the four factors are shown in Figure 5. It is clear that the PG/B ratio was the most influential factor, and the cumulative leaching quantities of F− and SO4 2− increased with the increase in PG/B ratio. This is likely because the low strength of the backfill associated with a high PG/B ratio could not effectively solidify the impurities in the PG [47], resulting in an increase in the cumulative leaching quantities of F− and SO4 2−. However, the cumulative quantity of PO4 3− in the leachates was much less than those of F− and SO4 2−, as shown in Figure 5. The hydration reactions provide an alkaline environment and a great amount of calcium ions for the precipitation of F−, SO4 2− and PO4 3− [12]. However, the solubility product constant of calcium phosphate (2.0 × 10−29) is several orders of magnitude lower than those of calcium fluoride (5.3 × 10−9) and calcium sulfate (9.1 × 10−6), resulting in a lower leaching quantity of PO4 3− in the leachate under long-term immersion (as shown in Figure 5d).
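The gap between these solubility products becomes clearer when each Ksp is converted to a molar solubility. The R sketch below assumes ideal dissolution of the pure salts (no common-ion, pH, or activity effects), so it serves only as an order-of-magnitude comparison.

```r
# Molar solubility s from Ksp, assuming ideal dissolution of the pure salts.
# CaSO4:      Ksp = s^2      -> s = sqrt(Ksp)
# CaF2:       Ksp = 4*s^3    -> s = (Ksp/4)^(1/3)
# Ca3(PO4)2:  Ksp = 108*s^5  -> s = (Ksp/108)^(1/5)
s <- c(
  CaSO4     = sqrt(9.1e-6),
  CaF2      = (5.3e-9 / 4)^(1/3),
  Ca3_PO4_2 = (2.0e-29 / 108)^(1/5)
)
signif(s, 2)  # mol/L; calcium phosphate is ~3-4 orders of magnitude less soluble
```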
Conclusions
In this study, an orthogonal experiment was designed to understand the effect of slurry preparation conditions on the performance of cemented PG backfill. Four factors were examined: solid content, PG/B ratio, stirring time and stirring speed. The following conclusions can be made. Firstly, according to the range analysis, the most significant factor affecting the setting times and strength was the PG/B ratio, followed by the solid content. The solid content had the most significant effect on the bleeding rate. Considering both the slurry properties and strength development, the optimal condition was determined to be a solid content of 60% and a PG/B ratio of 2:1. Secondly, the lower the concentrations of impurities in the bleeding water and TLT leachate, the friendlier the conditions are for the groundwater. Considering the environmental behaviors, the optimal combination was determined to be a solid content of 45%, a PG/B ratio of 2:1, a stirring time of 120 min and a stirring speed of 600 rpm. The results showed that different optimal combinations exist depending on whether the slurry properties, mechanical strength or environmental behaviors of cemented PG backfill are considered. Therefore, it is recommended that mines choose different optimization conditions according to their actual demands.
Figure 1. Particle size distributions of phosphogypsum and binder.

Figure 2. Relationships between the mean value of each factor under different evaluation indexes: (a) initial setting time, (b) final setting time, (c) bleeding rate, (d) UCS.

Figure 4. Relationship between the mean value of each factor under different evaluation indexes: (a) F−, (b) SO4 2−, (c) PO4 3−.

Figure 6. The cumulative leaching quantity of impurities in (a) PG and (b) cemented PG backfill.
Table 1. Chemical compositions and physical characteristics of phosphogypsum and binder.

Table 2. Factors and levels in the orthogonal experiment.

Table 3. Test data of the evaluation indexes under different experimental conditions.

Table 4. Range analysis data of the mechanical and physical properties.

Table 5. Range analysis data of the conductivity and impurity concentrations in the bleeding water.

Table 6. pH values in the eight leaching periods.

Table 7. Range analysis of the pH and cumulative leaching quantity in the TLT.
"year": 2019,
"sha1": "20cab4969a86dfe7e8c8c7aaee715f8c0515ae44",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-163X/9/1/31/pdf?version=1547120182",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "20cab4969a86dfe7e8c8c7aaee715f8c0515ae44",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
Global Health Education in the Time of COVID-19: An Opportunity to Restructure Relationships and Address Supremacy
Global health and its predecessors, tropical medicine and international health, have historically been driven by the agendas of institutions in high-income countries (HICs), with power dynamics that have disadvantaged partner institutions in low- and middle-income countries (LMICs). Since the 2000s, however, the academic global health community has been moving toward a focus on health equity and reexamining the dynamics of global health education (GHE) partnerships. Whereas GHE partnerships have largely focused on providing opportunities for learners from HIC institutions, LMIC institutions are now seeking more equitable experiences for their trainees. Additionally, lessons from the COVID-19 pandemic underscore already important lessons about the value of bidirectional educational exchange, as regions gain new insights from one another regarding strategies to impact health outcomes. Interruptions in experiential GHE programs due to COVID-19-related travel restrictions provide an opportunity to reflect on existing GHE systems, to consider the opportunities and dynamics of these partnerships, and to redesign these systems for the equitable benefit of the various partners. In this commentary, the authors offer recommendations for beginning this process of change, with an emphasis on restructuring GHE relationships and addressing supremacist attitudes at both the systemic and individual levels.
Global health and its predecessors, tropical medicine and international health, have historically been driven by the agendas of institutions in high-income countries (HICs), with power dynamics that have disadvantaged partner institutions in low- and middle-income countries (LMICs). 1 Since the 2000s, however, the international academic global health community has embraced definitions of global health that focus on working to achieve health equity through collaborative and multidisciplinary practice that incorporates both individual- and population-level actions and concentrates on health concerns and determinants that are not bound to a single geography or culture. [1-3] This shift has coincided with growing critiques of the power structures, built by past colonization of partner countries, that have influenced global health practice.
Global health education (GHE) programs engage learners to develop an understanding of health and related issues in communities that are typically different from their own. There is great demand for both classroom-based and experiential learning opportunities which emphasize cultivating the outsider's perspective. Many institutions provide opportunities for learners to immerse themselves in unfamiliar medical and social cultures, so as to gain insights into disease and pathology as well as the impacts of power, privilege, and socioeconomic inequality on the health of individuals and communities. However, much of this has centered on HIC institutions seeking GHE opportunities in LMIC settings or ways to provide volunteer service opportunities in less privileged health care settings. Over time, LMIC institutions have become more empowered partners, as faculty travel and improved access to scientific literature have provided increased exposure to information about HIC health care system resources. LMIC institutions now seek more equitable GHE relationships and opportunities for their trainees. The increasing calls to decolonize the field of global health point to the historical impact of these relational imbalances on LMIC institutions. [4-6] Thus, if GHE is to evolve as a field that meets the needs of both HIC and LMIC institutions (and, ultimately, the needs of patients around the world), the existing power structures must be critically examined and redesigned with a focus on achieving equitable institutional relationships and promoting leaders who more accurately represent the gender, professional, and geographic balance of the global health workforce.
As we write this commentary in December 2020, COVID-19-related travel restrictions have paused many GHE programs and prompted others to embrace different goals and pedagogies. We believe this disruption offers a valuable opportunity to drive GHE in a new direction by allowing institutions to reflect on their priorities and the power dynamics of existing GHE systems and to work to redesign these systems for the equitable benefit of all partners. We recommend beginning this process of change with an emphasis on restructuring GHE relationships and addressing supremacist attitudes at both the systemic and individual levels.
Restructuring Relationships
The COVID-19 pandemic has driven many GHE programs to enact changes to their learning activities and, consequently, some are reexamining their educational relationships with partner institutions at home and in other countries. This process should start with a focus on communication: both to ensure a stable platform for communication (sometimes in the face of disparities in internet connectivity for programs that have shifted to increased virtual engagement with partners) and to address key stakeholders' roles and responsibilities as well as issues related to systems, language, and/or cultural differences. Establishing this foundation should then lead to open conversation about the needs of each partner and their contributions to the relationship. This is the first step in establishing mutually beneficial goals and shifting the relationship toward one in which each partner is satisfied with the degree to which its needs are being met.
Our perspective on GHE partnerships is driven in part by our experience with the Makerere University/Yale University collaboration, a bidirectional GHE capacity-building program which 2 of us co-direct (H.M.-K. and T.L.R.). 7 This program, which is in its 15th year and has expanded to incorporate participants from other U.S. institutions, is structured according to a framework of 4 global health ethics principles (introspection, humility, solidarity, and social justice) 8 that are useful in guiding conversations about partnership equity. Examples of other HIC-LMIC academic partnerships that have similar goals with respect to building equitable relationships include the Academic Model Providing Access to Healthcare 9 and the Toronto Addis Ababa Academic Collaboration. 10 For those who seek additional guidance, Adams et al 11 provide a set of core components for equitable HIC-LMIC GHE and practice partnerships, including the presence of: interdisciplinary teams that work together in a respectful and open collaborative manner; shared leadership; explicit, shared goals; the LMIC partner as the driver of partnership priorities, the research agenda, and program management; and prioritization of the education of LMIC trainees over HIC trainees.
Extrapolating from these models, it is important to ask 3 questions of all GHE institutional partners, both those in one's home community or region and those in other countries: "What does your institution expect to gain from interaction with my institution?" "How is a relationship with my institution going to benefit yours?" and "What are the potential added burdens on either side that need to be addressed?" Focusing on the quality of relationships may lead to the demise of some partnerships that are not able to achieve a mutually beneficial arrangement, but this may also pave the way for changes to systems for implementing GHE activities or the establishment of new partnerships. Through conversations with local institutions, both HIC and LMIC institutions may find potential partners within their own country or region that meet their needs and educational objectives just as well as, or even better than, more distant partners.
Additionally, academic institutions are using online tools in creative ways to continue providing medical education during the COVID-19 pandemic, including developing opportunities to conduct shared GHE experiences (e.g., led by faculty from one institution or run jointly by faculty from multiple institutions). This can allow more trainees to be exposed to the experiences and expertise of partner institution faculty, as well as to a wider breadth of perspectives through paired or team-based learning with students from other sites. As has been noted, the presence of diverse perspectives is a key characteristic of successful teams and may inspire further innovations in education or practice. 12
Lastly, as the GHE community focuses more on building up learning experiences at home during the pandemic, opportunities exist to partner more closely with colleagues working domestically and intra-institutionally in the areas of health disparities and social determinants of health. Leveraging these relationships to highlight and delve into the power and privilege dynamics that affect health equity in one's home community may have an even greater impact than experiences abroad as these local lessons directly relate to learners' future practice.
Addressing Supremacy
The colonial (and do-good) roots of global health and related fields, along with the resultant web of entrenched power structures that maintain the status quo, have been well described. [4][5][6] The central issues relate to the possession and flow of money and control of global research and training agendas, which have largely rested in the hands of HIC institutions. These structural inequities, coupled with socially ingrained attitudes that equate power with knowledge, reinforce the perception that individuals from HIC institutions are best positioned to play the role of teacher. Thus, as learners travel to other communities and countries for the purpose of experiential education, the influence of global power structures that have historically favored wealthy institutions may manifest among the visitors as counterproductive supremacist attitudes. Abimbola and Pai describe these attitudes as taking the form of "persisting disregard for local and Indigenous knowledge, pretence of knowledge, refusal to learn from places and people too often deemed 'inferior,' and failure to see that there are many ways of being and doing." 5
Changes related to the COVID-19 pandemic can impact these power dynamics in 3 ways. First, as we note above, the pause in immersive, travel-based experiences creates an opportunity for evaluation and open conversations between partners. These should include discussion of the degree to which supremacist attitudes have previously impacted the experience for both hosting and sending institutions. This pause also allows institutions time to implement recommendations for revamping or developing curricula and predeparture training that incorporate the colonial history of global health and teach the concept of cultural humility as a strategy for navigating future experiences. 6
Second, despite the marked differences in the financial resources of HIC and LMIC institutions that affect the implementation of GHE experiences, it is imperative that all partners consider innovative approaches and different, largely virtual educational modalities in the context of the pandemic and for the future. Given the importance of global engagement, the goal should be to foster meaningful learner experiences, within the limits of each partner's financial/socioeconomic ability and bolstered by the resources of global partners.
Third, as some GHE programs turn to learning experiences in their home communities, opportunities exist to focus attention on the dynamics of power and privilege that affect individuals locally. Bringing a GHE focus to clinical training opportunities at home provides an important gateway for conversations about systemic racism in medicine and society and its many impacts on health, both direct and indirect (i.e., upstream disparities in socioeconomic determinants).
Supremacist attitudes could also be addressed through development of a GHE framework that elevates the experience of all stakeholders and redistributes power by redefining who is qualified to serve as a leader or teacher, based on individual country and/or institution leadership and academic standing, not simply on the HIC institution's needs and expectations. As we mentioned above, recent definitions of global health advocate collaborative and multidisciplinary approaches and attention to a broad spectrum of health determinants. [1][2][3] And there is increasing recognition of the value of bidirectional educational exchange, as different regions gain new insights from each other regarding strategies to impact health outcomes. 13 This is most recently evidenced in the context of the COVID-19 pandemic, as the mortality and morbidity statistics in the United States and other HICs are more sobering than those in many LMICs, where strong community networks and lessons from previous experiences with health emergencies have contributed to the success of public health initiatives against COVID-19. 14-16 A relationship that promotes bidirectional educational exchange at the level of individual learners recognizes that, although faculty from one institution will have expertise in certain areas, students (traditionally thought of as learners only), community members, and faculty/practitioners from other disciplines and communities also have knowledge and lived experiences to contribute to the learning process. Importantly, a GHE framework that incorporates roles for nontraditional experts and those from different backgrounds will highlight the value of diverse sets of knowledge and change the educational power dynamic.
Conclusion
The COVID-19 pandemic has had a major impact on GHE programs, requiring many to pause learning opportunities and academic institutions to develop new ways to meet learner needs. It is possible, however, that this magnitude of disruption is the catalyst necessary to accelerate changes in the relationships, power structures, and attitudes that have been preventing the field of global health from moving past its colonial foundations. As the pandemic and the calls for the critical examination of global health structures continue, we hope the changes that have already begun will usher in a new era for the field, grounded in equitable partnerships, with a firm understanding of history and a clear vision of health equity goals.
Other disclosures: None reported. | 2020-12-31T09:02:11.504Z | 2020-12-29T00:00:00.000 | {
"year": 2021,
"sha1": "0bf24edc58f6fb9f4349dffbff0a15ffba26a3b0",
"oa_license": null,
"oa_url": "https://journals.lww.com/academicmedicine/Fulltext/2021/06000/Global_Health_Education_in_the_Time_of_COVID_19_.27.aspx",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "7cb5256d49ad42c6a39b2688db7d52a293e3daa6",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
248979477 | pes2o/s2orc | v3-fos-license | Centering Indigenous Knowledges and Worldviews: Applying the Indigenist Ecological Systems Model to Youth Mental Health and Wellness Research and Programs
Globally, Indigenous communities, leaders, mental health providers, and scholars have called for strengths-based approaches to mental health that align with Indigenous and holistic concepts of health and wellness. We applied the Indigenist Ecological Systems Model to strengths-based case examples of Indigenous youth mental health and wellness work occurring in CANZUS (Canada, Australia, New Zealand, and United States). The case examples include research, community-led programs, and national advocacy. Indigenous youth development and well-being occur through strengths-based relationships across interconnected environmental levels. This approach promotes Indigenous youth and communities considering complete ecologies of Indigenous youth to foster their whole health, including mental health. Future research and programming will benefit from understanding and identifying common, strengths-based solutions beyond narrow intervention targets. This approach not only promotes Indigenous youth health and mental health, but ripples out across the entire ecosystem to promote community well-being.
Mental Health as an Essential Component of Health and Well-Being
Mental health is a vital and inseparable component of health and well-being for Indigenous communities throughout the lands now called Canada, Australia, New Zealand, and the United States (CANZUS). Te Whare Tapa Whā [1], an influential Māori mental health model from New Zealand, acknowledges four cornerstones of Māori health: Taha tinana (physical health), Taha wairua (spiritual health), Taha whānau (family health and social relationships), and Taha hinengaro (mental health), the foundation of which is connection to whenua (land). The First Nations Mental Wellness Continuum Framework developed in Canada includes a model that centers the interconnectedness of mental, physical, emotional, and spiritual health.
Strengths-Based Approaches to Mental Health and Wellness
Indigenous communities, leaders, mental health providers and clinicians, and scholars have called for strengths-based approaches to mental health and wellness that align with traditional views of healing and wellness [6]. Such approaches oppose deficit, risk-focused, individualized, and pathologizing narratives that have been over-used in Indigenous-health research and intervention development with Indigenous peoples [11,12]. Deficit-based approaches also threaten to perpetuate colonialism, both by assuming that Indigenous communities are inferior and by proposing that the solution is to import Western approaches to correct these failings, thus further suppressing Indigenous knowledges and practices [11,13].
Strengths-based approaches inherently contextualize Indigenous mental health and wellness through understanding of historical, intergenerational, social, cultural, and political contexts that support existing community strengths, uphold self-determination and sovereignty, and promote justice and equity [6,11,[14][15][16]. A conceptual approach that is more synonymous with and inclusive of Indigenous worldviews on well-being is the social-ecological framework. Social-ecological frameworks are deeply contextual as they reflect that individuals are embedded within various environmental domains, including family, community, and broader societal and political contexts [6,11,14,17,18].
Indigenizing Ecological Models
Bronfenbrenner's ecological systems model [19,20] is among the most influential and globally accepted of the social-ecological frameworks [21]. Its intention is to illustrate that youth develop as the result of their interactions with their environments (or ecological systems). In Bronfenbrenner's model, the individual is situated at the center of a series of nested environments, extending outward through concentric circles of increasingly distal influences called the microsystem, mesosystem, exosystem, macrosystem, and chronosystem (see Figure 1) [20]. To make Bronfenbrenner's model [20] more relevant for Indigenous populations, Fish and Syed [17] developed the Indigenist Ecological Systems Model (see Figure 2) [14], which reconceptualizes the order and meaning of the environments to better account for the influence of historical and cultural contexts on Indigenous peoples' developmental beginnings and outcomes. By foregrounding Indigenous development in the histories and cultures of Indigenous peoples, the Indigenist model originates in a strengths-based vision of development through interconnectedness and relationality. The Indigenist model was initially used as a theoretical framework to better understand the historical and cultural factors that influence American Indian students' experiences in higher education [17]. Since then, it has been used as a framework for developing a historically and culturally congruent mental health intervention for Indigenous students [14]. As the findings of these undertakings suggest, Indigenous populations experience the most positive developmental outcomes when they are able to access their histories and cultures in their environments, and less desirable outcomes when their environments prohibit this.
The purpose of this paper is to apply the Indigenist Ecological Systems Model [14] to strengths-based approaches to Indigenous mental health and wellness work occurring in the countries now known as Canada, Australia, New Zealand, and the United States.
Materials and Methods
Our application of the Indigenist Ecological Systems Model [14] expands upon previous applications to American Indians/Alaska Natives in the United States, extending the model to Indigenous youth and communities in Canada, Aotearoa/New Zealand, and Australia. We focus on CANZUS (Canada, Australia, New Zealand, United States) because of: shared values of intergenerational, family, community, environmental, and spiritual connectedness, as well as ongoing movements to pass on and/or revitalize Indigenous languages, cultures, and traditions to promote youth and community well-being [8]; similar Indigenous legal and political agreements/treaties with the national governments and self-governance among Indigenous groups [23]; shared similarities in histories of British invasion and settler colonialism resulting in countries today with English being the dominant national language [23]; and similarities in disproportionate mental health inequities (e.g., suicide, psychological distress, depression, and anxiety) endured by Indigenous youth across these four countries compared to non-Indigenous youth, linked to social determinants of health, including colonialism [24][25][26][27]. Though there are similarities and shared experiences, we also recognize the diversity of specific histories, contemporary political and social contexts, and tribal/community cultures both within and between Indigenous communities across these settler colonial countries that may contribute to differences in health and wellness outcomes. Below, we describe each level of the Indigenist Ecological Systems Model and apply this model to case examples of strengths-based Indigenous mental health and wellness, including cases from research, community-led programs, and national advocacy across CANZUS (see Figure 2). We purposively highlight case examples outside of Western-based scientific research to honor Indigenous epistemologies, practices, and community-defined evidence as valid and necessary [28]. The authorship team included nine Indigenous and five allied co-authors with vast knowledge of research, community-based programs, and national initiatives. Our approach to purposively selecting case examples was an iterative and collaborative process among all the co-authors. The first author met with the co-authors to discuss each level of the Indigenist Ecological Systems Model to generate initial ideas for case examples. Six of the co-authors searched peer-reviewed literature using PubMed and/or PsycINFO to identify the case examples relevant to each level of the model. To augment the academic literature search, gray literature was also searched using Google, and known Indigenous websites and organizations. Examples were then discussed as part of the paper drafting process. If a case example did not fit, the authors searched for alternatives that fit the model more accurately.
Results from the case examples are discussed below, framing each case's approaches and findings within each dimension of the Indigenist Ecological Systems Model. Descriptions of the model and case examples are outlined by level in Table 1.
Table 1. How contexts operate at each level of the Indigenist Ecological Systems Model, with select case examples and their locations.
Historical Contexts. How the context operates: cultivation of an understanding of ancestral histories of place, resilience, perseverance, family, and community development via intergenerational learning, together with understanding survivance, perseverance, and healing in the context of historical trauma and loss, and colonialism. Select examples: intergenerational connection between past, present, and future [8] (Canada, Australia, New Zealand, United States); storytelling as a method of intergenerational learning and knowledge transmission [29] (United States).
Cultural Contexts. How the context operates: intergenerational learning and transmission of Indigenous cultural knowledges, values, practices, and customs to undergird culturally grounded initiatives and/or prevention interventions. Select examples: Camp Pigaaq, where Elders share cultural knowledge and traditions with youth [30] (United States); Te Kōhanga Reo, Māori immersion language preschools [31] (New Zealand); Kaehkēnawapatāēq, the language revitalization program of the Menominee Indian Tribe of Wisconsin [32] (United States).
Individual. How the context operates: understanding that individuals are relational beings and interconnected with ancestors, family, community, environment, spirit, and past, present, and future generations. Select example: youth well-being is dependent upon internal, spiritual, cultural, family, community, environmental, historical, and intergenerational connectedness [8] (Canada, Australia, New Zealand, United States).
Immediate Contexts. How the context operates: developing positive strengths-based interactions with caregivers, peers, schools, extended family, and community members. Select example: Thiwáhe Gluwáš'akapi, an adolescent community-engaged substance use prevention intervention grounded in family and kinship teachings to emphasize family and community relationships, responsibilities, and roles [33] (United States).
Surrounding Contexts. How the context operates: promoting positive interactions between two or more immediate contexts. Select example: Listening to One Another to Grow Strong, a culturally adapted program that includes activities spanning youth, caregiver, school, and Elder communities [34] (Canada).
Distant Contexts. How the context operates: promoting positive policy changes and advocacy through societal and human service systems, including governance systems (e.g., federal and tribal), sports teams, mass media, and healthcare systems. Select examples: self-determination, community control, and tribal sovereignty [5,24,35] (Canada, Australia, New Zealand, United States); federal policy to enact culturally safe healthcare programming for Indigenous communities [24,36] (Australia); advocacy to increase visibility and accurate representations of Indigenous peoples across sectors of national society, including media [37] (United States).
Historical Contexts
Historical contexts (i.e., the chronosystem) are the core and third dimension of the Indigenist model [14,38]. Privileging historical contexts in development indicates that histories affect Indigenous peoples' health and well-being in the past, present, and future. This includes histories of colonialism, histories of resilience and perseverance, and familial, ancestral, and place-based histories that center the importance of lands. The important connection between past, present, and future is a shared teaching among many Indigenous communities in CANZUS [7][8][9]16] and has been used to structure mental health and wellness promotion among youth. Intergenerational engagement encourages Indigenous communities to repair, build, and strengthen relationships between Elders or traditional knowledge holders and youth. Connections between Elders and traditional knowledge holders with youth foster transfer of cultural values, historical cultural knowledge, and observational learning of cultural activities and lifeways. Intergenerational learning and engagement are recognized as imperative to restoring overall health and well-being for Indigenous youth and communities, including mental health [39].
Intergenerational learning is fostered through community, referred to by Cajete as "the living place" [29]. The living place includes immediate family, extended family, clan relatives, the larger tribal community, and lands and the natural environment [29]. These sources of learning are varied and reflect the communal responsibility to ensure Indigenous lifeways persist throughout time. Intergenerational learning occurs through a process that is holistic and integrated with daily living and land-based teachings, differing from the colonial idea of education that often separates learning from other aspects of life [29]. The process of intergenerational learning begins with the development of social structures and relationships and is followed by using creative exploration to foster skills in listening and observation [29]. Storytelling, public speaking, and singing are often used to promote intergenerational learning, while teaching of sacred cultural knowledges may be reserved for ceremonies [29]. Cultural continuity depends on intergenerational learning, which is a lifelong process for Indigenous communities.
As emphasized above, time is crucial to Indigenous flourishing, as connections between the past, present, and future offer access to time-honored knowledges and traditions. This also offers a powerful restorative to historical trauma and losses and protection against ongoing effects of settler colonialism on Indigenous communities and youth [40]. Our examples foreground Elders and knowledge keepers who can provide these connections through a time-honored tradition, the intergenerational transmission of knowledge and oracy [41], exposing Indigenous youth to ancestral and place-based histories to guide them now and for years to come. In addition to important histories, intergenerational learning offers an insight into Indigenous cultural factors-which we discuss next.
Cultural Contexts
Following historical contexts that focus on sources and processes of intergenerational learning, cultural contexts (i.e., the macrosystem) are the second level of the Indigenist model that combine intergenerational learning with cultural practices. Cultural contexts include patterns in beliefs, practices, norms, and customs unique to Indigenous peoples (e.g., language, spirituality) and that give structure to their environments. Cultural contexts are an outgrowth of historical contexts, as various histories (e.g., settler colonial histories and Indigenous histories) mold and shape Indigenous peoples' cultures. Despite legacies of settler colonialism and violence, Indigenous peoples have maintained or are actively revitalizing their connections to Indigenous knowledges and practices, including connections to lands and cultures [42]. According to the First Nations Mental Wellness Continuum Framework developed in Canada, Indigenous leaders, Elders, youth, and community members affirm that culture is central to mental health wellness [2]. Thus, similar to historical contexts, cultural contexts are relevant to all the environments in the Indigenist model.
When intergenerational engagement and learning (i.e., historical contexts) and cultural activities (i.e., cultural contexts) are combined, prevention against negative mental health inequities occurs. Culturally grounded initiatives to foster intergenerational engagement and cultural knowledge transmission have always existed in Indigenous communities, yet there is growing interest in understanding how these programs promote youth mental health and applying these modalities to prevent mental health inequities (e.g., suicide) through research [43]. One intervention that leverages intergenerational learning and cultural knowledge transfer is Camp Pigaaq, a camp for Alaska Native youth that provides space for Elders and other guest presenters to share cultural knowledge and teach traditional skills and wellness practices [30]. Participation in Camp Pigaaq has been shown to significantly increase positive mood, feelings of belongingness, and perceived coping among Alaska Native youth [30]. Culturally grounded mental health promotion underscores that cultural values and lifeways can be taught through intergenerational engagement to form healthy communities that persist through time.
Language is an important vehicle for passing down culture from generation to generation [44]. Within Aotearoa/New Zealand, Te Kōhanga Reo (Māori immersion language preschools), meaning "the language nest", is a national movement providing a "culturally structured environment" for child development and aims to strengthen Māori language and culture among youth and future generations [31,44]. During their attendance, Māori children from birth to the age of six are culturally immersed and learn about values, traditions, and language in a warm environment with whānau (extended family). Since the movement was established in 1982, more than 50,000 children have participated in a Kōhanga Reo, which has been vital to the revitalization of Te Reo Māori (Māori language) and ensuring tamariki (children) grow up and develop immersed in their language and culture [31]. Similar language programs are growing across Indigenous communities. In 2018, the Menominee Indian Tribe of Wisconsin established a language nest named Kaehkēnawapatāēq (Menominee Language Revitalization Program) to train early childhood language teachers [45,46]. Kaehkēnawapatāēq translates to "we learn by observing", referring to the process of teaching the language to adults while children in the daycare observed and were also immersed in the language nest. The Menominee Indian Tribe of Wisconsin currently has resources and materials for learning history and language that can be accessed in-person or online [32]. Indigenous languages facilitate connectedness to family, community, lands, spirituality, and intergenerational connectedness [8]. In this way, language revitalization is critical to promoting positive mental health, community healing, and wellness [47].
Indeed, historical and cultural contexts are intimately bound with one another. As we have illustrated via case examples, historical contexts provide a foundation through which Indigenous youth gain meaningful access to cultural contexts, including language, spirituality, local values, and various cultural practices and teachings. Historical and cultural contexts converge to provide Indigenous youth with the necessary foundation for living full and healthy lives.
Indigenous Youth
Instead of being at the center of development, Indigenous youth (i.e., the individual level in Bronfenbrenner's model [20]) are the third level of the Indigenist model [14], including core psychological phenomena (e.g., identity, self-understanding, and self-efficacy). This position indicates that Indigenous histories and cultures come before and are integral to Indigenous youth development. It also signifies that developmental outcomes, both positive and negative, are born from Indigenous youths' connections to their communities' histories and evolving cultures.
Within many Indigenous worldviews and cultures, it is impossible to consider an individual separate from their connectedness with other people, lands, and all living beings. Connectedness has been defined as the interrelated welfare of an individual, family, community, and the Earth [48]. To learn more about connectedness and the relational processes that promote child well-being, research was conducted through a literature review of Indigenous communities in Canada, Australia, New Zealand, and the United States [8], and an interview process with 25 Alaska Native knowledge bearers [49]. This research led to the development of an Indigenous Connectedness Framework that describes child well-being as depending on the existence of internal, spirit/culture, family, community, environment, and intergenerational connectedness. These relationships help a child know who they are and where they come from as a relational human being that is interconnected with a collective [8]. When children are perceived as unique beings that are part of a collective, it expands our awareness of well-being to include the wellness of everyone and everything to which they are connected. In this light, when we serve individual children, we are also serving their family, community, the environment, culture/spirit, and ancestors and future generations because who they are is embedded in those interconnected relationships.
It is evident from the Indigenous Connectedness Framework [8] that, as a case example, Indigenous youth experience clear benefits as a result of being immersed in and learning about longstanding cultural practices and traditions. No doubt, these benefits have a ripple effect, extending out to Indigenous families, communities, and transcending the physical universe to an ancestral and spiritual one. Now we turn our attention to the environments that make such historical and cultural connections possible, starting with immediate environments.
Immediate Contexts
Immediate contexts (i.e., the microsystem) are the first level of Bronfenbrenner's model [20] and the fourth environment in the Indigenist model. Immediate contexts refer to the environments that Indigenous youth have direct interactions with on a regular, ongoing basis [14]. While this can include parents or caregivers, peers, schools, and community (i.e., reservations and urban neighborhoods), it can also refer to extended family depending on the nature of the interactions Indigenous youth have with them [50].
The Thiwáhe Gluwáš'akapi Program (translated as sacred home in which family is made strong) provides a strong example of centering connectedness to and engagement with various levels of environment in promoting mental health and wellness among Indigenous youth. Thiwáhe Gluwáš'akapi was derived from a community-based participatory research substance use prevention study; an Indigenous researcher living and working in the community led the cultural adaptation of the program and paid strong attention to the immediate context of adolescents in the process. The resulting intervention is deeply rooted in family and kinship ties and integrates kinship teachings while emphasizing the relationships, responsibilities, and roles youth hold within their families and their larger communities [33]. Adolescents were enrolled in the study with one caregiver to participate in seven weekly group sessions held at their local school. Kinship ties were emphasized by including extended family beyond the enrolled caregiver in group sessions. Thiwáhe Gluwáš'akapi also promoted tribal values through curricula designed to develop listening skills that align with cultural traditions of oral storytelling and learning [51]. Further, the Thiwáhe Gluwáš'akapi program utilized traditional language for kinship relationship terms to emphasize the interconnectedness of kinship and culture.
What is remarkable about interventions such as these is that they leverage the existing strengths of Indigenous youths' immediate contexts (i.e., family and community) and build on them through culturally relevant programming. On their own, immediate contexts can have a robust impact on Indigenous youth. However, as we describe next, immediate contexts can also create partnerships with each other to further their impact.
Surrounding Contexts
Surrounding contexts (i.e., the mesosystem) are the second level of Bronfenbrenner's original model [20], and the fifth level of the Indigenist model [14]. Surrounding contexts are interactions between two or more immediate contexts (e.g., peers and parents) that affect Indigenous youth. Previous research with the Indigenist model indicated that surrounding contexts in the form of partnerships can address structural inequities in Indigenous peoples' environments, which is critical to promoting Indigenous youths' mental health and well-being.
Listening to One Another to Grow Strong (LTOA) is an example of a community-driven and culturally adapted program rooted in the philosophy that family well-being (e.g., microsystem) is foundational for individual and community (microsystem) health [52]. This program was developed through a collaboration between First Nations communities in British Columbia, Manitoba, Ontario, Quebec, and university teams in the United States and Canada. LTOA is designed to be inclusive of the family unit, and therefore the immediate contexts of youth, by providing activities to be completed by a family, as well as youth- and caregiver-specific activities. The family program component of LTOA is delivered across 14 two-and-a-half-hour sessions, while the school program is delivered through 6 one-hour sessions [34,53]. The final lesson of the school program includes feasting with families in schools to highlight the achievements of students and to connect families to youth in their school environments [34]. All sessions are facilitated by a local facilitator, usually in partnership with local Elders, who are provided with an Elder manual designed to orient them to the curriculum and their role in delivery [54]. Qualitative evaluation of the LTOA program found positive impacts on family bonding and communication skills, while quantitative evaluations found positive impacts on youth well-being in the form of reduced feelings of distress and elevated sense of connection to family and community [6]. Therefore, the LTOA program demonstrates how interactions and partnerships between immediate contexts (families, Elders, schools, peers, and communities) can act in synergy to support positive mental health and wellness for Indigenous youth.
Surrounding environments have the potential to overcome barriers to youth accessing their Indigenous histories and cultures. By establishing partnerships and building relationships with other immediate environments (e.g., Elders, communities, families, and peers), schools can develop local and culturally appropriate mechanisms for making Indigenous histories and cultures accessible in places where it matters most. Other immediate environments that are ripe for these types of partnerships include healthcare centers eager to provide suitable and relevant services to Indigenous youth.
Distant Contexts
The application of Bronfenbrenner's [20] exosystem in the Indigenist Ecological Systems Model [14] depicts social and political contexts that affect Indigenous peoples and their communities. Distant contexts (i.e., the exosystem) are the sixth and final level of the Indigenist model and represent environments that Indigenous peoples may or may not be actively involved in, but by which they are indirectly impacted. The case examples presented span several critical domains, including government (e.g., federal government, self-determination, and tribal sovereignty), sports teams' names and mascots, Indigenous visibility and representation in mass media, and healthcare systems.
Self-determination, community control, and tribal sovereignty have been identified as vital to health, including mental health promotion and well-being across Indigenous communities in CANZUS [5,24,35]. In Aotearoa/New Zealand, tino rangatiratanga is a Māori concept deeply rooted in Māori worldviews and historical contexts representing the essential domains of self-determination, sovereignty, self-governance, and autonomy vital to health and well-being [13,16,55]. Tino rangatiratanga is described as having a cyclical and interdependent relationship with the well-being of an individual and the collective, including whānau (extended families), hapū (sub-tribes), and iwi (tribes), and if supported and promoted nationally, can benefit health and well-being for all New Zealanders [55]. There are other examples demonstrating the potential power of community autonomy, control, and sovereignty in promoting mental health and wellness among Indigenous youth and communities. For example, Chandler and Lalonde [40] documented among First Nations communities in Canada that cultural continuity was related to reduced youth suicide. Specifically, they identified six variables that comprise cultural continuity: assertion of or political movements toward sovereignty over (a) traditional lands; (b) governance; (c) education; (d) law enforcement and first responders; (e) health services; and (f) formally recognized fora, which promote culturally meaningful values and traditions [40]. Among First Nations communities with a higher number of cultural continuity factors, they observed lower youth suicide rates compared to communities with fewer factors.
Federal policy can also impact culturally safe mental health programming for Indigenous communities. For example, in Australia, cultural competency has been deemed a professional requirement for the national mental health sector working with Aboriginal and Torres Strait Islander clientele [36]. The iterative nature of cultural competency acquisition warrants emphasis, as it is only through sustained dedication toward providing culturally safe services that transformational practice is possible [24]. Further work can be conducted through national policies across CANZUS to align mental health services with Indigenous epistemological and ontological positions, ensure human rights and decolonizing practices, and offer critical reflection tools to support mental health service providers to incorporate such principles into their work [24].
The visibility of Indigenous peoples within society and accurate portrayals may also promote Indigenous youth mental health and well-being. This follows from the research showing that negative portrayals, such as American Indian/Alaska Native sports mascots, have deleterious impacts on American Indian/Alaska Native youth mental health [56]. Within the United States, an Indigenous non-profit organization, IllumiNative [37], is leading initiatives to increase visibility and accurate narratives and portrayals of American Indians/Alaska Natives in the United States. They published the Reclaiming Native Truth Report [57], which underscores how visibility and representation of American Indians/Alaska Natives can be strengthened across multiple forms of media, including social and news media, the entertainment industry, and education.
For decades, settler governments and structures have made decisions that affect Indigenous peoples with limited and insufficient input from Indigenous peoples and tribal nations themselves. As these case examples indicate, there are shifts in this trend wherein Indigenous peoples are asserting their right to sovereignty and self-determination, advocating for new culturally congruent policies and other structural changes, and challenging settler depictions of Indigenous peoples. Collectively, this work aims to create a better tomorrow for future generations of Indigenous youth to develop and thrive.
Discussion
We applied a novel framework, the Indigenist Ecological Systems Model [14], to positive case examples of Indigenous youth mental health and wellness research, community-led programs, and national initiatives in Canada, Australia, New Zealand, and the United States that reflect a deep contextual and cultural understanding of Indigenous conceptualizations of mental health and well-being. This framework recognizes that Indigenous youth development and well-being occur through strengths-based relationships across interconnected environmental levels [14]. By utilizing an Indigenous framework and strengths-based case examples, we resisted deficit and pathologizing narratives that tend to dominate health research with or about Indigenous peoples [11,13]. While social-ecological frameworks have been critiqued due to positioning health as a goal [11], our approach aimed to describe broad and multi-level initiatives that naturally promote Indigenous youth mental health and well-being (e.g., visibility and positive representation; self-determination). Further, this approach respected the interconnected nature of physical, mental, emotional, and spiritual health, and connection to family, community, and larger contexts that are common among Indigenous communities in Canada, Australia, New Zealand, and the United States [5].
We purposely included positive examples of Indigenous-led, community-based programs, and national initiatives outside of Western research to take "a comprehensive view of what constitutes evidence beyond colonial constructs" [58]. While many of these programs are familiar at community levels, they may be unfamiliar or missing from larger ecosystems of health research. For example, within distant contexts, we highlighted the work of IllumiNative, an Indigenous-led non-profit organization in the United States that seeks to increase visibility and positive representations of American Indians/Alaska Natives throughout society [37]. There is empirical research linking negative psychosocial impacts experienced by American Indian/Alaska Native youth and adults to negative stereotypes about American Indians/Alaska Natives [56]. However, understanding how societal visibility and positive representations promote American Indian/Alaska Native health, mental health, and holistic well-being has been largely absent from Indigenous health research. Research and advocacy can promote Indigenous interests, positive outcomes, and social and political change [13]; in this case, through promoting Indigenous youth mental health and overall well-being, which has an undeniable connection to promoting Indigenous communities' wellness [8].
Indigenous Ecologies of Health and Wellness
Our application of the Indigenist model revealed several notable findings. An essential theme that cuts across all case examples is: Indigenous peoples are taking into consideration the complete ecologies of Indigenous youth to foster their holistic health [3]. Rather than simply considering the health of the individual, we see Indigenous peoples creating innovative approaches to gifting Indigenous youth with the intergenerational and cultural foundation that is necessary for living full and meaningful lives [5]. These approaches harness Indigenous histories and cultures across places and spaces that are crucial to Indigenous youth development: immediate environments, such as family, school, Elders, and community; surrounding environments, such as school-Elder partnerships; and distant environments, such as tribal self-governance and policy. Taken together, Indigenous approaches to whole health weave together the various environments in the Indigenist model, creating a generative network of health that encompasses Indigenous families and communities across past, present, and future generations [11].
Further, we used a cross-Indigenous approach, rather than a cross-cultural approach that is often applied in psychology and mental health fields [59]. Indeed, "the spiritual, creative, and political resources that Indigenous peoples can draw on from each other provide alternatives for each other" [13]. Toward that end, we provided distinct case examples from different countries and tribal communities, yet drew similarities that demonstrate an engagement across levels of the Indigenist Ecological Systems Model [14]. At times, it was challenging to determine which environment a particular program belonged to, given the degree of overlap between the levels. However, this lends greater support to Indigenous concepts of health and wellness as holistic and comprehensive, spanning multiple environments. Ultimately, these health-related programs shine a light on how Indigenous peoples are cultivating cultures of health rooted in their knowledges [10] that can inform future research and interventions.
Future Directions
Fostering Indigenous youth well-being is integral to the future of Indigenous communities [8]. While each section of the Indigenist model holds promise for Indigenous health, its power lies in what it collectively offers across environments. The Indigenist model further elucidates what Indigenous peoples have been voicing and advocating for since time immemorial: that there are countless strengths within Indigenous communities that, when channeled, enable Indigenous youth to thrive [6,15,16]. It is now time, albeit long overdue, for federal governments, policymakers, funders, and health researchers to support initiatives that embolden Indigenous lifeways as legitimate strengths and health approaches.
There are various Indigenous-led solutions to increasing and enhancing the strengths of entire Indigenous communities and fostering inherent strengths of Indigeneity. These include creating local and context-specific curricula for teaching language and cultural beliefs and practices, creating opportunities to connect youth with Elders and other knowledge keepers, developing or adapting culture-forward interventions, and engaging in national efforts that draw on Indigenous ecologies to make sociopolitical changes across the public landscape. Although some interventions (e.g., Thiwáhe Gluwáš'akapi [33]) were developed to address one priority area (i.e., substance use prevention) among Indigenous youth, such programs often can be applied to address the root causes of other concerns (e.g., suicide prevention). However, note that the shared paths are rooted in Indigenous strengths and how to bolster them, rather than targeting reductions in "problematic" outcomes. Accordingly, future research will benefit from understanding the ways in which Indigenous-led programs promote the collective health and well-being of youth beyond narrow targets of intervention. Future research must focus on identifying common, strengths-based solutions for promoting mental health that not only promote well-being for Indigenous youth, but also ripple out across their entire ecosystem.
Conclusions
The case examples we selected from Indigenous research and community-based programs illustrate the Indigenist Ecological Systems Model and demonstrate the alignment of the model with Indigenous concepts of health and wellness. Through this social-ecological model, mental well-being is understood from a vantage of holism, where the individual is understood within a highly relational context, interconnected with historical and cultural contexts, and with ancestors, family, community, spirit, lands, and future generations. The flexibility of the Indigenist Ecological Systems Model demonstrates its utility to guide the development of Indigenous youth health and mental health research, interventions, and programs. Using this model centers Indigenous knowledges and worldviews, and will help to ensure that these perspectives guide the future research and program development aimed at youth mental health and well-being, with attention to each social-ecological level and the interconnections between them. Although we highlighted projects from across global Indigenous contexts, it remains critical for such research to be Indigenous-led and grounded in the specificity of local place and culture. By focusing on complete ecologies and domains of strength, Indigenous-led research and action is leading the way in advancing Indigenous youth well-being and promoting flourishing within and by communities. | 2022-05-23T15:03:02.217Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "42a28a5ff4b16431f12ba00004f3392435b806ed",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/10/6271/pdf?version=1653129400",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5b31ac069c4a3d9a30a02976955fea62bb7abcb",
"s2fieldsofstudy": [
"Psychology",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
688713 | pes2o/s2orc | v3-fos-license | PREDICT: a new UK prognostic model that predicts survival following surgery for invasive breast cancer
Introduction The aim of this study was to develop and validate a prognostication model to predict overall and breast cancer specific survival for women treated for early breast cancer in the UK. Methods Using the Eastern Cancer Registration and Information Centre (ECRIC) dataset, information was collated for 5,694 women who had surgery for invasive breast cancer in East Anglia from 1999 to 2003. Breast cancer mortality models for oestrogen receptor (ER) positive and ER negative tumours were derived from these data using Cox proportional hazards, adjusting for prognostic factors and mode of cancer detection (symptomatic versus screen-detected). An external dataset of 5,468 patients from the West Midlands Cancer Intelligence Unit (WMCIU) was used for validation. Results Differences in overall actual and predicted mortality were <1% at eight years for ECRIC (18.9% vs. 19.0%) and WMCIU (17.5% vs. 18.3%) with area under receiver-operator-characteristic curves (AUC) of 0.81 and 0.79 respectively. Differences in breast cancer specific actual and predicted mortality were <1% at eight years for ECRIC (12.9% vs. 13.5%) and <1.5% at eight years for WMCIU (12.2% vs. 13.6%) with AUC of 0.84 and 0.82 respectively. Model calibration was good for both ER positive and negative models although the ER positive model provided better discrimination (AUC 0.82) than ER negative (AUC 0.75). Conclusions We have developed a prognostication model for early breast cancer based on UK cancer registry data that predicts breast cancer survival following surgery for invasive breast cancer and includes mode of detection for the first time. The model is well calibrated, provides a high degree of discrimination and has been validated in a second UK patient cohort.
Introduction
Accurate prediction of survival is an essential part of the decision making process following surgery for early breast cancer and allows clinicians to determine which patients will benefit from adjuvant therapy. At present these decisions are largely based on known pathological prognostic factors that retain independent significance on multivariate analysis including tumour size, tumour grade and lymph node status in addition to the efficacy of any adjuvant therapy. The predicted treatment benefit can be calculated by applying the relative risk reduction of a particular adjuvant therapy to the breast cancer specific mortality for an individual patient to give an absolute percentage survival benefit for that patient.
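As a quick numerical illustration of this calculation (a sketch added here, not part of the original paper; the figures below are hypothetical placeholders rather than study estimates), the arithmetic can be written in a few lines of Python:

```python
def absolute_benefit(bc_mortality: float, relative_risk_reduction: float) -> float:
    """Absolute survival benefit of an adjuvant therapy, computed as described
    above: the therapy's relative risk reduction applied to the patient's
    predicted breast cancer specific mortality. Inputs are proportions."""
    return bc_mortality * relative_risk_reduction

# Hypothetical patient: 25% predicted 10-year breast cancer mortality, and an
# adjuvant therapy assumed to give a 30% relative risk reduction.
print(f"{absolute_benefit(0.25, 0.30):.1%}")  # 7.5% absolute survival benefit
```

The same value can be read as the percentage-point improvement in predicted survival attributable to the therapy for that individual patient.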
The Nottingham Prognostic Index (NPI), a prognostic scoring system based on a large cohort of patients with early breast cancer treated in a single institution, is based on tumour size, grade and lymph node status and when first described divided patients into three groups with significantly different survival [1]. The NPI has been prospectively validated in a second Nottingham dataset [2], as well as in other centres [3], and now allocates patients to one of six prognostic groups [4]. More recently a model has been developed to allow prediction of survival based on individual NPI scores rather than the mean survival of the six groups previously described [5].
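For concreteness, the NPI can be computed directly; the formula is not restated in the text above, so the version below is taken from the widely cited NPI literature (an assumption, flagged here and in the code) rather than from this paper, and the patient values are hypothetical:

```python
def nottingham_prognostic_index(size_cm: float, node_stage: int, grade: int) -> float:
    """NPI = 0.2 * tumour size (cm) + nodal stage (1-3) + histological grade (1-3).
    Formula as commonly published for the NPI, not quoted from this paper."""
    if node_stage not in (1, 2, 3) or grade not in (1, 2, 3):
        raise ValueError("nodal stage and grade must each be 1, 2, or 3")
    return 0.2 * size_cm + node_stage + grade

# Hypothetical patient: 22 mm tumour, nodal stage 2 (1-3 positive nodes), grade 2.
print(nottingham_prognostic_index(size_cm=2.2, node_stage=2, grade=2))  # 4.44
```

Cut-points on the resulting score then assign patients to the three (original) or six (later) prognostic groups described above.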
Adjuvant! is a web-based prognostication and treatment benefit tool for breast cancer that is now widely used in the UK to help clinicians and patients make decisions about adjuvant therapy. The mortality estimates used in Adjuvant! were based on 10-year observed overall survival (OS) of women aged 36 to 69 who were diagnosed between 1988 and 1992 and recorded in the Surveillance, Epidemiology and End Results (SEER) registry [6]. Breast cancer specific survival (BCSS) without adjuvant therapy was calculated based on estimates of the number of patients likely to have received systemic therapy and the risk reductions outlined in the Early Breast Cancer Trialists' Collaborative Group [7,8]. Although these assumptions have now been validated in a population-based Canadian dataset [9] there has always been some uncertainty about how applicable the Adjuvant! model is to contemporary patients diagnosed and treated in the UK. A recent paper has shown that Adjuvant! overestimated the overall survival by 6% in a UK cohort of 1,065 women with early breast cancer treated in Oxford between 1986 and 1996 [10].
The primary aim of this study therefore was to develop a prognostication model to predict OS from a large cohort of UK women diagnosed in East Anglia from 1999 to 2003 using cancer registration and OS data recorded by the Eastern Cancer Registration and Information Centre (ECRIC). ECRIC provides near complete breast cancer registration for 10 hospitals in East Anglia as well as information on systemic treatment and mode of detection. A secondary aim of this study was to validate the model in a second UK cancer registry dataset to facilitate development of an online prognostication and treatment benefit tool for UK-based patients with early breast cancer.
Study population
The primary analysis was based on data from patients with invasive breast cancer diagnosed in East Anglia, UK between 1999 and 2003 identified by ECRIC. ECRIC covers a catchment area population of approximately 5.5 million people and registers all malignant tumours occurring in people resident in East Anglia at the time of diagnosis. ECRIC also records prospectively demographic, pathologic, staging, general treatment and outcome information. Death certificate flagging through the Office of National Statistics provides the registries with notification of deaths. The lag times for this are a few weeks for cancer deaths and two months to a year for non-cancer deaths. In addition, ECRIC checked vital status by querying the National Health Service Strategic Tracing Service. Vital status was ascertained at the end of June 2008 and all analyses were censored on 31 December 2007 to allow for delay in reporting of vital status. Breast cancer specific mortality was defined as deaths where breast cancer was listed as the cause of death on Parts 1a, 1b, or 1c of the death certificate.
Information obtained from ECRIC included age at diagnosis, number of lymph nodes sampled and number of lymph nodes positive (categorised as 0, 1, 2 to 4, 5 to 9, and 10+ nodes positive), tumour size (categorised as <10 mm, 10 to 19 mm, 20 to 29 mm, 30 to 49 mm, 50+ mm), histological grade (I, II, III), oestrogen receptor (ER) status (positive or negative), mode of detection (screening vs. clinical), information on local therapy (wide local excision, mastectomy, radiotherapy), and type of adjuvant systemic therapy (chemotherapy, endocrine therapy, both). Exact chemotherapy regimens are unknown, but the majority of breast cancer patients in the ECRIC population received first or second generation chemotherapy during this time period. Patients who did not undergo surgery, patients with incomplete local therapy (wide local excision without radiotherapy) and patients with fewer than four nodes excised with a diagnosis of node-negative disease were excluded from the analyses, leaving a study population of 5,694 individuals (Table 1).
The independent validation dataset comprised women diagnosed with invasive breast cancer between 1999 and 2003 within the boundaries of the West Midlands Cancer Intelligence Unit (WMCIU). The geographic area served by WMCIU has a population of approximately 5.3 million individuals. Identical patient demographic information and study endpoints were retrieved from the WMCIU cancer registration database, with the same exclusions applied as for the ECRIC dataset. The total validation study population included 5,468 individuals (Table 1). As this was a large population-based study, with full anonymisation of all data, informed consent and ethical approval were not sought.
Prognostic model parameters
Breast cancer specific mortality and mortality from other causes (competing mortality) were modelled separately. For breast cancer specific mortality, a Cox proportional hazards model was used to estimate the hazard ratio associated with each prognostic factor. As the effect of ER status varies over time [11], ER negative and ER positive tumours were modelled separately. Nodal status, tumour grade and tumour size were modelled both as categorical variables and as ordinal variables. The models with ordinal variables fit the data better, and so these were chosen for the final models. Chemotherapy, endocrine therapy, and tumour detection by screening were treated as simple indicator variables. For the purposes of this study, screen-detected cancers were those discovered by screening mammography in the NHS Breast Screening Programme, which at the time offered three-yearly mammography to women aged 50 to 64. In an exploratory analysis, age at diagnosis was included as a categorical variable in five age groups (<40, 40 to 49, 50 to 59, 60 to 69 and 70+), but these groups were not found to be significantly associated with breast cancer specific mortality (data not shown) and age was excluded from subsequent models.
Competing mortality was modelled separately and adjusted for age at diagnosis. Exploration of the age specific beta-coefficients suggested that the effect varied exponentially with age; the best fit model was age to the power of 2.38.
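The paper reports that its analyses were run in STATA; purely for illustration, the sketch below shows how a power-transformed age term of this kind might be fitted with the Python lifelines package. The input file and column names are hypothetical assumptions, not taken from the study.

```python
# Sketch only: file name and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("registry.csv")           # one row per patient
df["age_pow"] = df["age"] ** 2.38          # power transform suggested by the beta-coefficients

cph = CoxPHFitter()
cph.fit(df[["followup_years", "died_other_cause", "age_pow"]],
        duration_col="followup_years",     # time to competing death or censoring
        event_col="died_other_cause")      # 1 = death from a non-breast-cancer cause
cph.print_summary()
```

In practice the exponent itself would be chosen by comparing model fit across a grid of candidate powers rather than fixed in advance.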
Model discrimination and calibration
We used the baseline survivor function from the ER negative and ER positive Cox proportional hazards models for breast cancer specific survival, adjusted for the other prognostic factors, to estimate the predicted number of deaths from breast cancer. Deaths from other causes were estimated from the baseline survivor function for competing mortality after adjusting for age. The total number of deaths at Years 5 and 8 after diagnosis was estimated by summing the breast-specific and competing mortality. Observed and predicted deaths were compared using a standard Chi-squared test. Model discrimination was evaluated by calculating the area under the receiver-operator-characteristic (ROC) curve (AUC) for breast cancer specific and overall deaths at Year 8 past diagnosis. The ROC curve plots sensitivity against 1 − specificity at different predicted risk thresholds. Model calibration was assessed using a simplified goodness-of-fit (GOF) method for the Cox proportional hazards model proposed by May and Hosmer [12], in which observed and model-based estimated deaths at Year 8 after diagnosis within deciles of risk score were compared. This provides a goodness of fit Chi-square test. As the baseline hazards and prognostic variable coefficients differed for ER positive and ER negative models, separate GOF tests were carried out for these models. In subgroup analyses, where numbers within deciles of risk score were small, quartiles of risk scores were used. Person-years lost were calculated by taking the area under the cumulative risk curve. Analyses were performed using STATA, version 9.2 (StataCorp, College Station, TX, USA).
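A rough sketch of these discrimination and calibration checks is given below. It assumes, for simplicity, complete 8-year follow-up for every patient, and it uses a Pearson-style chi-square over risk deciles as an approximation to the May and Hosmer method; the file and column names are hypothetical, and the published analyses were performed in STATA rather than Python.

```python
# Sketch only: assumes complete 8-year follow-up for every patient (the published
# method handles censoring more carefully); file and column names are hypothetical.
import pandas as pd
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

df = pd.read_csv("registry.csv")
observed_death = df["dead_by_8yr"]       # 0/1 indicator of death by year 8
predicted_risk = df["pred_mort_8yr"]     # model-based predicted 8-year mortality

print("AUC:", roc_auc_score(observed_death, predicted_risk))

# Calibration: observed vs. expected deaths within deciles of predicted risk.
decile = pd.qcut(predicted_risk, 10, labels=False)
observed = observed_death.groupby(decile).sum()
expected = predicted_risk.groupby(decile).sum()
gof = ((observed - expected) ** 2 / expected).sum()
print("GOF p-value:", chi2.sf(gof, df=len(observed) - 2))
```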
Initial model fit
The ECRIC data set was used to derive the primary prognostic models for breast specific and competing mortality. Beta-coefficients and standard errors for each prognostic factor in both the ER negative and ER positive models are provided in Table 2. The estimated relative hazard associated with treatment with adjuvant hormone therapy was smaller than the published estimate based on randomised clinical trials [7,8] in women with ER positive tumours, and was associated with a poorer prognosis in women with ER negative tumours where no effect is expected based on clinical trial data. These differences are likely to represent bias due to clinical selection or patient non-compliance in the observational data.
As expected, this model was well calibrated. The model tended to over-predict mortality, but the difference between actual and predicted deaths was less than one percent at five and eight years after diagnosis (14.8 vs. 15.6 percent and 18.9 vs. 19.0 percent, respectively), differences that were not statistically significant (P = 0.10 and 0.83, respectively). There were 31,904 person-years of follow-up compared to 31,662 predicted. Model discrimination was also good: the calculated area under the ROC curve (AUC) for the overall model was 0.81 (SE 0.0074) (Table 3). Similarly, breast cancer specific actual and predicted mortality were within one percent at Years 5 and 8 past diagnosis (10.6 vs. 11.0 percent, P = 0.28 and 12.9 vs. 13.5 percent, P = 0.26, respectively; AUC = 0.84, SE = 0.008) (Additional file 1, Table S2). The ER positive and ER negative prognostic models were also well-calibrated overall and for all subgroups, and the goodness of fit tests suggest that the models fit well across different risk categories. The ER positive model provided better discrimination (AUC = 0.82, SE = 0.0111) than the ER negative model (AUC = 0.75, SE = 0.0171).
Validation
The WMCIU study population of 5,468 individuals was used for independent prognostic model validation. Overall actual and predicted mortality were within one percent at eight years (17.5 vs. 18.3 percent; AUC = 0.79) (Additional file 1, Table S1). Overall, the ER positive and ER negative prognostic models were well-calibrated, although both models predicted more breast cancer deaths than observed. The overestimation was slightly greater for the ER negative model than the ER positive model. In ER negative disease, the Year 8 actual breast cancer mortality rate was 25.0 percent compared to 30.6 percent predicted; for ER positive tumours, Year 8 actual and predicted breast cancer mortality were within one percent (8.9 vs. 9.2 percent). Overall model fit was good (GOF P-values > 0.05), although the fit was poorer for some sub-groups. Specifically, for ER positive disease, the fit was poorer in women aged <35 years (GOF P = 0.01) and in the 35 to 49-year age category (P = 0.045). For ER negative disease, the model fit was poorer in node negative disease (GOF P = 0.03), in the 30 to 49 mm tumour size category (GOF P = 0.02) and in high grade tumours (GOF P = 0.001) (Additional file 1, Table S2). Model discrimination was also good, again being somewhat better for the ER positive model (AUC = 0.81, SE = 0.0111) than the ER negative model (AUC = 0.75, SE = 0.0169). There were no significant differences between the ROC curves generated with the ECRIC and WMCIU data (ER positive χ² = 0.17, P = 0.68; ER negative χ² = 0.00, P = 0.95) (Figure 1).
We also explored the overall and breast cancer specific mortality within T1N0 and T2N0 good prognosis subgroups, where decisions regarding adjuvant therapy can be difficult and challenging (Additional file 1, Table S3). In the WMCIU population, 1,931 individuals were diagnosed with T1N0 tumours, while 1,182 individuals were diagnosed with T2N0 tumours. For T1N0 tumours, actual and predicted five- and eight-year overall mortality rates were within 2.1 percent (5.5 vs. 7.6 percent and 6.1 vs. 8.2 percent, respectively); actual and predicted five- and eight-year breast cancer specific mortality rates were within one percent (2.4 vs. 3.3 percent and 2.8 vs. 3.6 percent, respectively). For T2N0 tumours, actual and predicted five- and eight-year overall mortality was within 2.5 percent (11.7 vs. 14.1 percent and 13.5 vs. 15.2 percent, respectively); actual and predicted five- and eight-year breast cancer specific mortality was within one percent (7.9 vs. 8.7 percent and 9.1 vs. 9.4 percent, respectively).
Summary comparison of overview vs. model-derived therapy benefit estimates
Given the difference in the estimates of the effects of hormone therapy from the ECRIC dataset compared to published clinical trial data, we also fitted models (constrained models) with the relative hazard of hormone therapy constrained to the published estimate from the 1998 overviews (relative hazard 0.68 for ER positive tumours). Under this constrained model, the estimates for the other prognostic factor coefficients were similar to the original, data-driven model (Additional file 1, Table S4). Performance of the constrained model was slightly poorer in the ECRIC data than the full data-driven model (Table 5), but the difference between actual and predicted mortality at eight years and between actual and predicted person-years of follow-up was still small. In the WMCIU validation dataset, the constrained model performed better. Finally, we tested the performance of models using the data-derived coefficients for grade, node status, tumour size and mode of detection from the full and constrained models with the benefit estimates from the 1998 overviews (Table 5). First generation chemotherapy benefit estimates were applied in all these analyses. Again the models performed slightly poorer than the full, data-driven model in the ECRIC dataset, but somewhat better in the WMCIU validation dataset.
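The idea of a constrained model can be sketched in a few lines: the hormone-therapy coefficient is pinned to the log of the overview relative hazard while the remaining coefficients come from the fitted model. Every coefficient value, covariate name, and baseline survivor value below is a placeholder, not a published estimate.

```python
# Sketch only: coefficients and covariate names are placeholders.
import numpy as np

fitted_betas = {"grade": 0.50, "nodes": 0.40, "size": 0.30, "screen_detected": -0.35}
beta_hormone = np.log(0.68)   # constrained to the 1998 overview relative hazard

def linear_predictor(patient):
    lp = sum(beta * patient[name] for name, beta in fitted_betas.items())
    return lp + beta_hormone * patient["hormone_therapy"]

# Predicted 8-year breast cancer specific survival from a baseline survivor value S0(8):
s0_8yr = 0.95                 # hypothetical baseline survivor function at year 8
patient = {"grade": 3, "nodes": 1, "size": 2, "screen_detected": 0, "hormone_therapy": 1}
print("Predicted 8-year BCSS:", s0_8yr ** np.exp(linear_predictor(patient)))
```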
Discussion
We have developed a prognostication model for early breast cancer based on data collated from a large number of patients within a single UK cancer registry. The model was validated using data from a second UK registry. As both the model and validation datasets contain over 5,000 patients, this model is likely to be predictive of overall survival for all women diagnosed with early breast cancer in the UK. The model was well calibrated and provides a high degree of discrimination across different prognostic groups. A particular strength of this project was the ability to access breast cancer specific mortality from ECRIC, based on death certificate reporting rather than being estimated from population data.
Accurate prediction of survival, and subsequent calculation of treatment benefit, has become increasingly sophisticated in the management of early breast cancer in the UK. Although the introduction of the NPI allowed risk stratification into five then six [4] prognostic groups, the original models provided survival estimates based on the average survival for each individual group. Furthermore, the model was based on treatment from a single institution where individual treatment bias may have an effect on overall survival. Despite this potential shortcoming, the NPI has been successfully validated in external datasets [3] and has now been further developed to include more individual survival prediction based on individual rather than group NPI scores [5].
The publication of the Adjuvant! prognostication and treatment benefit tool in 2001 led to widespread and early adoption in the UK. The web-based system allowed free access and was recognised as being user friendly for both clinicians and patients with breast cancer. Adjuvant! was seen to provide several advantages over and above the NPI, including individual survival predictions and calculation of potential treatment benefits for that patient. The use of coloured bar charts to display this information facilitated the often difficult discussions surrounding systemic adjuvant therapies with patients and allowed the development of treatment thresholds for chemotherapy in individual breast units. The Adjuvant! model is based on population data collected by the Surveillance, Epidemiology and End Results (SEER) registry [6]. Breast cancer specific survival (BCSS) without adjuvant therapy was calculated based on estimates of the number of patients likely to have received systemic therapy and the risk reductions outlined in the Early Breast Cancer Trialists' Collaborative Group [7,8]. In contrast, systemic therapy was recorded for all patients used to generate this model, as well as breast cancer specific mortality. Breast cancer registration is close to 100 percent for both SEER and ECRIC data across specific geographic regions. This may limit their generalisability, but the good performance of the model based on ECRIC data in an independent dataset from a different region of the UK, and validation of Adjuvant! using data from a population registry from Canada [9], suggests that this is not likely to be a significant problem.
A key aim that underpinned development of this model was to develop a prognostication and treatment benefit tool that benefited from the many attributes of the Adjuvant! model but which was specifically tailored to the UK population. UK cancer registries have near complete prospective data collection on breast cancer registration, pathological features, treatment and death notification. The ECRIC data used in this study included all female breast cancer cases that were treated surgically and were fully characterised for mode of detection, tumour size and grade, lymph node and ER status and details of adjuvant therapy. ECRIC collects data from more than 10 hospitals in East Anglia, only two of which are teaching hospitals with strong research activity. As a result, the data collected by ECRIC are likely to be representative of the UK as a whole, reflect good practice rather than best practice, and were an ideal data source on which to base the initial model. In addition, the success of the NHS Breast Screening Programme in the UK has meant that there has been a shift to better prognostic groups at diagnosis than previously. Two recent papers have suggested that screen detection confers an additional survival benefit beyond stage shift and reduces the risk of systemic recurrence when compared with symptomatic cancers of a similar stage [13,14]. Although the majority of the survival advantage associated with breast screening can be explained by this shift to an earlier stage at diagnosis, recent evidence suggests that approximately 25 percent of the survival advantage is still unexplained [15]. Introduction of mode of detection (screen-detected versus symptomatic) was therefore a key requirement for this model, as was adjustment of the nodal status groups with creation of a single node positive group. The inclusion of a group with a single positive node will allow these patients to have more accurate survival prediction than previously, as prognosis in Adjuvant! is based on the average of the one to three node positive group.
The model performs well across all prognostic groups in the development (ECRIC) dataset except in patients ≥ 75 years old, where the predicted mortality at Year 8 past diagnosis was less than observed (250 predicted vs. 276 actual deaths). This was also seen in the validation (WMCIU) data. In these data the model also predicted a more favourable outcome than observed for low grade tumours and a less favourable outcome than observed for high grade and ER negative tumours.
A key decision, when considering the application of this model as a predictor of treatment benefit, is whether to use the data-derived coefficients for hormone therapy or chemotherapy or the benefit estimates from published overview data [7,8]. The application of the overview estimates to the full model was a strong predictor of both eight-year mortality and person-years of follow-up in the WMCIU validation dataset and has the advantage of allowing regular updates as further overview results are published.
Conclusions
In conclusion we have developed a prognostication model for early breast cancer based on data from a UK cancer registry that has included mode of detection for the first time. The model is well calibrated, provides a high degree of discrimination and has been validated in a second UK patient cohort. This model, together with application of published relative risk reductions for systemic therapy, will underpin a new web-based prognostication and treatment benefit tool for early breast cancer in the UK.
Additional material
Abbreviations AUC: area under ROC curve; BCSS: breast cancer specific survival; ECRIC: Eastern Cancer Registration and Information Centre; ER: oestrogen receptor; GOF: goodness-of-fit; NPI: Nottingham Prognostic Index; OS: overall survival; ROC: receiver-operator characteristic; SEER: Surveillance, Epidemiology and End Results; WMCIU: West Midlands Cancer Intelligence Unit.
GL participated in the data acquisition, analysis and writing of the manuscript. CC participated in the design, analysis and writing of the manuscript. | 2017-08-03T02:18:08.857Z | 2010-01-06T00:00:00.000 | {
"year": 2010,
"sha1": "3bf7c725775f60701132796e627f2c5f83220e85",
"oa_license": "CCBY",
"oa_url": "https://breast-cancer-research.biomedcentral.com/track/pdf/10.1186/bcr2464",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6810b3f75cf77780f3974448aff32d97a8956b61",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257250477 | pes2o/s2orc | v3-fos-license | Psychometric properties and normative data of the childhood trauma questionnaire-short form in Chinese adolescents
Background: The Childhood Trauma Questionnaire-Short Form (CTQ-SF) is a widely utilized instrument of childhood maltreatment (CM). However, psychometric properties and normative data of the CTQ-SF for Chinese adolescents are still unknown.

Objective: To examine psychometric properties and normative data of the Chinese version CTQ-SF in a nationally representative sample of Chinese adolescents, including internal consistency reliability, test–retest reliability, structural validity, and convergent validity.

Method: A total of 20,951 adolescents aged 12 to 18 years were recruited from five provinces across China. Item analysis was used for the 25 clinical items of the CTQ-SF. Confirmatory factor analysis was performed to examine fit indices of the factor structure. The Adverse Childhood Experiences Scale (ACEs) was used to evaluate convergent validity. The percentile ranks for scores of the CTQ-SF and each subscale were presented.

Results: According to the results of three methods of item analysis, Item 4 should be dropped. The remaining 24 clinical items achieved satisfactory fits in an alternative four-factor model. The alternative CTQ-SF showed acceptable internal consistency, and the Cronbach's α of the four subscales was 0.824 (Neglect), 0.755 (Sexual Abuse), 0.713 (Physical Abuse), and 0.666 (Emotional Abuse), respectively. Besides, test–retest reliability and convergent validity of the alternative CTQ-SF were also acceptable.

Conclusion: The alternative four-factor model CTQ-SF exhibits good reliability and validity among Chinese adolescents. Additionally, the normative information of the CTQ-SF could provide practical support for determining the severity of different subtypes of CM.
Introduction
Childhood maltreatment (CM) is a major public health concern worldwide (WHO, 2020). Individuals who experience maltreatment during childhood and adolescence are more likely to develop various mental and psychological problems (Hughes et al., 2017), such as depression (Dunn et al., 2013), anxiety (Li et al., 2016), sleep disorders (Kajeepeta et al., 2015; Xiao et al., 2020), post-traumatic stress disorder (PTSD) (Messman and Bhuptani, 2017), bipolar disorder (Li et al., 2014a), personality disorder (Li et al., 2014b), substance use (Zhang S. et al., 2020), and suicidal behaviors (Grattan et al., 2019). More importantly, most of these adverse effects on mental health last into adulthood (Norman et al., 2012). Given its high prevalence and its direct and indirect damage to individuals and society, it is therefore of great importance to make efforts to prevent and address CM worldwide (WHO, 2020). For targeted detection and intervention, the first and most important task is to validate a suitable and efficient measurement tool to screen victims of CM.
A recent systematic review of 52 eligible self-reported measurements for CM found that the Childhood Trauma Questionnaire (CTQ) is the scale with the strongest psychometric properties and the only one that meets most standards of adequate reliability and validity (Saini et al., 2019). The CTQ is a 70-item self-administered inventory of abuse and neglect experiences, which was developed by Bernstein et al. (1994). In 2003, they revised the original CTQ and developed the Childhood Trauma Questionnaire-Short Form (CTQ-SF) (Bernstein et al., 2003), which showed better psychometric properties than the original CTQ (Saini et al., 2019). Thus, in recent decades, the CTQ-SF has been used more widely than the original CTQ in numerous studies internationally (He et al., 2019; Kongerslev et al., 2019; Aloba et al., 2020; Petrikova et al., 2021). In China, although some researchers have examined the psychometric properties of the CTQ-SF among Chinese high school students, these previous studies only included small samples of adolescents recruited by convenience sampling (Zhao et al., 2005; Zhang, 2011). As a consequence, these existing findings can hardly be generalized to all Chinese adolescents (He et al., 2019). Thus, the main purpose of the current study was to examine the psychometric properties of the CTQ-SF among Chinese adolescents based on a large and representative sample across China.
The CTQ-SF has 28 items, including 25 clinical items (maltreatment evaluation items) and 3 validity items. The 25 clinical items are used to measure the five subtypes of CM: physical abuse, emotional abuse, sexual abuse, physical neglect, and emotional neglect (Bernstein et al., 2003). With regard to internal consistency, many previous studies have found that the physical neglect subscale generally has the poorest internal consistency among all five subscales (He et al., 2019;Aloba et al., 2020;Petrikova et al., 2021;Wu et al., 2022). In the light of these results, more research should try to re-examine or modify the items from the original CTQ-SF, especially for the physical neglect subscale (Georgieva et al., 2021). Besides, due to the cultural and social development differences between different countries and regions, it is not clear whether all items from the original CTQ-SF are suitable in the Chinese population (Charak et al., 2017;Rodriguez et al., 2019). Therefore, it is necessary to retest and screen items from the original CTQ-SF.
Bernstein et al. used confirmatory factor analysis (CFA) to confirm the five-factor model for the 25 clinical items and demonstrated that the model had a good fit across several different populations (Bernstein et al., 2003). However, some subsequent studies found that the original five-factor model of the CTQ-SF was not universal for clinical or community samples of adolescents and adults (Garrusi and Nakhaee, 2009; Klinitzke et al., 2012; Grassi-Oliveira et al., 2014), because they failed to replicate the original five-factor model (Gerdner and Allgulander, 2009; Kongerslev et al., 2019; Aloba et al., 2020). Meanwhile, other studies recommend that a four-factor structure of the CTQ-SF could be a good alternative, in which items from the Physical neglect and Emotional neglect subscales are collapsed into a single Neglect subscale (Sacchi et al., 2018; Kongerslev et al., 2019; Şar et al., 2021). Some existing evidence supports the view that the conceptions of physical neglect and emotional neglect in the original CTQ-SF model are too interwoven to distinguish between these two forms of childhood neglect (Kim et al., 2011). However, to date, no empirical study has examined the fit indices of this four-factor model. Therefore, the current study aims to determine whether the alternative four-factor model of the CTQ-SF has good structural fit in Chinese adolescents.
Beyond the raw scores of the CTQ-SF and its subscales, normative data and cut-off scores can be applied to describe histories of abuse and neglect as well as their severity (Bernstein et al., 2003). Bernstein et al. first developed a norm for the CTQ-SF based on 286 patients with alcohol and drug abuse (Bernstein et al., 1994). In addition, Scher et al. presented further normative data for the CTQ-SF by recruiting 1,007 community residents between the ages of 18 and 65 years in the United States (Scher et al., 2001). This study reported scores of the CTQ-SF and its subscales based on the percentiles P25, P50, P75, P90, and P95. However, although the above criteria for classifying CM are widely used (Bernstein et al., 2003), there are still differences in cut-off values across studies (Jiang et al., 2018; Zhang S. et al., 2020). More importantly, all existing normative data and cut-off values of the CTQ-SF were developed from small samples in western countries. To date, it is still unknown whether these data are useful and suitable for Chinese adolescents. To the best of our knowledge, there are no published cut-off values or normative information of the CTQ-SF for a representative sample of Chinese adolescents.
To fill the gaps, this study aims to examine the psychometric properties and normative data of the Chinese version CTQ-SF based on a large-size and representative sample of Chinese adolescents. Our objectives are fourfold: First, we retest and screen items from the original CTQ-SF to determine whether all 25 clinical items are suitable and available in the context of Chinese culture. Second, we further examine the fit indices of the four-factor model of the CTQ-SF via confirmatory factor analyses (CFA). Third, we assess the internal consistency reliability, test-retest reliability, structural validity, and convergent validity of the CTQ-SF. Fourth, we aim to present the means, standard deviations, and percentile ranks of scores for the CTQ-SF and each subscale based on a nationwide representative sample.
Procedure and participants
A multi-stage cluster sampling method was adopted from April to December 2021. In Stage 1, China was divided into five main geographic regions (eastern, southern, western, northern, and central). Five representative provinces (Jiangsu, Guangdong, Yunnan, Gansu, and Hubei) were randomly selected, one from each region. In Stage 2, two cities were chosen randomly in each selected province. In Stage 3, we selected one district in an urban area and one county in a rural area from each selected city. In Stage 4, one junior high school and one senior high school were selected randomly in each sampled district or county. In Stage 5, we used random digits to choose four or six classes from every grade (7th to 12th) in each selected school, based on enrollment size. Finally, we invited all students in the selected classes to participate in this survey voluntarily. Among 23,207 questionnaires in total, 1,425 were excluded because more than 15% of items in the whole questionnaire were missing, and 831 were excluded because the respondent's age was more than 18 or less than 12 years. Finally, 20,951 students' questionnaires were qualified for the final analysis, and the actual response rate was 90.28% (20,951/23,207). In addition, to assess test-retest reliability, we invited 1,500 of the 23,207 participants to complete the questionnaire again after 6 months. Finally, 1,389 retest questionnaires were qualified, and the response rate was 92.60% (1,389/1,500).
Instruments Childhood trauma questionnaire-short form
The original CTQ-SF was developed by Bernstein et al. (2003), and it was first translated into Chinese by Zhao et al. (2005). The CTQ-SF contains five subscales to evaluate five subtypes of CM. Each subscale consists of 5 items, and each item is rated on a 5-point Likert scale (1 = Never true, 2 = Rarely true, 3 = Sometimes true, 4 = Often true, 5 = Always true). Therefore, the score of each subscale ranges from 5 to 25, and the total score of the CTQ-SF ranges from 25 to 125. In addition to the 25 clinical items, the Minimization and Denial Scale (MD) consists of 3 items (Items 10, 16, and 22), which are not classified under any subtype of abuse or neglect. Since the MD scale is used to reveal denial of problems in CM, we did not analyze the MD scale in our study (Aloba et al., 2020; Petrikova et al., 2021).
Adverse childhood experience scale
The Adverse Childhood Experience Scale (ACEs) was also used to assess experiences of CM prior to the age of 18 (Felitti et al., 1998; Finkelhor, 2018). In the present study, the ACEs was used to assess the convergent validity of the CTQ-SF (Schmidt et al., 2020; Petrikova et al., 2021). We used five items from the ACEs to represent the five subtypes of CM. Each item was dichotomized (0 = No, 1 = Yes), and the total score of the ACEs ranged from 0 to 5 in this study. According to previous studies conducted among the Chinese population, the ACEs has acceptable validity and reliability among Chinese students and can be generalized to evaluate adverse childhood experiences for Chinese children and adolescents (Wang et al., 2018; Zhang L. et al., 2020).
Data analysis
All data were analyzed with version 26 of SPSS and AMOS (IBM Corp.) for Windows. The significance level was set at p < 0.05 (two-sided) for all statistical significance testing. Descriptive statistics (M ± SD) were used to depict the CTQ-SF and its subscale scores. An independent-samples t-test was used to compare the significance of score differences between males and females (He et al., 2019). In addition to the p-values, the Cohen's d effect size coefficient was evaluated (Cohen, 1988), in which the effect size was either small (d = 0.20), medium (d = 0.50), or large (d = 0.80).
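For illustration, the sex comparison can be sketched as below. The data are simulated, and the pooled-standard-deviation form of Cohen's d is an assumption; the study itself ran these tests in SPSS.

```python
# Sketch only: simulated scores stand in for the real subscale data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
males = rng.normal(31.0, 6.0, 500)
females = rng.normal(30.5, 6.0, 500)

def cohens_d(a, b):
    # Pooled-standard-deviation form of Cohen's d.
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

t, p = ttest_ind(males, females)
print(f"t = {t:.3f}, p = {p:.4f}, d = {cohens_d(males, females):.2f}")
```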
After excluding the 3 minimization/denial items (Items 10, 16, and 22), we applied three item-analysis methods to the 25 clinical items from the original CTQ-SF (Bernstein et al., 2003). First, the correlation coefficient between each item and its subscale was examined; if the correlation coefficient was less than 0.30 (r < 0.30), the item should be deleted. Second, the contribution of each item was measured by factor loading analysis; if the maximum factor loading of an item was less than 0.40, the item should be deleted. Third, if the internal consistency of a subscale increased rather than decreased after an item was removed, this generally indicated that the item reduced the homogeneity of the subscale, and the item should be deleted. Finally, if an item was recommended for deletion by all three methods, we dropped it in the following analyses (DeVellis, 1991; Sun and Xu, 2014).
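These criteria reduce to a few lines of code. The sketch below covers the first and third criteria (corrected item-total correlation and alpha-if-item-deleted); the factor-loading criterion would come from a separate factor analysis. The 200-by-5 score matrix is a simulated stand-in for one subscale.

```python
# Sketch only: `items` is an (n_respondents x n_items) score matrix for one subscale.
import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def item_analysis(items):
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1)
        # Criterion 1: corrected item-total correlation (item vs. sum of remaining items).
        r = np.corrcoef(items[:, j], rest.sum(axis=1))[0, 1]
        # Criterion 3: does alpha improve when the item is removed?
        print(f"item {j + 1}: r = {r:.3f}, alpha if deleted = {cronbach_alpha(rest):.3f}")

rng = np.random.default_rng(1)
item_analysis(rng.integers(1, 6, size=(200, 5)).astype(float))
```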
Psychometric properties were further explored. The internal consistency of the CTQ-SF was evaluated with the Cronbach's alpha (α) coefficient. In general, a Cronbach's α > 0.70 is considered acceptable, although α > 0.60 is also used (DeVellis, 1991). Because scores on the CTQ-SF and the ACEs were not normally distributed, Spearman's rho correlations (r) were calculated for assessing test-retest reliability, concurrent validity, and convergent validity (Jiang et al., 2018). The effect sizes of correlation coefficients were based on the criteria developed by Cohen (1988), in which the effect size was either small (r = 0.10), medium (r = 0.30), or large (r = 0.50).
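The reliability and validity coefficients reduce to a standard rank-correlation call, sketched below with simulated paired scores; real analyses would use the matched survey and retest records.

```python
# Sketch only: simulated paired scores stand in for the survey and 6-month retest data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
first = rng.integers(24, 80, 1389).astype(float)
retest = first + rng.normal(0, 10, 1389)      # noisy repeat measurement

rho, p = spearmanr(first, retest)
print(f"test-retest Spearman rho = {rho:.3f} (p = {p:.3g})")
```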
Confirmatory factor analysis (CFA) was performed in AMOS using maximum likelihood estimation with the covariance matrix as input. Because the χ²/df ratio tends to reject models inappropriately owing to its sensitivity to large sample sizes (Joereskog, 1993), the following fit indices were used to evaluate the four-factor model of the CTQ-SF: the comparative fit index (CFI), the goodness of fit index (GFI), and the root mean square error of approximation (RMSEA) (Steiger, 1990). Generally, the following criteria were used to judge an acceptable model: CFI ≥ 0.85, GFI ≥ 0.90, and RMSEA ≤ 0.08 (Brown, 2006; Hu and Bentler, 1998; Marsh et al., 2004).
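Two of these indices can be computed directly from the model and baseline chi-square values reported by the CFA software; GFI has a software-specific definition and is typically read straight from AMOS output. The formulas below are the standard ones, and the numbers plugged in are hypothetical, not the study's results.

```python
# Sketch only: chi-square values, degrees of freedom, and n are hypothetical.
import numpy as np

def rmsea(chi2, df, n):
    return np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    # m = fitted model, b = baseline (independence) model
    return 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 0.0)

print("RMSEA:", rmsea(chi2=5000.0, df=246, n=20951))
print("CFI:  ", cfi(chi2_m=5000.0, df_m=246, chi2_b=60000.0, df_b=276))
```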
Demographic characteristics of the participants
Among the 20,951 participants, females (50.4%) slightly outnumbered males (49.6%). Their ages ranged from 12 to 18 years, and the mean (SD) age was 15.27 (1.75). More information is shown in Table 1.
Item analysis for 25 clinical items from the original CTQ-SF
The correlation coefficient between Item 4 and the Physical neglect subscale was less than 0.30. The maximum factor loading of Item 4 and Item 17 was less than 0.40. The Cronbach's α of the Physical neglect subscale was 0.491. After dropping Item 4, the Cronbach's α was increased to 0.494 (Table 2). According to the results of the three methods, Item 4 (My parents were too drunk or high to take care of the family) should be deleted in the alternative CTQ-SF.
Reliability and validity of the CTQ-SF
The Cronbach's α of the original CTQ-SF was 0.852, and the Cronbach's α for the five subscales from high to low was 0.857 (Emotional neglect), 0.755 (Sexual abuse), 0.713 (Physical abuse), 0.666 (Emotional abuse), and 0.491 (Physical neglect), respectively. The Cronbach's α of the alternative CTQ-SF was 0.851, and the Cronbach's α of the Neglect subscale was 0.824 (Table 3). The 6-month test-retest reliability coefficient of the alternative CTQ-SF was 0.548, and the corresponding correlation coefficients for the four subscales ranged from 0.256 (Physical abuse) to 0.509 (Emotional abuse). All these correlations were statistically significant (p < 0.01) (Table 3). The alternative CTQ-SF score was significantly correlated with the ACEs score (r = 0.355, p < 0.01). Besides, all four subscales of the alternative CTQ-SF had significantly positive correlations with the ACEs score (p < 0.01), and the effect sizes of these correlations ranged from 0.195 to 0.379. The correlation coefficient between the original and alternative CTQ-SF was 0.999 (p < 0.01). The correlation coefficients between the Neglect subscale from the alternative CTQ-SF and the Physical neglect and Emotional neglect subscales from the original CTQ-SF were 0.829 and 0.910, respectively (p < 0.01) (Table 4).
Scores of the alternative CTQ-SF and the normative data
The mean scores of the alternative CTQ-SF are provided in Table 5. Males scored significantly higher than females on the CTQ-SF (t = 2.584, p = 0.010, Cohen's d = 0.04), Physical abuse (t = 7.534, p < 0.001, Cohen's d = 0.10), Sexual abuse (t = 10.953, p < 0.001, Cohen's d = 0.15), and Neglect (t = 4.181, p < 0.001, Cohen's d = 0.06). The score of the Emotional abuse subscale among females was significantly higher than that among males (t = 12.845, p < 0.001, Cohen's d = 0.18). Given the small effect sizes of the sex differences for scores of the CTQ-SF and subscales (all Cohen's d < 0.2), we computed normative scores in all participants regardless of sex (Table 6).
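Percentile norms of this kind are straightforward to compute. The sketch below assumes a one-dimensional array of total scores loaded from a hypothetical file; the individual score used in the rank example is likewise invented.

```python
# Sketch only: the input file and the example score are hypothetical.
import numpy as np
from scipy.stats import percentileofscore

totals = np.loadtxt("ctq_totals.txt")          # alternative CTQ-SF total scores

cuts = [25, 50, 75, 90, 95]
norms = dict(zip(cuts, np.percentile(totals, cuts)))
print("normative percentiles:", norms)

# Percentile rank of an individual respondent's score against the norm sample:
print("rank of a score of 40:", percentileofscore(totals, 40))
```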
Discussion
This is the first study to explore the psychometric performance and normative information of the CTQ-SF in a nationally representative sample of Chinese adolescents. Our study provides several major and new findings. First, one item (Item 4) has very low correlation and homogeneity with the original CTQ-SF in our data and should be dropped from the alternative CTQ-SF. Second, when the Physical neglect and Emotional neglect subscales are combined into a single Neglect subscale, the alternative CTQ-SF with four subscales has better internal consistency than the original CTQ-SF with five subscales. Third, the first Chinese norm of the CTQ-SF will help to classify the severity of abuse and neglect among Chinese adolescents. These findings can greatly benefit the promotion of the CTQ-SF in Chinese culture and society. Further, they can help scholars, clinicians, and social workers to detect and screen childhood maltreatment among Chinese adolescents.
According to the results of the item analysis in the current study, Item 4 (My parents were too drunk or high to take care of the family) should be removed because it is weakly related to the CTQ-SF and its subscale. Moreover, Item 4 increases the heterogeneity of the Physical neglect subscale. Although this finding is consistent with the original CTQ-SF from Bernstein and his colleagues (Bernstein et al., 2003), some scholars argue that not all items from the original CTQ-SF are appropriate for populations with different languages and cultures (Charak et al., 2017; Rodriguez et al., 2019; Şar et al., 2021). In many western countries, parents' failure to take good care of their children for any reason is a typical behavior of neglect and may even be subject to legal sanctions. However, under traditional Chinese family values, if parents are unable to take care of their children, the children will usually be taken care of by their grandparents or other relatives. Therefore, this common phenomenon is not seen as physical neglect by most Chinese people. Additionally, compared with most western developed countries, Chinese laws and rules regulating alcohol, drug use, and gambling are stricter. Consequently, the incidence of alcohol abuse, drug abuse, gambling, and other illegal acts in China is far lower than in western countries (Chen et al., 2019; Hoggatt et al., 2021). Therefore, it is rare in the Chinese community for parents to be too drunk or high to take care of their family.

The Cronbach's α of the original CTQ-SF reached accepted standards (α > 0.70), which reflects good internal consistency of the scale (He et al., 2019). However, the Physical neglect subscale has a very low internal consistency coefficient (α = 0.491), much lower than an acceptable standard. This finding is consistent with most previous studies (Grassi-Oliveira et al., 2014; He et al., 2019; Aloba et al., 2020; Petrikova et al., 2021; Wu et al., 2022). The lack of homogeneity in the Physical neglect subscale may indicate a construction problem in the original CTQ-SF. The Physical neglect subscale contains three items focused on poverty and two other items focused on a lack of care. However, the behaviors referred to as physical neglect can also represent a lack of emotional care (English et al., 2005). Additionally, Gerdner et al. suggest that the poor internal consistency may be due to poor differentiation of physical neglect from emotional neglect (Gerdner and Allgulander, 2009). Moreover, these two seemingly separate factors are conceptually intermingled in the construct of neglect (Hernandez et al., 2013). Collectively, we can assume that the poor internal consistency of the Physical neglect subscale is not attributable to the Chinese translation, but probably reflects a heterogeneity problem of the Physical neglect subscale in the original CTQ-SF (Petrikova et al., 2021). After merging items of the Physical neglect and Emotional neglect subscales into the Neglect subscale, the four-factor model of the alternative CTQ-SF has better internal consistency than the original five-factor model. This surprising finding, based on an innovative approach, also partly overcomes the heterogeneity of the Physical neglect subscale. In the alternative CTQ-SF, the Cronbach's α for the Neglect subscale was high (α = 0.824).
In addition, the correlations between the Neglect subscale of the alternative CTQ-SF and the Physical neglect and Emotional neglect subscales of the original CTQ-SF were very strong (r > 0.80). These results suggest a robust association between physical neglect and emotional neglect, and these two forms of maltreatment overlap to a large extent (Kim et al., 2011). In theory, it is difficult to operationally distinguish the construct of neglect because nearly all definitions are based on personal discernment of a lack of care (English et al., 2005). Moreover, neglect can occur emotionally even when needs are met physically (Grassi-Oliveira et al., 2014).
The normative data of the CTQ-SF among Chinese adolescents in this study are quite different from those of a previous study (Scher et al., 2001). For example, the P25, P50, P75, and P90 percentiles of the PA subscale in the American community population are 5, 6, 7, and 9, respectively (Scher et al., 2001); in our study, the corresponding values are 5, 5, 5, and 7. Moreover, the P25, P50, and P75 of the EA subscale in American adults are 5, 5, and 7, respectively, whereas among Chinese adolescents they are 5, 6, and 8.
The results indicate that the score distribution of the CTQ-SF varies greatly between different populations, so the norm of the CTQ-SF may also vary greatly. Therefore, the development of norms should take account of the social and cultural differences between countries. Besides, the cut-off values of the CTQ-SF should be modified based on specific normative data. If scholars use the same norm and cut-off values in different research populations, the prevalence of childhood maltreatment will be significantly overestimated or underestimated. For example, when we apply the cut-off value (score ≥ 10) of the Physical abuse subscale according to Bernstein et al. (2003), the prevalence of physical abuse is near 3% for Chinese children and adolescents aged 12 to 18 in the current study. Apparently, this prevalence is much lower than that reported in a recent meta-analysis, which indicated that the prevalence of physical abuse in Chinese primary and secondary school students is 20% (95% CI: 13-27%). Therefore, the formulation of CTQ-SF norms and cut-off values should consider the characteristics of different participants. On the other hand, the normative and psychometric data of the CTQ-SF from our report can greatly help researchers and educators to screen for childhood maltreatment and estimate its prevalence.
Limitations
The current study has several limitations. First, we recruited a large sample of Chinese adolescents from 40 junior and senior high schools, but did not recruit community adolescents who are not enrolled in school. Although the proportion of such adolescents aged 12 to 18 is very small in China, future research could recruit some unenrolled adolescents to make the sample more representative. Second, the test-retest reliability of the alternative CTQ-SF in this study was 0.548, which did not meet the common standard of more than 0.70. This may be attributable to the 6-month interval between the first survey and the retest, which is far longer than the common interval of 2 to 4 weeks for test-retest surveys. In future research, we should attempt to overcome the impact of COVID-19 on field investigation and shorten the test-retest interval. Third, although we have developed the first norm of the CTQ-SF among Chinese adolescents in the current study, we were unable to dichotomize the experience of abuse or neglect via validated cut-off values. In the next step, we will conduct structured clinical interviews with participants and plot the receiver operating characteristic (ROC) curve for the Chinese version of the CTQ-SF (Walker et al., 1999), which will have great significance for the diagnosis of childhood maltreatment.
Conclusion
The findings from the current study support the good reliability and validity of the alternative CTQ-SF in Chinese adolescents, which includes 24 clinical items and four subscales: Physical abuse, Emotional abuse, Sexual abuse, and Neglect. Moreover, the first Chinese norm of the CTQ-SF could greatly help scholars, educators, and clinicians to detect and screen adolescents' maltreatment experiences and to reveal the epidemiological characteristics of abuse and neglect among Chinese adolescents.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by The Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Author contributions
CP developed the initial manuscript. JC and FR were responsible for the data collection and the data analysis. YW contributed substantially to the revision and refinement of the final manuscript. YY guided the overall design of the study. All authors contributed to the article and approved the submitted version. | 2023-03-01T16:17:04.062Z | 2023-02-27T00:00:00.000 | {
"year": 2023,
"sha1": "389baee6a6ba5255c256287089444cb8648aead2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1130683/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3a4c665fe2aad3d4f0d642c05f2e04add4bb75fe",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11581897 | pes2o/s2orc | v3-fos-license | Reevaluation of the role of Pex1 and dynamin-related proteins in peroxisome membrane biogenesis
Analysis of Pex1 and dynamin-related protein function indicates peroxisomes multiply mainly by growth and division in Saccharomyces cerevisiae, whereas no evidence was found for the previously proposed role for Pex1 in peroxisome formation by fusion of ER-derived preperoxisomal vesicles.
Introduction
For many years, peroxisomes were thought to be autonomous organelles that multiply by growth and division and that import membrane and matrix proteins posttranslationally from the cytosol (Lazarow, 2003). Most peroxisomal matrix proteins contain a C-terminal peroxisomal targeting signal type 1 (PTS1; Gould et al., 1987). PTS1-containing proteins (cargo) are recognized in the cytosol by a soluble receptor (Pex5), which delivers its cargo by binding the docking complex (Pex13/14/17) on the peroxisomal membrane (Otera et al., 2002;Agne et al., 2003). The receptor and its cargo dissociate, and the receptor is recycled to the cytosol (Liu et al., 2012). Recycling requires the receptor to be monoubiquitinated (Platta et al., 2007;Okumoto et al., 2011) by the RING finger complex (Pex2/10/12; Williams et al., 2008;Platta et al., 2009) and extracted from the peroxisomal membrane by the AAA+ ATPases Pex1 and Pex6 (Platta et al., 2004, 2005;Miyata and Fujiki, 2005). The docking complex and RING finger complex are physically linked via Pex8 and together form the importomer (Agne et al., 2003). After deubiquitination, the receptor is ready for another round of import (Debelyy et al., 2011;Miyata et al., 2012).
Targeting and insertion of peroxisomal membrane proteins (PMPs) does not require the machinery used for matrix protein import. Targeting of most PMPs (class 1) depends on the predominantly cytoplasmic Pex19, which has a chaperone function that helps it to function as a targeting signal receptor; Pex19 binds targeting signals in newly synthesized PMPs and delivers them to the peroxisomal membrane by docking onto Pex3 (Sacksteder et al., 2000;Fang et al., 2004;Rottensteiner et al., 2004;Pinto et al., 2006;Yagita et al., 2013;Chen et al., 2014). Some PMPs (class 2), including Pex3, contain targeting signals that are not recognized by Pex19, and these proteins follow an alternative route to peroxisomes (Hoepfner et al., 2005;Tam et al., 2005;Kim et al., 2006;Matsuzaki and Fujiki, 2008;Halbach et al., 2009;Fakieh et al., 2013;Knoops et al., 2014).
Most yeast mutants that lack functional peroxisomes (i.e., that are unable to import PTS1 proteins) contain peroxisomal membranes. However, two mutants, pex3 and pex19, appear to lack peroxisomal membranes altogether (Hettema et al., 2000;Koek et al., 2007). Upon complementation of these mutants, peroxisomes form from the ER ( Fig. 1; Hoepfner et al., 2005;Tam et al., 2005). This process was visualized by inducing the expression of Pex3-GFP in Saccharomyces cerevisiae pex3 cells. Pex3 was first observed in ER-associated puncta, which subsequently dissociated from the ER and matured into peroxisomes (Hoepfner et al., 2005). Since then, the involvement of the ER in de novo peroxisome formation has been confirmed in various experimental setups (Haan et al., 2006;Toro et al., 2009).
Two models for peroxisome multiplication in wild-type (WT) yeast cells have been proposed (Fig. 1). In the first, peroxisomes multiply predominantly by growth and division, with the ER providing membrane lipids and a subset of PMPs, including Pex3 and Pex22 (Motley and Hettema, 2007;Halbach et al., 2009;Hettema and Motley, 2009;Nuttall et al., 2011;Fakieh et al., 2013), via vesicles that fuse with existing peroxisomes.
An alternative model postulates that all PMPs insert first into the ER (van der Zand et al., 2010), where docking complex proteins (Pex13/14) are sorted away from Pex11 and RING finger complex proteins (Pex2/10/12) before the exit of these complexes in distinct vesicles (Fig. 1;van der Zand et al., 2012). Heterotypic vesicle fusion is proposed to result in the formation of an active translocon, after which the import of matrix proteins can occur (van der Zand et al., 2012). According to this model, vesicle fusion requires the AAA+ ATPases Pex1 and Pex6 and gives rise to a continuous stream of new peroxisomes in WT cells that add to the existing population, as well as being the mechanism of peroxisome formation in cells lacking peroxisomes (van der Zand et al., 2012;Tabak et al., 2013;van der Zand and Tabak, 2013). Dynamin-related proteins (DRPs) are proposed to act after the Pex1/Pex6-mediated vesicle fusion event. Pex1 and Pex6 have also been suggested to mediate membrane fusion reactions of preperoxisomal structures during the maturation of peroxisomes in Yarrowia lipolytica.

A recent model for peroxisome biogenesis postulates that peroxisomes form de novo continuously in wild-type cells by heterotypic fusion of endoplasmic reticulum-derived vesicles containing distinct sets of peroxisomal membrane proteins. This model proposes a role in vesicle fusion for the Pex1/Pex6 complex, which has an established role in matrix protein import. The growth and division model proposes that peroxisomes derive from existing peroxisomes. We tested these models by reexamining the role of Pex1/Pex6 and dynamin-related proteins in peroxisome biogenesis. We found that induced depletion of Pex1 blocks the import of matrix proteins but does not affect membrane protein delivery to peroxisomes; markers for the previously reported distinct vesicles colocalize in pex1 and pex6 cells; peroxisomes undergo continued growth if fission is blocked. Our data are compatible with the established primary role of the Pex1/Pex6 complex in matrix protein import and show that peroxisomes in Saccharomyces cerevisiae multiply mainly by growth and division.
Studies in plants, yeast, and mammals have revealed that peroxisomes do not fuse homotypically (Arimura et al., 2004;Motley and Hettema, 2007;Bonekamp et al., 2012). However, both models described above require delivery of membrane material (lipids and proteins). Whereas the vesicle fusion model proposes heterotypic fusion of distinct ER-derived vesicles, the growth and division model proposes that ER-derived vesicles fuse with peroxisomes.
In this study, we reexamined the role of Pex1, Pex6, and the DRPs Vps1 and Dnm1 in peroxisome biogenesis. We found that depletion of Pex1 rapidly blocks matrix protein import but does not affect membrane protein delivery to peroxisomes. We show by genetic analysis that peroxisomal membranes are not maintained by a linear pathway whereby Pex1 acts upstream of DRPs and that reintroduction of peroxisomal membranes does not require Pex1. We find markers previously reported to be present in distinct vesicles localize to the same membranes in pex1 and pex6 cells. We show that peroxisomes undergo continued growth if fission is blocked and do not form de novo if peroxisomes are already present. These data support a model whereby peroxisomes multiply mainly by growth and division, and whereby Pex1/Pex6 has a direct role in matrix protein import and not in PMP biogenesis.
To understand the discrepancy between our conclusions and those of van der Zand et al. (2012), we replicated their experimental (bimolecular fluorescence complementation [BiFC]) setup. We found an increased turnover of peroxisomal membranes in pex1 and pex6 cells caused by pexophagy, and BiFC-positive peroxisomes divide asymmetrically. We discuss the implications of these new findings.

Figure 1. Schematic representation of models for peroxisome multiplication in S. cerevisiae. The vesicle fusion model proposes all PMPs traffic via the ER and exit in distinct vesicles containing Pex11 and RING finger proteins (Pex2/10/12; green) or docking complex proteins (Pex13/14/17; red). Heterotypic fusion between these vesicles requires the Pex1/6 complex and results in an intermediate compartment (yellow) in which the importomer is fully functional and matrix protein import commences. Peroxisomes form continuously, regardless of whether peroxisomes are already present. According to the growth and division model, preexisting peroxisomes receive newly synthesized membrane and matrix proteins and multiply by DRP-dependent fission. Pex1 and Pex6 are required for matrix protein import by the recycling of the PTS receptors. Only a subset of membrane proteins traffic via the ER (cyan), the remainder being inserted directly into peroxisomes (black). Peroxisomes form de novo only if no peroxisomes are present (reintroduction of peroxisomes): Pex3 localizes first to ER-associated puncta, which subsequently lose ER association (cyan) and acquire other PMPs (yellow), eventually importing matrix (PTS1 containing) proteins (black). Once a cell has formed peroxisomes de novo, they continue to multiply by growth and division.
Peroxisomes grow in the absence of DRP-dependent fission
According to the growth and division model, DRPs are required for fission of peroxisomes that grow as they continue to receive PMPs (and other membrane constituents). To test this, we forced cells lacking DRPs to form peroxisomes de novo (Fig. 2). We replaced the PEX19 promoter with the GAL1 promoter (in WT and vps1/dnm1 backgrounds) so that de novo formation is driven by conditional Pex19 expression. When cells are grown on glucose, Pex19 is not expressed, and the peroxisomal matrix marker HcRed-PTS1 is cytosolic. Pex19 expression is induced by switching cells to galactose medium. Import of HcRed-PTS1 first becomes detectable after ∼3.5 h on this carbon source, the minimum time required to form peroxisomes de novo (Hoepfner et al., 2005;Tam et al., 2005;Motley and Hettema, 2007). At the early time points of de novo formation, both strains form multiple small peroxisomes per cell. The cultures were kept under conditions of exponential growth, with images being captured at intervals (shown for 4.5 h onwards). The frequency of vps1/dnm1 cells with more than four peroxisomes decreased with time. After 16 h, >80% of vps1/dnm1 cells contained a single enlarged peroxisome. That this reduction in peroxisome number is not a consequence of fusion of peroxisomes in vps1/dnm1 cells is shown in Video 1, Fig. 2 C, and Fig. S1. In the video, we show that the newly formed peroxisomes in vps1/dnm1 cells segregate until there is just one peroxisome, which becomes elongated as both mother and daughter try to inherit it. In Fig. S1, we pulse labeled peroxisomes in Matα or MatA cells with HcRed-PTS1 or GFP-PTS1, respectively. After mating, the red and green peroxisomes remained separate, in contrast to mitochondria, which fuse readily ( Fig. S1; Nunnari et al., 1997). We have previously shown that peroxisomes in vps1/dnm1 cells do not fuse (Motley et al., 2008). We conclude that the reduction of peroxisome number in vps1/dnm1 cells is caused by dilution of peroxisomes by segregation in the absence of fission. We have tested a variety of PMPs (Pex3, Pex11, and Pex13-GFP) in this de novo assay, and as for the PTS1 protein, the number of puncta decreases until there is just a single peroxisome per cell in the vps1/dnm1 background at the later time points (Fig. S2). We conclude that peroxisomes in dividing vps1/dnm1 cells, once formed, proceed to grow into enlarged structures. The presence of a single enlarged peroxisome per vps1/dnm1 cell strongly suggests that new peroxisomes do not form if a peroxisome is already present and is in line with our previous observations (Motley and Hettema, 2007).
Genetic analysis shows Pex1 does not act upstream of DRPs in peroxisomal membrane biogenesis
The vesicle fusion model proposes a role for Pex1/6-mediated vesicle fusion before fission of peroxisomes by DRPs (Fig. 1). To test this, we determined the number of peroxisomal membrane structures in various mutant backgrounds (Fig. 3). Because pex1 cells are deficient in matrix protein import, we used the peroxisomal membrane markers Pex11- and Pex13-GFP.
pex1 cells have a reduced number of peroxisomal membrane puncta (>70% have one to three puncta; Fig. 3) caused by increased turnover (Nuttall et al., 2014). Peroxisomal membrane structures are more abundant in pex1 cells lacking the pexophagy receptor Atg36 (>90% of pex1/atg36 cells have four or more fluorescent puncta; Motley et al., 2012). Cells with quadruple deletion of vps1/dnm1/pex1/atg36 have strongly reduced numbers of membrane structures (i.e., their phenotype resembles that of vps1/dnm1 and not pex1/atg36 cells; Fig. 3). This cannot be explained by a linear model whereby Pex1 acts upstream of DRPs.
Pex1 is not required for de novo formation of peroxisomal membranes
We tested the role of Pex1 in the formation of peroxisomal membranes using an approach similar to that described above for Fig. 2, in which we forced cells to form peroxisomes de novo by replacing the PEX19 promoter with the GAL1 promoter. We used Pex13-GFP and Pex11-monomeric RFP (mRFP) as markers for the distinct vesicles (van der Zand et al., 2012), which should persist in the absence of Pex1. Both PMPs show very faint signals at early time points, as PMPs are unstable in the absence of peroxisomes (Hettema et al., 2000), with Pex11-mRFP faintly labeling a tubular network and Pex13-GFP labeling puncta (Fig. 4 A). In pex19 cells, these puncta also contain Pex3 (Fig. 4 C). Pex3-GFP has previously been seen in the peroxisomal ER (pER) in cells forming peroxisomes de novo (Hoepfner et al., 2005; Tam et al., 2005). The tubular network faintly labeled by Pex11-mRFP in peroxisome-deficient cells colocalizes with MitoTracker (Fig. 4 B). After 3 h of growth on galactose medium, Pex11-mRFP becomes detectable in the Pex13-GFP puncta in both the presence and the absence of Pex1. Therefore, during de novo peroxisome formation (peroxisome reintroduction), peroxisomal membrane structures containing both Pex11-mRFP and Pex13-GFP are formed independently of Pex1.
Induced degradation of Pex1 confirms its role in matrix protein import
We depleted Pex1 using an auxin-inducible degron tag (Nishimura et al., 2009; Nuttall et al., 2014). Degron-tagged Pex1 is undetectable 60 min after the addition of auxin (Fig. 5 A). We induced fluorescent reporter proteins after Pex1 depletion and examined their localization. 90 min after auxin addition, newly synthesized PMPs were transported to preexisting peroxisomes (labeled with HcRed), whereas the import of newly synthesized GFP-PTS1 was blocked. This shows that PMPs still reach peroxisomes in Pex1-depleted cells and are not trapped in the ER or a preperoxisomal compartment. A role for Pex1 and Pex6 in PTS1 import has been well documented (Miyata and Fujiki, 2005; Platta et al., 2005) and is confirmed by our depletion experiment. We noticed occasional PMP-GFP puncta that did not contain detectable matrix marker in both WT and Pex1-depleted cells (Fig. 5 C). We think this arises because of asymmetric fission of peroxisomes (see Fig. 6).
Colocalization of PMPs in pex1 and pex6 cells
The experiments in Fig. 4 indicate that Pex1 is not required for de novo formation of peroxisomal membranes or for PMPs to reach existing peroxisomes. To investigate this further, we quantified colocalization of Pex11-mRFP and Pex13-GFP by comparing the coordinates of the center of each fluorescent spot in pex1, pex6, pex1/pex6, and WT cells (see Materials and methods subsection Image acquisition and processing; Fig. 6). We found that the absence of Pex1 or Pex6, or both, does not significantly decrease the degree of colocalization of these markers (Fig. 6, A and B), with the median distance between them being 113, 113, 160, and 160 nm for pex1, pex6, pex1/pex6, and WT, respectively. This is below the resolution of our setup.
Furthermore, we show that Pex11-mRFP and Pex13-GFP clearly colocalize in the extended membranes of vps1/dnm1/pex1/atg36 cells (i.e., the absence of Pex1 does not prevent their trafficking to the same membrane; Fig. 6 C) and that a short (15 min) pulse of Pex11-GFP reaches Pex13-mCherry-labeled peroxisomal membranes in pex1/pex6 cells (Fig. 6 D). Homogenates of WT, pex1, pex6, and pex1/pex6 cells were analyzed by flotation equilibrium density gradient centrifugation (Fig. 7). Tagged Pex11 and Pex13 cofractionated with each other in all strains analyzed. Endogenous Pex13 floats to the same density as Pex13-GFP and Pex11-GFP in transformed and untransformed cells, indicating that tagging Pex11 and Pex13 does not affect the fractionation of peroxisomal membranes. The distribution of PMPs within the gradients differs between WT cells and mutants: the bulk of the PMPs (tagged or untagged) are present in fractions 4 + 5 in WT cells and in fractions 4-7 in pex1, pex6, and pex1/pex6 cells (Fig. 7). This shift of peroxisomal membranes (ghosts) to lower density fractions has been reported before (Santos et al., 1988; Gärtner et al., 1991; van Roermund et al., 1991; Motley et al., 1994; Hettema et al., 2000). Furthermore, differential centrifugation experiments (Fig. 7) show that the 25,000 g pellet contains the bulk of Pex11-GFP, Pex13-GFP, and Pex13 in both WT and pex1/pex6 cells, whereas the bulk of the cytoplasmic marker (Pgk1) is present in the 25,000 g supernatant. A small but reproducible amount of Pex13, Pex11-GFP, and Pex13-GFP was observed in the supernatant fraction of pex1/pex6 cells. The subcellular fractionation experiments support the fluorescence microscopy observations that Pex11 and Pex13 are present in the same membrane structures in cells lacking Pex1 and/or Pex6.
As our conclusions contrast with those of a previous study (van der Zand et al., 2012), we replicated their experimental setup by reconstructing some of their strains and BiFC Venus tags. We generated identical strains (same tags, linker length, and parental strains) and used the same experimental conditions. To detect interactions between peroxins as a measure of importomer assembly, van der Zand et al. (2012) used BiFC after mating haploid yeast tagged in the genome with Venus GFP N- and C-terminal halves (VN and VC). BiFC occurs when nonfluorescent GFP halves are brought together by interaction between the proteins they are fused to. The presence of a signal was interpreted as the presence of complex formation (i.e., that the proteins are present in the same membrane structure). The absence of a signal was interpreted as the proteins being present in distinct membrane structures (van der Zand et al., 2012).
As markers for the docking complex, we used Pex13-VN and Pex14-VC, and for the RING finger complex, we used Pex2-VN. As shown in Fig. S3, peroxisomes formed within 3 h of mating pex1 with pex6 cells, but the BiFC signals took longer to develop. Similar to the observations of van der Zand et al. (2012), 24 h after mating, BiFC was evident in all combinations of Pex13-VN and Pex14-VC (pex1 × 6, 6 × 6, and 1 × 1), and a weak signal between Pex2-VN and Pex14-VC was observed in the pex1 × pex6 mating combination, although, in our hands, only in a minority (<10%) of mating cells. This signal is very weak and varies between experiments. The lack of Pex2-VN/Pex14-VC BiFC in pex1 × pex1 and pex6 × pex6 mating combinations was previously interpreted as an indication that Pex2 and Pex14 are present in distinct vesicles. However, the lack of an already faint signal could also be a consequence of the enhanced pexophagy that occurs in pex1 and pex6 cells (Nuttall et al., 2014). Furthermore, the significance of BiFC signals forming from constitutively expressed tags after such long incubations (24-72 h; van der Zand et al., 2012) in cells that have peroxisomes after 3 h is not clear. We therefore tested for these interactions in haploid pex1 and pex6 cells. Fig. 8 A shows BiFC between Pex2-VN and Pex14-VC in haploid cells. This BiFC is readily detectable in WT cells, but interestingly, we also see a signal in pex1 and pex6 cells, although only in a minority of cells (∼25%). Double labeling with the PMP marker Ant1-mCherry stains the vacuole in many pex1 and pex6 cells, which we would expect as a result of enhanced pexophagy. When the pexophagy receptor is disrupted in these haploid strains, vacuolar labeling of Ant1 is prevented and Pex14-VC is stabilized to near-WT levels (by Western blotting; Fig. 8 B). BiFC between Pex2-VN and Pex14-VC now becomes evident in most pex1/atg36 and pex6/atg36 cells. We conclude that the weak signal of Pex2/Pex14 BiFC in pex1 and pex6 cells is a consequence of increased peroxisome turnover.
To control for specificity, we tested for BiFC in haploid cells between all combinations of Pex2-VN, Pex14-VC, and the mitochondrial outer membrane proteins Tom20-VN and Tom70-VC. As shown in Fig. 8 C, the Tom20/Tom70 pair shows mitochondrial BiFC, and as expected, the Tom70/Pex2 pair is negative. Surprisingly, however, the Pex14/Tom20 pair gives a signal. Furthermore, this BiFC is stronger than that between Pex14 and Pex2. Whether this signal is meaningful is not clear. BiFC signals between proteins in different membranes have been previously reported (Pu et al., 2011; Mattiazzi Ušaj et al., 2015). The strength of the BiFC signal correlates with the level of expression of the proteins tested: quantification of C-terminally tagged proteins indicates the number of molecules per cell is 339 for Pex2, 2,570 for Pex14, 5,680 for Tom20, and 45,300 for Tom70 (Ghaemmaghami et al., 2003). The strength of the BiFC signal may therefore be influenced more by the relative abundance of the proteins tested than by an interaction between them or even their presence in the same membrane.
Figure 3. Quantitation of membrane structures in pex1Δ, pex1Δ/atg36Δ, vps1Δ/dnm1Δ, and vps1Δ/dnm1Δ/pex1Δ/atg36Δ cells. Mutants expressing Pex11- or Pex13-GFP from their endogenous promoters (on plasmids) were imaged by epifluorescence microscopy. Cells were kept in a log phase on glucose-containing medium for 18 h before imaging. The doubling time of WT, vps1Δ/dnm1Δ, pex1Δ/atg36Δ, and vps1Δ/dnm1Δ/pex1Δ/atg36Δ strains under this condition was 101 min, 100 min, 106 min, and 115 min, respectively. The number of Pex13-GFP puncta per cell was determined for at least 200 cells per strain. Bar, 5 µm.
Figure 4. Pex1 is not required for de novo formation of peroxisomal membranes. (A) Galactose-controllable PEX19 strains expressing Pex13-GFP and Pex11-mRFP from their endogenous promoters (on plasmids) were grown on raffinose (top) or galactose medium for the times indicated. The Pex11-mRFP signal was weak and was enhanced strongly to show localization. (B) WT, pex3Δ, and pex19Δ cells expressing Pex11-mRFP from its endogenous promoter on plasmid were grown to log phase and stained with MitoTracker green. (C) WT and pex19Δ cells expressing Pex13-GFP and Pex3-mCherry (Pex3-mCH) from endogenous promoters (on plasmids) were grown to log phase. Bars, 5 µm.
Figure 5. Short-term depletion of Pex1 affects matrix protein import but not PMP transport. (A) Western blot analysis of Pex1-HA-AID levels in cells 0-90 min after the addition of 0.5-mM auxin. (B) Pex1-HA-AID cells expressing the peroxisomal matrix marker HcRed-PTS1 were transformed with various PMP-GFP expression plasmids driven by the GAL1 promoter. Cells were grown to log phase on raffinose medium, and auxin was added to 0.5 mM. 45 min later, cells were resuspended in a galactose medium + auxin for 15 min (Pex3, Pex11, and Pex13) or 45 min (Pex10) to induce PMP-GFP expression and imaged by epifluorescence microscopy. (C) WT cells expressing HcRed-PTS1 constitutively and Pex11- or Pex13-GFP from the GAL1 promoter were grown as described in B in the presence of auxin. Bars, 5 µm.
Asymmetric segregation of BiFC signal with matrix marker
In agreement with van der Zand et al. (2012), we noticed that some peroxisomes lack a BiFC signal when mating WT VC- and VN-tagged strains (Fig. 9 A and their Figs. 1 and 6). Although the formation of BiFC puncta without matrix content may reflect de novo formation (van der Zand et al., 2012; and their Fig. 6), an alternative explanation is that BiFC and peroxisomal matrix marker do not segregate equally. In support of this, red-only peroxisomes are evident in haploid WT cells expressing VN and VC tags and HcRed-PTS1 (Fig. 9 B), which is striking because we and others have shown that newly synthesized PMPs traffic to existing peroxisomes (Motley and Hettema, 2007; Fakieh et al., 2013; Menendez-Benito et al., 2013). If unequal segregation of BiFC and red-PTS1 content explains the occurrence of red-only peroxisomes, we would expect such a segregation defect to be more evident on the elongated peroxisomes of vps1 cells, and this is what we saw: BiFC signals appear as puncta on the extended peroxisomes of haploid (Fig. 9, C and D) and diploid (Fig. 9 E) vps1 cells. In contrast, Pex13-GFP and Pex14-GFP label the whole peroxisomal structure (Fig. 9 F), although high magnification reveals that this labeling is not evenly spread over the peroxisomal membrane, particularly in the case of Pex14-GFP.
Figure 6. Colocalization of Pex11 and Pex13 in pex1 and pex6 cells. (A) pex1, pex6, pex1/pex6, and WT cells expressing Pex11-RFP and Pex13-GFP (from endogenous promoters on plasmids) were imaged after 3-h growth in log phase. (B) The distance between the center of each fluorescent Pex11-RFP and Pex13-GFP spot was calculated as described in the Materials and methods subsection Image acquisition and processing and is depicted as a box plot. n > 400 spots for each strain. The pale green line across each box indicates the median. (C) vps1/dnm1/pex1/atg36 cells expressing Pex11-RFP and Pex13-GFP (from endogenous promoters on plasmid) were imaged after 18 h in exponential growth phase. (D) WT and pex1/pex6 cells expressing plasmid-based Pex13-mCherry constitutively and Pex11-GFP from the GAL1 promoter were given a short pulse of Pex11-GFP as indicated. Bars, 5 µm.
We imaged diploid vps1 cells expressing Pex13-VN and Pex14-VC, as this BiFC pair gave the strongest signal and allowed for time-lapse microscopy over several cell divisions (Fig. 9 G and Video 2). The time-lapse analysis illustrates that the BiFC signal frequently fails to segregate when the HcRed-PTS1-labeled peroxisome divides on cytokinesis. We conclude that red-only peroxisomes arise because the BiFC complex often fails to segregate when peroxisomes divide.
Discussion
By studying the role of DRPs and AAA+ ATPases, we have tested two models of peroxisome multiplication. The first model proposes that peroxisomes multiply by growth and division. According to this model, peroxisome growth is the result of delivery from the ER of vesicles carrying lipids and a subset of PMPs. Other PMPs and matrix proteins are imported directly into peroxisomes, and division is mediated by DRPs. When peroxisomes are absent, they can be reintroduced by de novo formation from the ER (Fig. 1).
The second model proposes that peroxisomes form de novo regardless of whether peroxisomes are already present.
All PMPs enter the ER and exit in distinct preperoxisomal vesicles that undergo heterotypic fusion, bringing together components of the import machinery. Fusion is mediated by the AAA ATPases Pex1 and Pex6. After fusion and assembly of the importomer, matrix protein import commences, resulting in a new peroxisome. This linear maturation model ends with fission of peroxisomes by DRPs (Fig. 1; van der Zand et al., 2012; Tabak et al., 2013; van der Zand and Tabak, 2013).
The results reported here indicate that peroxisomes multiply mainly by growth and division in S. cerevisiae. We found that peroxisomes can grow in size and receive newly synthesized PMPs, and that membrane growth and delivery of PMPs occur independently of Pex1 and Pex6. Our data support previous findings of the direct involvement of Pex1 and Pex6 in matrix protein import. We do not find evidence to support the proposal that Pex1 and Pex6 are required for the formation of new peroxisomal membranes by fusion of ER-derived vesicles.
Most cells lacking the DRPs Vps1 and Dnm1 contain a single peroxisome. This phenotype is difficult to explain if new peroxisomes form continuously, unless these were to form de novo as large structures that neither divide nor segregate between mother and daughter cell during cell division. Our results show this is not the case: multiple small peroxisomes appear initially in vps1/dnm1 cells forced (by conditional Pex19 expression) to form peroxisomes de novo. These new peroxisomes increase in size but decrease in number during several rounds of cell division as they distribute between mother and daughter cell until a single peroxisome per cell is observed. Upon cytokinesis, this single peroxisome is split into two, allowing segregation between mother and daughter cell. The single elongated peroxisome phenotype of vps1/dnm1 cells is stably inherited over many cell generations (Video 2; Hoepfner et al., 2001; Kuravi et al., 2006; Motley and Hettema, 2007), implying that peroxisomes continue to receive new membrane and matrix constituents. This is consistent with pulse-chase experiments showing that peroxisomal markers segregate between mother and daughter peroxisomes on fission (Motley and Hettema, 2007; Knoblach et al., 2013; Menendez-Benito et al., 2013). Not all proteins distribute equally on peroxisome fission, however (Cepińska et al., 2011; Knoblach et al., 2013), and this is observed to some extent in vps1/dnm1 cells for Pex14-GFP (Fig. 9 F). The asymmetric distribution in the membrane of Pex13/Pex14 BiFC is much greater than that of Pex13-GFP or Pex14-GFP (Fig. 9, E and F). Asymmetric fission most likely underlies the occasional occurrence of PMP-GFP puncta that lack detectable matrix content (Figs. 5 and 6), as PMP-GFP puncta without content are not observed in vps1/dnm1 cells (Fig. 9 F). Distinct peroxisome populations are generated by asymmetric distribution of PMPs and content followed by membrane fission during Woronin body biogenesis (Managadze et al., 2007; Liu et al., 2008). Asymmetric segregation also underlies the removal of intraorganellar aggregates from peroxisomes (Manivannan et al., 2013). In accordance with van der Zand et al. (2012), we found that BiFC signals fail to label all peroxisomes. However, we show that this is not a consequence of de novo formation of peroxisomes, but of asymmetric fission of BiFC-positive peroxisomes.
Figure 7. Cofractionation of Pex11-GFP with Pex13 and Pex13-GFP in WT, pex1, pex6, and pex1/pex6 cells. (A and B) Homogenates (H; 800 g postnuclear supernatant) adjusted to 60% sucrose were separated by flotation analysis through sucrose equilibrium density gradient centrifugation. Fractions were collected from the bottom, and equal volumes were analyzed by Western blotting. Homogenates were prepared from glucose-grown cells of the strains as indicated. Cytosolic (Pgk1; A and B) and endosomal (Pep12; A) markers were included as the control for separation of membranes from cytosol. Pex13 is detected as a doublet, as Pex13 is prone to partial breakdown by proteolysis during subcellular fractionation (Elgersma et al., 1996). The samples in B were TCA precipitated and concentrated fourfold, as Pex13-GFP signals were weak. X and Y indicate control samples containing a P2 fraction of WT or pex1/pex6 cells expressing Pex11-GFP only and untransformed WT or pex1/pex6 cells. Red arrowheads indicate doublets of Pex13-GFP. Black arrowheads indicate Pex11-GFP. (C) Homogenates from WT and pex1/pex6 cells transformed with either Pex11-GFP or Pex13-GFP were separated by centrifugation first at 2,500 g and then at 25,000 g into a 2,500 g pellet (P1), a 25,000 g pellet (P2), and a 25,000 g supernatant fraction (S). Equivalent portions of each fraction were analyzed by Western blotting. Black lines indicate that intervening lanes have been spliced out.
Figure 8. (A) Cells were transformed with a plasmid expressing Ant1-mCherry and grown on a glucose-containing medium to allow BiFC signals to develop. Mean number of Ant1-mCherry puncta per cell ± SD were as follows: WT, 2.5 ± 1.6; atg36, 3.5 ± 2.3; pex1, 0.55 ± 0.6; pex1/atg36, 1.4 ± 0.7; pex6, 0.6 ± 0.6; and pex6/atg36, 1.6 ± 0.9. Percentage of cells showing colocalization between BiFC and Ant1 were as follows: WT, 84%; atg36, 89%; pex1, 95%; pex1/atg36, 93%; pex6, 98%; and pex6/atg36, 90%. At least 100 cells were analyzed. (B) Western blot showing Pex14-VC in strains as indicated. Pgk1 was used as a loading control. (C) BiFC between mitochondrial and peroxisomal proteins in WT and pex1 cells as indicated. A single slice of each sample was captured so that the fluorescence intensity in the images reflects the BiFC signal strength in the samples. Bars, 5 µm.
A recent modeling study (Mukherji and O'Shea, 2014) proposed that peroxisomes form mainly de novo when S. cerevisiae cells are grown on glucose, whereas under conditions of peroxisome proliferation, the DRPs contribute to multiplication by peroxisome fission. In that study, the effect of disruption of DRPs on peroxisome abundance in cells grown on glucose was not tested. We (Motley and Hettema, 2007; Motley et al., 2008) and others (Kuravi et al., 2006) have shown that peroxisome abundance is severely reduced in DRP-deficient cells grown on glucose (Fig. S4). Therefore, at least under our experimental conditions, peroxisomes multiply mainly by growth and division. In plants and mammals, there is support for the multiplication of peroxisomes by growth and division (Huybrechts et al., 2009; Delille et al., 2010; Barton et al., 2013), although both forms of multiplication have been reported to occur simultaneously in mammalian cells (Kim et al., 2006).
Figure 9 (legend, continued). HcRed-PTS1 was expressed from a plasmid. VN, VC, and GFP tags were genomically integrated (i.e., a WT copy in addition to the tagged protein was present in diploid cells). Red arrowheads indicate peroxisomes or parts of peroxisomes without a BiFC signal. White arrowheads indicate punctate BiFC signals colocalizing with the elongated peroxisomes. White arrowheads in G indicate a peroxisome with BiFC signal from which a peroxisome without BiFC signal splits off (red arrowheads). Bars, 5 µm.
Although we have not detected de novo peroxisome formation in S. cerevisiae cells containing peroxisomes, we may have missed low levels of this. There may be conditions whereby de novo formation is induced in WT cells. A complex of ER reticulons and Pex30 has been reported to form a site of close contact between the ER and peroxisomes. The absence of this complex enhances the rate at which new peroxisomes form during peroxisome reintroduction, suggesting a link between the regulation of de novo peroxisome formation and ER morphology (Yan et al., 2007; David et al., 2013).
That the ER plays a central role in peroxisome biogenesis is supported by studies in many different organisms (Dimitrov et al., 2013; Agrawal and Subramani, 2015; Kim and Hettema, 2015). In S. cerevisiae, there is strong support for a role for the ER during reintroduction of peroxisomes in mutants temporarily lacking them (Hoepfner et al., 2005; Kragt et al., 2005; Tam et al., 2005; Haan et al., 2006). When Pex3-GFP expression is induced in pex3-deficient yeast cells, new peroxisomes form from a subdomain of the ER, the pER. In S. cerevisiae, the pER was reported to be ER associated (Bascom et al., 2003; Hoepfner et al., 2005), although no continuity with the ER was found in Hansenula polymorpha (Knoops et al., 2014). Other PMPs localized to this compartment in pex3 or pex19 cells, including Pex3, Pex13, Pex14, and Pex22 (Figs. 4 and 6; Faber et al., 2002; Bascom et al., 2003; Hoepfner et al., 2005; Kragt et al., 2005; Tam et al., 2005; Haan et al., 2006; Kim et al., 2006; Toro et al., 2009; van der Zand et al., 2010; Fakieh et al., 2013; Knoops et al., 2014). Some PMPs do not enter the pER but insert late when new peroxisomes are formed in H. polymorpha (Knoops et al., 2014). We observe this for Pex11-GFP in S. cerevisiae. In the absence of peroxisomes, Pex11 is unstable (Hettema et al., 2000; Motley et al., 2012), and the low levels of Pex11-GFP that remain mislocalize to mitochondria (Fig. 4; Mattiazzi Ušaj et al., 2015). Upon peroxisome reintroduction, Pex11-GFP appears in puncta with Pex13 close to the time that GFP-PTS1 import commences (3-4 h; Figs. 2 and 4). These observations are compatible with a previously proposed model of de novo formation from the ER (during peroxisome reintroduction) by a process of maturation: a subset of PMPs traffic via the ER to the pER, preperoxisomes form from this specialized part of the ER, other PMPs are inserted, and a final maturation step is the import of matrix proteins (Hoepfner et al., 2005; Knoops et al., 2014).
In cells multiplying peroxisomes by growth and division (for example, in WT S. cerevisiae cells), we favor a model whereby peroxisomes receive lipids and a subset of PMPs via vesicular transport from the ER, whereas other PMPs are inserted into peroxisomes directly (Fig. 1). This model is supported by the following findings: Pex15 appended with a glycosylation site is fully glycosylated in WT cells (Lam et al., 2010); pER-trapped Pex3 can be transported to existing peroxisomes (Motley and Hettema, 2007); and the signals for Pex3 insertion into the ER, its sorting within the ER, and its subsequent sorting from the pER to peroxisomes are required both during reintroduction of peroxisomes (de novo formation from the ER) and for transport of Pex3 to peroxisomes in WT cells (Fakieh et al., 2013). Many PMPs are recognized by Pex19 and could, after docking onto Pex3, be directly inserted into the peroxisomal membrane, as has been shown to occur in mammalian cells and Neurospora crassa (Pinto et al., 2006; Matsuzaki and Fujiki, 2008; Yagita et al., 2013; Chen et al., 2014). Besides the direct route to peroxisomes, there is also a PMP trafficking route via the ER in mammals and plants (Kim et al., 2006; Karnik and Trelease, 2007; Toro et al., 2009; Aranovich et al., 2014).
The AAA ATPases Pex1 and Pex6 are important for matrix protein import. A series of studies have uncovered a role for these proteins in the recycling of the PTS1 and PTS2 targeting receptors (Platta et al., 2004, 2005; Miyata and Fujiki, 2005; Debelyy et al., 2011; Miyata et al., 2012). Additional roles for these proteins have been proposed, including maturation of precursor peroxisomes in Y. lipolytica and heterotypic fusion of distinct ER-derived vesicles in S. cerevisiae (van der Zand et al., 2012). Our data are compatible with a role for Pex1 and Pex6 in matrix protein import. We did not find evidence to support a role for these proteins in membrane protein biogenesis as proposed in the vesicle fusion model (van der Zand et al., 2012). This model is based in part on the observation that markers for the distinct ER exit routes label separate structures in pex1 and pex6 cells. We could not reproduce this observation (Figs. 4, 6, and 7). We found that these markers colocalize and cofractionate in WT, pex1, pex6, and pex1/pex6 cells. We expressed bright fluorescent proteins and fixed cells to reduce exposure times and movement, which may explain the discrepancy.
BiFC studies were used to show that Pex1 and Pex6 are required for assembly of the importomer (van der Zand et al., 2012). However, increased pexophagy in pex1 and pex6 cells (Nuttall et al., 2014) may have precluded detection of BiFC: we show, by blocking pexophagy, that the importomer subunits Pex2 and Pex14 do give BiFC in pex1 and pex6 cells. This is in line with previous studies indicating that the importomer assembles in pex1 and pex6 cells (Agne et al., 2003; Kiel et al., 2004; Platta et al., 2004; Rosenkranz et al., 2006; Hensel et al., 2011). We show that depletion of Pex1 blocks matrix protein import (despite the presence of importomer in the membrane) but does not affect PMP delivery to peroxisomes. Furthermore, we find no role for Pex1 in the assembly of peroxisomal membranes during their reintroduction into conditional pex19 cells. We conclude that, within the boundaries of our experimental framework, peroxisomes multiply mainly by growth and division, and that the AAA+ ATPases Pex1 and Pex6 are involved in matrix protein import.
Growth conditions and mating assay
Cells were grown overnight in selective glucose medium and diluted to OD 0.1 in either selective glucose medium (Figs. 3, 6 [A-C], 7, 8, 9, and S4 and Video 2) or selective raffinose medium for 4 h followed by resuspension in galactose medium (Figs. 2, 4, 5, S1, and S2 and Video 1) for the times indicated. For mating, 10^7 cells of each mating type grown to logarithmic phase on yeast peptone dextrose (YPD) were mixed, pelleted, and spotted onto a prewarmed YPD plate and incubated at 30°C for the times indicated. For each experiment, >100 (mating) cells were examined, and images are representative.
Image acquisition and processing
Cells were analyzed with a microscope (Axiovert 200M; Carl Zeiss) equipped with an Exfo X-Cite 120 excitation light source, bandpass filters (Carl Zeiss and Chroma Technology Corp.), an α Plan-Fluar 100× 1.45 NA, Plan-Apochromat 63× 1.4 NA, or A-Plan 40× 0.65 NA Ph2 objective lens (Carl Zeiss), and a digital camera (Orca ER; Hamamatsu Photonics). Image acquisition was performed using Volocity software (PerkinElmer). Fluorescence images were collected as 0.5-µm Z stacks using exposures of up to 300 ms, merged into one plane in Openlab (PerkinElmer), and processed further in Photoshop (Adobe). Brightfield images were collected in one plane and processed where necessary to highlight the circumference of the cells.
For quantitation of colocalization, images were acquired using an Axio Observer (Carl Zeiss) microscope with a 100× 1.45 NA α Plan-Fluar objective and an electron-multiplying charge-coupled device camera (EM-C2; Rolera) using ZEN software. Single slices were taken of Pex13-GFP and Pex11-mRFP. Colocalization analysis was performed computationally using a Jython script for FIJI (http://fiji.sc/Fiji). Images were processed with a fast Fourier transform bandpass filter before subtracting the background and converting to 8-bit images. Per channel, the center of each fluorescent spot was located by finding maxima in the image. The coordinates of the center of each fluorescent spot in the green channel were then compared with the coordinates of spots in the red channel, and the distance between these coordinates was calculated using a pixel size of 0.08 µm/pixel. The green and red coordinates with the minimum distance between them were recorded, and both were removed from the list of coordinates to be compared. More than 400 spots were counted in this manner.
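The original Jython/FIJI script is not reproduced here; the following is a minimal Python sketch of the spot-matching step only, re-implemented with NumPy and SciPy under the description above (the greedy pairing strategy and the 0.08 µm pixel size follow the text; the function and variable names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_spot_distances(green_xy, red_xy, um_per_px=0.08):
    """Greedily pair green and red spot centres by minimum distance.

    green_xy, red_xy: (n, 2) arrays of spot-centre pixel coordinates.
    Returns centre-to-centre distances in micrometres; each matched pair
    is removed from further comparison, as described in the text.
    """
    dist = cdist(green_xy, red_xy) * um_per_px     # all pairwise distances in um
    greens = list(range(len(green_xy)))
    reds = list(range(len(red_xy)))
    distances = []
    while greens and reds:
        sub = dist[np.ix_(greens, reds)]
        i, j = np.unravel_index(sub.argmin(), sub.shape)   # closest remaining pair
        distances.append(sub[i, j])
        del greens[i]                              # remove both spots from the pool
        del reds[j]
    return distances
```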
For the live cell imaging, images for the videos were acquired using an Axio Observer microscope with a 100× 1.45 NA α Plan-Fluar objective and an EM-C2 electron-multiplying charge-coupled device camera using ZEN software. Stacks were taken every 20 min at 27°C. Images were then processed to give an extended depth of field, manually thresholded, and a Gaussian filter with a kernel of 3 × 3 pixels was applied to remove noise using ZEN. Videos run at one frame per second. Cells were grown in a CellASIC microfluidic system in 2% galactose (Video 1) or 2% glucose (Video 2) for 14 h. Stills of Video 1, showing peroxisomes forming de novo in vps1/dnm1 cells (expressing HcRed-PTS1 from a plasmid) and segregating to daughter cells, are shown in Fig. 2 C. Stills of Video 2, showing Pex13-VN/Pex14-VC in diploid vps1 cells expressing HcRed-PTS1 from a plasmid, are shown in Fig. 9 G.
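The ZEN processing steps themselves are proprietary; as a rough, assumption-laden sketch of the pipeline described above, taking a maximum-intensity projection as a simple stand-in for the extended-depth-of-field step and an arbitrary threshold value:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def process_frame(stack, threshold):
    """stack: (z, y, x) fluorescence Z-stack for one time point."""
    edf = stack.max(axis=0)                    # crude extended depth of field via max projection
    edf = np.where(edf >= threshold, edf, 0)   # manual thresholding, as in the text
    return gaussian_filter(edf, sigma=1.0)     # small Gaussian (~3 x 3 kernel) to suppress noise
```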
Subcellular fractionation
The spheroplasts were washed twice in 1.2-M sorbitol, 5-mM MES, pH 6, 1-mM EDTA, and 1-mM KCl before resuspension in 0.65-M sorbitol, 5-mM MES, pH 6, 1-mM EDTA, and 1-mM KCl (fractionation buffer) containing 1-mM PMSF and protease inhibitor cocktail. Cell breakage was achieved by 10 strokes with a tight-fitting Dounce homogenizer. Intact cells and nuclei were removed by two centrifugation steps (800 g for 10 min). 1 ml homogenate was mixed with 3 ml of 80% sucrose in fractionation buffer. The sample was loaded on the bottom of an SW41 tube, over which a sucrose step gradient was loaded consisting of 1-ml fractions of 50, 45, 40, 35, 32.5, 30, and 25% sucrose (wt/vol). These gradients were centrifuged for 18 h at 100,000 g in an SW41 rotor at 4°C. 1-ml fractions were collected from the bottom of the tube and analyzed by SDS-PAGE and immunoblotting. All sucrose solutions were made in fractionation buffer.
3 ml homogenate was fractionated by sequential differential centrifugation, yielding a 2,500 g pellet, a 25,000 g pellet, and a 25,000 g supernatant. Pellet fractions were resuspended in 3 ml of fractionation buffer. Equivalent volumes of these fractions were analyzed by SDS-PAGE and immunoblotting.
Online supplemental material
Fig. S1 presents a mating assay revealing that peroxisomes in vps1/dnm1 cells do not fuse. Fig. S2 presents additional data using peroxisomal membrane markers showing that peroxisomes in vps1/dnm1 cells grow into elongated structures. Fig. S3 presents additional data showing that upon mating of pex1 and pex6 cells, matrix protein import is restored much faster than the development of BiFC signals. Fig. S4 shows quantitation of peroxisome number in WT, vps1, and vps1/dnm1 cells grown on glucose. Video 1 presents a time-lapse analysis of peroxisome dynamics in vps1/dnm1 cells. Video 2 presents a time-lapse analysis of asymmetric segregation of a peroxisomal membrane BiFC signal in support of Fig. 9. Table S1 shows yeast strains used in this study. Table S2 shows oligonucleotides used in this study. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.201412066/DC1. | 2017-06-30T01:17:00.886Z | 2015-12-07T00:00:00.000 | {
"year": 2015,
"sha1": "93a7c6984acd8231b2b1b690bbb8dfd4139632d9",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/211/5/1041/1370796/jcb_201412066.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "93a7c6984acd8231b2b1b690bbb8dfd4139632d9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
60442095 | pes2o/s2orc | v3-fos-license | Metabolomic and lipidomic assessment of the metabolic syndrome in Dutch middle-aged individuals reveals novel biological signatures separating health and disease
Background We aimed to identify novel metabolite and lipid signatures connected with the metabolic syndrome in a Dutch middle-aged population. Methods 115 individuals with a metabolic syndrome score ranging from 0 to 5 [50 cases of the metabolic syndrome (score ≥ 3) and 65 controls] were enrolled from the Leiden Longevity Study, and LC/GC-MS metabolomics and lipidomics profiling were performed on fasting plasma samples. Data were analysed with principal component analysis and orthogonal projections to latent structures (OPLS) to study metabolite/lipid signatures associated with the metabolic syndrome. In addition, univariate analyses were done with linear regression, adjusted for age and sex, for the study of individual metabolites/lipids in relation to the metabolic syndrome. Results Data were available on 103 metabolites and 223 lipids. In the OPLS model with the metabolic syndrome score as the Y variable, 9 metabolites were negatively correlated and 26 metabolites (mostly acylcarnitines, amino acids and keto acids) were positively correlated with the metabolic syndrome score. In addition, a total of 100 lipids (mainly triacylglycerides) were positively correlated and 10 lipids from different lipid classes were negatively correlated with the metabolic syndrome score. In the univariate analyses, the metabolic syndrome (score) was associated with multiple individual metabolites (e.g., valeryl carnitine, pyruvic acid, lactic acid, alanine) and lipids [e.g., diglyceride(34:1), diglyceride(36:2)]. Conclusion In this first study on metabolomics/lipidomics of the metabolic syndrome, we identified multiple novel metabolite and lipid signatures, from different chemical classes, that were connected to the metabolic syndrome and are of interest to cardiometabolic disease biology. Electronic supplementary material The online version of this article (10.1007/s11306-019-1484-7) contains supplementary material, which is available to authorized users.
Introduction
The metabolic syndrome is a strong risk factor for cardiovascular disease, and increases the risk of (cardiovascular) mortality (Isomaa et al. 2001;Lakka et al. 2002). The metabolic syndrome is a composite of metabolic disturbances in lipid (triglycerides and HDL cholesterol) and glucose metabolism, blood pressure regulation and being overweight (Grundy et al. 2005). The relative contribution of the different components to the diagnosis of the metabolic syndrome has changed during the past decades, owing to improved medication management and increased obesity prevalence (Afshin et al. 2017;Beltran-Sanchez et al. 2013). Importantly, four out of the five components of the metabolic syndrome (with the exception of HDL cholesterol) are causally 1 3 23 Page 2 of 13 associated with the risk of developing cardiovascular disease, as observed in Mendelian Randomization studies (Dale et al. 2017;Holmes et al. 2015;Lyall et al. 2017).
Besides the use of clinical markers, an increasing number of cohort studies use metabolomics for the discovery of disease-related diagnostic and prognostic markers, as well as for an enhanced understanding of disease aetiology. For example, in several prospective cohort studies multiple metabolites were observed to be predictive of cardiovascular disease and mortality (Fischer et al. 2014; Wurtz et al. 2015). Studies on cardiometabolic disease phenotypes, however, have generally focused on specific components of the metabolic syndrome (most notably glucose regulation and adiposity), not on the overall metabolic syndrome definition. With respect to the glucose component of the metabolic syndrome, different metabolites (e.g., glycerol, ketone bodies and branched-chain amino acids) have been identified in relation to (future) insulin resistance and incident type 2 diabetes mellitus (Mahendran et al. 2013a, b; Tillin et al. 2015; Wurtz et al. 2012b, 2013). Furthermore, the metabolite 1,5-anhydroglucitol has been identified as a novel risk factor for the development of type 2 diabetes, and a marker for short-term glycaemic control (Mook-Kanamori et al. 2014). Increased adiposity has been reported to cause changes in concentrations of multiple metabolites and lipids, including fatty acids, ketone bodies and amino acids (Wurtz et al. 2014). However, these studies generally focused on a single component of the metabolic syndrome and investigated a limited number of metabolites. To the best of our knowledge, only one study has examined the association between concentrations of several amino acids and the metabolic syndrome (Ntzouvani et al. 2017). The assessment of the heterogeneous population of metabolic syndrome patients could potentially highlight a common biochemical mechanism of importance for multiple cardiometabolic diseases.
A comprehensive approach, focusing on all components of the metabolic syndrome and including multiple metabolites and lipids from chemical classes not often investigated in epidemiological cohort studies before, is likely to provide novel insights into cardiometabolic disease biology and to facilitate the search for novel strategies for the treatment and prevention of cardiometabolic disease. In the present study we aimed to identify metabolite and lipid patterns associated with the metabolic syndrome in middle-aged individuals, as well as with the subcomponents of the metabolic syndrome, in order to increase our understanding of the underlying biochemical processes.
Study setting and design
The present study was embedded in the Leiden Longevity Study, which aims to investigate biomarkers associated with familial longevity and healthy ageing. The study design and recruitment strategy have been described in detail elsewhere (Schoenmaker et al. 2006). In short, between 2003 and 2006 a total of 421 long-lived families were recruited, without selection based on health condition or demographics. Families were included when at least two long-lived siblings were still alive and fulfilled the age criterion of being at least 89 years for men and 91 years for women. Of these long-lived families, we recruited 1671 of their offspring and 744 partners thereof as controls resembling the general Dutch population at middle age. The Leiden Longevity Study was approved by the medical ethics committee of the Leiden University Medical Center. All participants provided written informed consent.
For the present study, we used fasting blood samples collected between 2006 and 2008 from a subpopulation (N = 280) of the Leiden Longevity Study that lived in close proximity (< 45 min by car) to the research center, as we have previously described (Rozing et al. 2010). Within this subpopulation, cases of the metabolic syndrome were identified on the basis of the criteria from the Third Report of the National Cholesterol Education Program (Klose et al. 2014), which depend on 5 subcomponents (waist circumference > 102 cm in men, > 88 cm in women; triglyceride concentration ≥ 1.69 mmol/L; HDL cholesterol (HDL-C) < 1.04 mmol/L in men, < 1.29 mmol/L in women; fasting glucose ≥ 6.1 mmol/L or diagnosed diabetes; systolic blood pressure ≥ 130 mmHg, diastolic blood pressure ≥ 85 mmHg, or treated for hypertension), giving a score ranging from 0 to 5 points. Using this score, participants with a score ≥ 3 were considered as having the metabolic syndrome; the others were considered as controls without the metabolic syndrome.
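As a compact restatement of these criteria, the following Python sketch implements the scoring rule exactly as listed above (the input names and units are illustrative; glucose in mmol/L, waist in cm, blood pressure in mmHg):

```python
def metabolic_syndrome_score(sex, waist_cm, tg_mmol, hdl_mmol, glucose_mmol,
                             diabetes, sbp, dbp, treated_hypertension):
    """NCEP ATP III-style score (0-5); a score >= 3 defines a case."""
    score = 0
    score += waist_cm > (102 if sex == "male" else 88)        # abdominal obesity
    score += tg_mmol >= 1.69                                  # raised triglycerides
    score += hdl_mmol < (1.04 if sex == "male" else 1.29)     # low HDL cholesterol
    score += glucose_mmol >= 6.1 or diabetes                  # raised fasting glucose
    score += sbp >= 130 or dbp >= 85 or treated_hypertension  # raised blood pressure
    return int(score)

# Example: a man with large waist, raised TG, low HDL and raised SBP scores 4 -> case
is_case = metabolic_syndrome_score("male", 105, 1.8, 1.0, 5.9,
                                   False, 135, 80, False) >= 3   # True
```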
For the present study, for each of the sample subclasses (N = 24, based on metabolic syndrome score, sex and offspring/control group), multivariate characterization was used for the design-of-experiment-based sample selection, as described before (Surowiec et al. 2017a, b). In short, for each metabolic syndrome score value, two-component PCA models on the available clinical data were constructed for the four main classes of samples; a full two-factor, two-level factorial design with one centre point was fitted to the PCA score plots, aiming for the selection of five samples for each subclass (offspring and controls, stratified by sex), and hence 20 samples for each metabolic syndrome score value (ranging from 0 to 5); a minimal sketch of this selection strategy is given below. It was, however, not possible to fully follow this strategy for all groups, either because of the low number of samples in specific groups (for example, for a metabolic syndrome score of 5) or because the samples were not evenly distributed on the PCA score plots. In the latter case, to obtain a balanced and representative selection, additional samples were included in the study. In the end, 115 representative samples were chosen, with 17, 25, 23, 23, 22 and 5 samples for metabolic syndrome scores of 0, 1, 2, 3, 4 and 5, respectively. Where possible, we did not include samples from participants who were on antihypertensive or lipid-lowering medication (a total of 31 users of antihypertensive and 18 users of lipid-lowering medication remained in the analyses).
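The sketch below illustrates, under stated assumptions, how such a PCA-guided, factorial-design-like selection could look for one subclass: the corner and centre coordinates (±1 SD in score space) and the scaling are assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def select_representatives(clinical_data):
    """Pick 5 samples spanning the 2-component PCA score space:
    four 'corners' of a two-factor, two-level design plus a centre point."""
    scores = PCA(n_components=2).fit_transform(clinical_data)
    scores = scores / scores.std(axis=0)                 # unit-scale each component
    targets = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]])
    available = set(range(len(scores)))
    chosen = []
    for t in targets:
        order = np.argsort(np.linalg.norm(scores - t, axis=1))
        idx = next(i for i in order if i in available)   # nearest unused sample
        chosen.append(int(idx))
        available.discard(idx)
    return chosen
```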
Anthropometrics and clinical information
Waist circumference was measured halfway between the lower costal margin and the iliac crest with participants in a standing position. Systolic and diastolic blood pressure were each measured twice at rest; the average was used for the analyses. Diagnosis of hypertension was based on systolic and diastolic blood pressure as well as on the use of antihypertensive medication. Use of antihypertensive medication was retrieved from the pharmacist of the participant. Diagnosis of diabetes mellitus was based on a fasting blood glucose concentration > 6.9 mmol/L, a diagnosis by a medical specialist (by questionnaire from the general practitioner) or the use of glucose-lowering medication (by questionnaire from the pharmacist).
All routine clinical serum measurements were performed using fully automated equipment and standardized protocols. Glucose, HbA1c, high-sensitivity C-reactive protein, HDL-C and triglyceride concentrations were measured with the Hitachi Modular P800 (Roche, Almere, the Netherlands). Alanine transaminase (ALT), aspartate aminotransferase (AST) and gamma-glutamyltransferase (GGT) concentrations were measured on an Abbott ci8200 (Roche, Almere, the Netherlands). ALT and AST were measured using the NADH (with P-5′-P) methodology, and GGT using the l-gamma-glutamyl-3-carboxy-4-nitroanilide substrate methodology. Coefficients of variation of all measures were below 5%.
Information on alcohol intake and current smoking status was retrieved by questionnaire. Information on total caloric intake was retrieved via a validated food frequency questionnaire (Verkleij-Hagoort et al. 2007).
The anthropometric and clinical characteristics of the participants are provided for cases of the metabolic syndrome (metabolic syndrome score ≥ 3) and controls separately as means (with standard deviation) or numbers (percentage) (Table 1).
Metabolomics analyses
Fasting EDTA plasma samples from the participants, which had not been thawed before, were thawed on ice; 630 µL of extraction mixture (H2O:methanol, 1:9, v/v) was added to 70 µL of plasma. Extraction of the metabolites from the sample was carried out using an MM301 vibration mill (Retsch GmbH & Co. KG, Haan, Germany) at a frequency of 30 Hz for 2 min. Samples were stored on ice for 2 h to allow protein precipitation, after which they were centrifuged at 18,620 RCF for 10 min at 4 °C. An aliquot (200 µL) of the resulting supernatant was transferred to a liquid chromatography vial and evaporated to dryness at room temperature in a miVac QUATTRO concentrator (Genevac LTD, Ipswich, UK). Subsequently, samples were dissolved in 20 µL of a methanol:water (1:1) mixture and analysed with a liquid chromatography-mass spectrometry (LC-MS) system as described in detail in the Supplementary Methods. Gas chromatography-mass spectrometry (GC-MS) analysis was performed after metabolite derivatization as described before (Jiye et al. 2005); a detailed description of the methodology is given in the Supplementary Methods.
Lipidomics analysis
Fasting plasma samples from the participants, which had not been thawed before, were thawed on ice, and 110 µL of extraction mixture (chloroform:methanol, 2:1, v/v) was added to 20 µL of plasma sample. Extraction was carried out using an MM301 vibration mill (Retsch GmbH & Co. KG, Haan, Germany) at a frequency of 30 Hz for 2 min. Subsequently, samples were stored at ambient temperature for 60 min before being centrifuged at 18,620 RCF for 3 min at 4 °C. A 50 µL aliquot of the resulting lower phase was transferred to an LC vial, 70 µL of a chloroform:methanol (2:1, v/v) mixture was added, and samples were briefly shaken before being analysed by LC-MS as described in detail in the Supplementary Methods.
Compound identification
Targeted feature extraction of the acquired LC-MS data was performed using the Profinder™ software package, version B.06.00 (Agilent Technologies Inc., Santa Clara, CA, USA) and in-house retention time- and mass spectrum-based libraries consisting of 713 metabolites and 487 lipid species. These libraries contained compounds from chemical classes such as acylcarnitines, amino acids, carbohydrates, fatty acids, lysophosphatidylcholines, organic acids, phosphatidylcholines, sphingomyelins, triglycerides and others. Detection of the compounds was based on the following parameters: allowed ion species in positive ionization mode: (+H, +Na, +K, +NH4); in negative ionization mode: (-H, +HCOO); peak spacing tolerance: 0.0025-7 ppm; isotope model: common organic molecules; charge state: 1; mass tolerance: 10 ppm; retention time tolerance: 0.1 min. After extraction of the peaks, each compound was manually checked for mass and retention time agreement with the appropriate standards from the library; peaks with poor characteristics (e.g., overloaded, sample noise, non-Gaussian) were excluded from the analysis. Identification of compounds was confirmed by comparison of MS/MS spectra with MS/MS spectra of the relevant compounds from the library.
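Profinder itself is proprietary, but the core matching criteria reduce to simple tolerance checks; a hypothetical Python sketch (the feature and library records are illustrative dictionaries, not Profinder's data structures):

```python
def matches_library_entry(feature, entry, ppm_tol=10.0, rt_tol=0.1):
    """Accept a detected feature if its mass is within 10 ppm and its
    retention time within 0.1 min of a library compound, as above."""
    ppm_error = abs(feature["mz"] - entry["mz"]) / entry["mz"] * 1e6
    rt_error = abs(feature["rt_min"] - entry["rt_min"])
    return ppm_error <= ppm_tol and rt_error <= rt_tol

def annotate(features, library):
    """Return (feature, compound name) pairs for all tolerance matches."""
    return [(f, e["name"]) for f in features for e in library
            if matches_library_entry(f, e)]
```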
Non-processed files from GC-MS were exported in NetCDF format to a MATLAB-based in-house script in which all data pre-treatment procedures, such as baseline correction, chromatogram alignment and peak deconvolution, were performed. Metabolite identification was implemented within the script and was based on retention index (RI) values and MS spectra from the in-house mass spectra library established by the Swedish Metabolomics Centre (Umeå, Sweden), consisting of 585 compounds [Level 1 identification according to the Metabolomics Standards Initiative (Salek et al. 2013)].
Data processing and multivariate and univariate data analysis
For the LC-MS analysis of the metabolites, a combined dataset was used, with compounds included that could be detected in either negative or positive ion mode. In case a single metabolite was detected in both the negative and positive ion mode, the signal with the highest intensity was used for the statistical analyses. When metabolites were detected with both the LC-MS and GC-MS methodology, the signal detected with the GC-MS method was used for the statistical analyses. The LC-MS metabolite and lipid signals were normalized to the total peak area prior to further statistical analyses. GC-MS data were normalized to internal standards as described before (Redestig et al. 2009). Metabolite and lipid data were imported separately into the SIMCA software (version 14.0, Sartorius Stedim Biotech Umetrics AB, Umeå, Sweden) for multivariate analyses. All data were mean centred and scaled to unit variance. Principal component analysis (PCA) was used to obtain an overview of the variation in the data and to check for trends and potential outliers for cases of the metabolic syndrome and controls. Seven-fold cross-validation was used for calculating the models. The orthogonal projections to latent structures (OPLS) method was used to correlate metabolite and lipid profiles with the continuous metabolic syndrome score (Y variable) of the study participants; 1 + 0 or 1 + 1 component models were used to avoid possible over-fitting (Trygg et al. 2002). The significance of a metabolite for classification in the OPLS models was assessed by calculating the 95% confidence interval for the loadings using the jackknife method, which estimates the precision of a statistic by repeatedly recalculating it on subsets of the data (Efron et al. 1983). OPLS models were also created for the separate subcomponents of the metabolic syndrome as the Y variable (waist circumference, plasma fasting triglyceride, HDL-C and glucose concentrations, and systolic and diastolic blood pressure). Validity and degree of overfitting of the OPLS models were checked by conducting CV-ANOVA (ANalysis Of VAriance testing of Cross-Validated predictive residuals) and permutation analyses.
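The OPLS models were fitted in SIMCA, which is proprietary; the following Python sketch uses ordinary PLS (scikit-learn's PLSRegression), a close relative of OPLS, to illustrate a cross-validated Q2 and the p(corr) loading vector (X_raw and y are assumed inputs; a 1-component PLS plays the role of the 1 + 0 OPLS model):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.preprocessing import StandardScaler

# X_raw: (n_samples, n_metabolites) matrix; y: metabolic syndrome score (0-5)
X = StandardScaler().fit_transform(X_raw)        # mean-centre, unit variance (as in SIMCA)
pls = PLSRegression(n_components=1)

# Seven-fold cross-validated Q2
y_cv = cross_val_predict(pls, X, y, cv=KFold(n_splits=7, shuffle=True, random_state=0))
q2 = 1 - np.sum((y - y_cv.ravel()) ** 2) / np.sum((y - y.mean()) ** 2)

pls.fit(X, y)
t = pls.x_scores_.ravel()                        # predictive score vector
p_corr = np.array([np.corrcoef(t, X[:, k])[0, 1] # p(corr): correlation of each
                   for k in range(X.shape[1])])  # variable with the score vector
```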
In addition, we conducted univariate analyses on the metabolites and lipids using linear regression in the R statistical environment. Metabolites and lipids were log-transformed and subsequently standardized to approximate a standard normal distribution (mean = 0, standard deviation = 1). Hence, results from the univariate analyses can be interpreted as the difference in standard deviations in metabolite/lipid level between cases of the metabolic syndrome and controls. Analyses were repeated with the metabolic syndrome score as a continuous determinant. Outlying metabolite and lipid levels (> 4 standard deviations from the mean) were excluded from the analyses. As we studied a high number of exposure-metabolite/lipid associations, there is a risk of false-positive findings. To correct for multiple testing, we first calculated the effective number of independent metabolites and lipids based on the methodology described by Li et al. (2005), and subsequently adjusted our threshold for statistical significance accordingly. Univariate analyses were visualized using the ggplot2 package in the R statistical environment (Wickham 2009).
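The authors ran these models in R; for consistency with the other examples, here is a Python/statsmodels sketch of the same pipeline (column names are illustrative; sex is assumed to be coded numerically; note that 0.05/67 ≈ 7.46 × 10^-4 reproduces the metabolite threshold reported in the Results):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_scan(df, metabolite_cols, n_effective_tests):
    """Per metabolite: log-transform, z-standardize, drop >4 SD outliers,
    then regress on case status adjusted for age and sex."""
    alpha = 0.05 / n_effective_tests       # e.g. 0.05 / 67 for the metabolites
    rows = []
    for m in metabolite_cols:
        z = np.log(df[m])
        z = (z - z.mean()) / z.std()       # standard-normal scale
        keep = z.abs() <= 4                # exclude outlying levels
        X = sm.add_constant(df.loc[keep, ["mets_case", "age", "sex"]])
        fit = sm.OLS(z[keep], X).fit()
        rows.append({"metabolite": m,
                     "beta_sd": fit.params["mets_case"],   # difference in SD units
                     "p": fit.pvalues["mets_case"],
                     "significant": fit.pvalues["mets_case"] < alpha})
    return pd.DataFrame(rows)
```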
Characteristics of the study population
According to the clinical classification of the metabolic syndrome used, samples were available for 50 cases (metabolic syndrome score ≥ 3) and 65 controls (Table 1). Both groups were similar with respect to age (64.4 [SD 6.1] versus 62.0 [SD 6.5] years, respectively) and percentage of men (52.0% versus 52.3%, respectively). Cases were less often current smokers and had a lower alcohol intake and lower total caloric intake compared to controls. In line with the clinical classification of the metabolic syndrome, components of the metabolic syndrome were generally higher in cases than in controls, with the exception of HDL cholesterol. Furthermore, cases had a higher mean HbA1c, a higher median hsCRP and moderately higher median liver enzyme concentrations.
Multivariate metabolite profiling
PCA on 115 samples and 103 metabolites resulted in a model with 7 components, which explained 53% of the total variation (R2X(cum) = 0.53), and identified 3 samples outside the Hotelling's T2 range that remained in the subsequent analyses (Fig. 1a). In the PCA plot, a trend was visible, with samples from cases with the metabolic syndrome (metabolic syndrome score ≥ 3) located more frequently in the lower half of the plot. A total of 11.7% of the total variation in the data was explained by the predictive component of the OPLS model with the metabolic syndrome score (ranging from 0 to 5) as the Y variable (1 + 0 model, Q2 = 0.39; R2X(cum) = 0.12; CV-ANOVA p-value: 1.2 × 10^-12). Additional diagnostics by permutation analyses for the Y variable (Supplementary Fig. 1) showed Y-axis intercepts below 0.3 for R2Y and below 0.05 for Q2, indicating the OPLS model was not influenced by overfitting. The metabolic profile connected to the metabolic syndrome score (p(corr) vector from the OPLS model) is presented in Fig. 1b, and significant results are summarized in Table 2 (all results are summarized in Supplementary Table 1). In the metabolomics dataset, a total of 35 metabolites were significantly correlated with the metabolic syndrome score, based on jackknife confidence intervals; multiple amino acids, organic acids and acylcarnitines were positively correlated with the metabolic syndrome score, and several compounds (e.g., some fatty acids and sterols) were negatively correlated with the metabolic syndrome score. When metabolic syndrome components were used as Y variables in the OPLS model, multiple metabolites were found to be significantly correlated with these components (Supplementary Table 1). Metabolic profiles for the different components (p(corr) vectors) were correlated to the metabolite profile connected to the metabolic syndrome score; the strongest correlations with the metabolic syndrome score were found for systolic blood pressure (R2 = 0.96) and HDL-C (R2 = −0.94), and the lowest for glucose (R2 = 0.55).
Multivariate lipid profiling
PCA on 115 samples and 223 lipids gave a model of 12 components explaining 83% of the total variation in the data (R2X(cum) = 0.83), and identified one sample outside the Hotelling's T2 range that remained in subsequent analyses (Fig. 2a). A trend was visible in the PCA plot, with samples from individuals with the metabolic syndrome (metabolic syndrome score ≥ 3) being more frequently located in the upper half (positive t[2] values) of the plot. A total of 19.4% of the total variation in the data was explained by the predictive component of the OPLS model with the metabolic syndrome score as Y (1 + 1 model, Q2 = 0.47, R2X(cum) = 0.453, CV-ANOVA p-value: 1.4 × 10^-14). Additional diagnostics by the permutation analyses for the Y variable (Supplementary Fig. 2) showed Y-axis intercepts below 0.3 for R2Y and below 0.05 for Q2, indicating the OPLS model was not influenced by overfitting. The lipidomic profile (p(corr) vector from the OPLS model) connected to the metabolic syndrome score is presented in Fig. 2b, and significant lipids are presented in Table 3 (the complete list is summarized in Supplementary Table 2). A total of 110 lipids were significantly correlated with the metabolic syndrome score. Of these, 100 lipids were positively correlated (mainly triglycerides, with 76 compounds, as well as phosphatidylcholines, phosphatidylinositols and ceramides) with the metabolic syndrome score and 10 lipids were negatively correlated with the metabolic syndrome score.
Univariate metabolite and lipid analyses
In our data, we had 67 independent metabolites and 73 independent lipids. Hence, we used a p-value threshold of 7.46 × 10⁻⁴ for the metabolite analyses and 6.85 × 10⁻⁴ for the lipid analyses. In the univariate regression analyses on standardized metabolite (Fig. 3a) and lipid (Fig. 3b) levels, where we compared metabolic syndrome cases and controls, we identified multiple metabolites and lipids that had higher (13 metabolites; 8 lipids) or lower (1 metabolite; 10 lipids) levels in cases of the metabolic syndrome. Examples of metabolites associated with the metabolic syndrome were valeryl carnitine, pyruvic acid, lactic acid and alanine; examples of lipids associated with the metabolic syndrome were diglyceride(34:1) and diglyceride(36:2). Similar results were observed with the metabolic syndrome score as a continuous determinant in the analyses. Summary statistics are presented in the Supplementary.

Fig. 1 Metabolite profiling. a PCA score plot on metabolic data with samples colored according to their respective groups: blue dots signify individuals with the metabolic syndrome (metabolic syndrome score 3-5) and green dots individuals without the metabolic syndrome (metabolic syndrome score 0-2); x axis: t[1], first score (R²X = 0.146); y axis: t[2], second score (R²X = 0.110). b Metabolite predictive loading vector (p(corr)) from the OPLS model with the metabolic syndrome score as the Y variable; metabolites are colored according to their chemical classes; p(corr) values indicate whether a compound is positively (positive p(corr) value) or negatively (negative p(corr) value) correlated with the metabolic syndrome score after correction for multiple testing.
Discussion
In the present study, which is the first of its kind, we observed multiple metabolites and lipids from different chemical classes to be connected to the metabolic syndrome that have not often been described before in epidemiological studies, including acylcarnitines and keto acids. Collectively, our findings highlight the role of multiple different biochemical pathways connected to the metabolic syndrome that could be used in the design of novel interventions for the treatment and prevention of cardiometabolic disease. Our study replicates multiple observations from other studies on different cardiometabolic disease outcomes, including multiple amino acids, 1,5-anhydroglucitol and uric acid. Most notably, previous studies found associations between high concentrations of branched-chain amino acids and the risk of type 2 diabetes mellitus (Wurtz et al. 2012a, 2013), possibly through a disturbance of fatty acid metabolism in mitochondria (Newgard 2012). In our study population, valine (one of the branched-chain amino acids) was positively associated with the metabolic syndrome in both the OPLS model and the univariate analysis. Interestingly, high levels of valine have been associated with increased oxidative stress and inflammation through the activation of mTORC1 (Zhenyukh et al. 2017). In addition, higher levels of many other amino acids were associated with the metabolic syndrome score as well, a result which is in line with the results from another cross-sectional study (Ntzouvani et al. 2017). Alanine, which had the strongest association in our univariate analyses, has previously been documented to directly affect beta-cell function and insulin secretion (Newsholme et al. 2005). Furthermore, in line with previous research on type 2 diabetes mellitus (Mook-Kanamori et al. 2014) and cardiovascular mortality in normoglycaemic individuals (Ouchi et al. 2017), metabolic syndrome cases had lower levels of the carbohydrate 1,5-anhydroglucitol compared to controls. Most likely, this observation is explained by the diabetes subcomponent, as reflected by a significant correlation with the glucose component in the OPLS model. Finally, in line with our research findings, though these associations were previously found not to be causal (Palmer et al. 2013; Sluijs et al. 2015), high levels of uric acid in serum have been associated with an increased risk of type 2 diabetes (Dehghan et al. 2008) as well as with hypertension, the metabolic syndrome and cardiovascular disease (Soltani et al. 2013). As the effects of these metabolites went in the expected direction, our platform and study design seem to be suitable for the identification of novel biochemical pathways.

Fig. 2 Lipid profiling. a PCA score plot on lipidomics data with samples colored according to their respective groups: blue dots signify individuals with the metabolic syndrome (metabolic syndrome score 3-5) and green dots individuals without the metabolic syndrome (metabolic syndrome score 0-2); x axis: t[1], first score (R²X = 0.304); y axis: t[2], second score (R²X = 0.155). b Lipidomics predictive loading values (p(corr)) from the OPLS model with the metabolic syndrome score as the Y variable; lipids are colored according to their chemical classes; p(corr) values indicate whether a compound is positively (positive p(corr) value) or negatively (negative p(corr) value) correlated with the metabolic syndrome score.
A class of compounds which brought particularly novel insights into the biochemical pathways related to the metabolic syndrome in our data are the acylcarnitines, which were positively correlated with the metabolic syndrome. Specifically, we found valerylcarnitine to show the strongest connection with the metabolic syndrome in the OPLS and univariate analyses. As not much has been described about this particular metabolite, future studies focussing on valerylcarnitine in particular are required to elucidate its role in cardiometabolic disease. Acylcarnitines are required for the transport of fatty acids across the mitochondrial membrane for β-oxidation (Mihalik et al. 2010). Higher concentrations of acylcarnitines in blood have been associated with obesity, insulin resistance and type 2 diabetes mellitus in humans (Floegel et al. 2014; Gall et al. 2010; Mihalik et al. 2010; Pallares-Mendez et al. 2016). In one previous publication, higher acylcarnitine concentrations were shown to cause imbalances between insulin synthesis and insulin secretion, which consequently caused beta-cell dysfunction in human and mouse pancreatic tissue samples (Aichler et al. 2017). In line with this, we found multiple acylcarnitines to be positively correlated with fasting glucose levels in the OPLS model.
Another main chemical class with strong positive correlations with the metabolic syndrome, as shown by the OPLS and univariate analyses, were keto acids, for example alpha-ketoglutaric acid, lactic acid and pyruvic acid. Alpha-ketoglutaric acid, although not described in recent clinical studies, was found to affect TOR signalling, which affects insulin signalling, and has been associated with longevity in nematode worms (Chin et al. 2014). Extremely high levels of lactic acid are generally known to be lethal, but our results show that subclinical elevation of lactic acid levels could play a role in cardiometabolic disease as well. Importantly, high lactate levels, as a product of the reduction of pyruvic acid, are indicative of increased anaerobic metabolism and increased oxidative stress.
Lysophosphatidylcholine(18:2) levels were lower in the univariate regression analysis in individuals with the metabolic syndrome as compared to controls, but we found no consistent relation between lysophosphatidylcholines as a chemical class and the metabolic syndrome score in the OPLS model. Lysophosphatidylcholines play a pivotal role in oxidized LDL cholesterol and have been found to directly affect the progression of atherosclerosis through multiple biological pathways, including inflammatory processes (Aiyar et al. 2007; Lusis 2000). Previously, lower concentrations of lysophosphatidylcholines have been observed in obesity and type 2 diabetes mellitus (Barber et al. 2012), and they might directly affect the insulin resistance state (Motley et al. 2002). Furthermore, an inverse relationship between serum lysophosphatidylcholines and vascular damage and heart rate was observed in patients with atherosclerosis (Paapstel et al. 2018).
In the lipid profiling analysis, we identified predominantly triglycerides to be correlated with the metabolic syndrome. Although not unexpected given the triglyceride subcomponent, we observed the odd-chain triacylglycerol (53:2/3), which originates mainly from food, to be negatively correlated with the metabolic syndrome as well as with several of its subcomponents in the OPLS analysis. In addition, multiple ceramides were positively correlated with the metabolic syndrome and a number of its components. A positive relationship between ceramide levels and insulin resistance has been found previously (Blachnio-Zabielska et al. 2012). Interestingly, in the literature, ceramides are described as important mediators of oxidative stress in apoptosis signalling (Andrieu-Abadie et al. 2001). Furthermore, a number of ether-bound phosphatidylcholines were negatively correlated with the metabolic syndrome. Interestingly, this biochemical class has been associated with decreased oxidative stress levels and a slowed ageing process (Hung et al. 2001).
The main strength of the present study was to investigate the connection between metabolite and lipid profiles and the metabolic syndrome using platforms enabling the detection of many compounds not frequently investigated in unstandardized human population studies. However, the use of an unstandardized human population likely resulted in increased variability in the data as a consequence of factors like lifestyle and disease heterogeneity. This increased variability is likely the cause of the limited separation in the PCA score plots. Nevertheless, using this approach, we were able to provide (novel) insights that could be used in future population and experimental studies. The validity of the results was confirmed by checking the significance of the obtained OPLS models, by applying univariate analyses, and by putting the results into biological context based on the available scientific literature. Still, since metabolomics/lipidomics is an exploratory approach, usually with a limited number of samples included in the hypothesis-generating study (as was also the case for the present study), the described findings require verification in independent cohorts. Given the observational nature of the data, no causality of our research findings can be inferred: an altered metabolite/lipid concentration could be either a cause or a consequence of the metabolic syndrome condition.
In summary, within this first combined metabolomics and lipidomics study on the metabolic syndrome, we identified several metabolites and lipids to be connected to the metabolic syndrome, which could be of interest for further research in the field of cardiometabolic disease biology. Interestingly, several of the different biochemical pathways that we identified in relation to the metabolic syndrome have previously been found to be connected to the regulation of oxidative stress. However, future studies are required to further elucidate our research findings.
"year": 2019,
"sha1": "0ac9cd0b60cbf39bb6da5ff4f99f415162ea1fd7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11306-019-1484-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "37946edbcae5a0ad25e9baafbc9f7e72dcb06206",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Bringing data minimization to digital wallets at scale with general-purpose zero-knowledge proofs
Today, digital identity management for individuals is either inconvenient and error-prone or creates undesirable lock-in effects and violates privacy and security expectations. These shortcomings inhibit the digital transformation in general and seem particularly concerning in the context of novel applications such as access control for decentralized autonomous organizations and identification in the Metaverse. Decentralized or self-sovereign identity (SSI) aims to offer a solution to this dilemma by empowering individuals to manage their digital identity through machine-verifiable attestations stored in a "digital wallet" application on their edge devices. However, when presented to a relying party, these attestations typically reveal more attributes than required and allow tracking end users' activities. Several academic works and practical solutions exist to reduce or avoid such excessive information disclosure, from simple selective disclosure to data-minimizing anonymous credentials based on zero-knowledge proofs (ZKPs). We first demonstrate that the SSI solutions that are currently built with anonymous credentials still lack essential features such as scalable revocation, certificate chaining, and integration with secure elements. We then argue that general-purpose ZKPs in the form of zk-SNARKs can appropriately address these pressing challenges. We describe our implementation and conduct performance tests on different edge devices to illustrate that the performance of zk-SNARK-based anonymous credentials is already practical. We also discuss further advantages that general-purpose ZKPs can easily provide for digital wallets, for instance, to create "designated verifier presentations" that facilitate new design options for digital identity infrastructures that previously were not accessible because of the threat of man-in-the-middle attacks.
• General-purpose ZKPs (zk-SNARKs) can provide scalable and flexible privacy for SSI
• zk-SNARKs can provide private revocation, credential chaining, and hardware binding
• Performance is already at the edge of being practical for rollout on mobile phones
• Designated verifier ZKPs address pressing issues regarding MITM attacks
Introduction
"The Internet was built without a way to know who and what you are connecting to" (Cameron, 2005, p. 1). Owing to this absence of a standardized identity layer, there is currently a "patchwork" of solutions for the digital identification and authentication of individuals. The arguably most prominent approach involves creating an account -including a user name and a password -for each single service that one uses on the internet (Preukschat and Reed, 2021). This renders identity attributes largely non-transferable, i.e., they can be used only in interactions with the service provider, website, or company that created or requested them during registration and usage . Besides the tedious task of repeatedly filling registration forms, many individuals also struggle with managing their dozens or hundreds of user names and passwords in a secure way (Bonneau et al., 2012). Even if users can cope with the secure management of their accounts, many processes that require verifiable data from, e.g., a government-issued ID card, involve additional time-consuming and costly verification-related processes, such as video calls Lacity and Carmel, 2022;Preukschat and Reed, 2021;Strüker et al., 2021). Identity providers in federated identity management offer end users a more convenient alternative with their single sign-on services. They store users' identity data and forward it to relying parties such as service providers on the users' request (Maler and Reed, 2008). Yet, the cross-domain aggregation of identity information and use-related metadata raises significant economic, privacy, and security risks Bernabe et al., 2020). Already in 1984, the cryptographer David Chaum hypothesized that electronic identification may lead to "sophisticated marketing techniques that rely on profiles of individuals [...] being used to manipulate public opinion and elections" (Chaum, 1985(Chaum, , p. 1044. Collecting and trading identity data has indeed become a real business with very real threats, as the well-known Cambridge Analytica scandal that abused identity information collected by Facebook indicates (Doward and Gibbs, 2017;Kitchgaessner, 2017). Users' strive for a more convenient digital identity management hence exposes them to mechanisms such as microtargeting that political parties or companies could use to control the content individuals consume (Zuiderveen Borgesius et al., 2018). Moreover, many physical identity documents are still not available in machinereadable form in this paradigm, although companies such as Apple and Google have started to explore these opportunities (Shakir, 2022).
This situation is particularly daunting when considering the growing pace of the digital transformation. Privacy-oriented, non-proprietary digital identity management also seems particularly important in the context of blockchains, for instance, when implementing auditable access control to off-chain resources (Maesa et al., 2019; Wu et al., 2023) or managing permissions in the context of decentralized autonomous organizations (El Faqir et al., 2020; Liao et al., 2022) via a smart contract, owing to the inherent transparency of blockchains and the related intensified issues with data protection (Rieger et al., 2019; Schellinger et al., 2021). These observations also extend to the Metaverse, a combination of the internet and augmented reality via software agents (Dwivedi et al., 2022) that also builds on blockchains for managing asset ownership and exchange. Identification is a crucial component of the Metaverse yet carries substantial privacy risks (Leenes, 2007). Challenges with sensitive personal information are further aggravated through the measurement of additional data about individuals' devices, actions, or their environment (Wang et al., 2022; Falchuk et al., 2018; Nair et al., 2022), making data minimization in the context of digital identity particularly important.
Some privacy-focused alternatives for electronic identification and authentication, such as the German eID, implement security and data minimization through trusted hardware in the form of smart-cards, which can be used conveniently in combination with a smartphone-based near-field communication (NFC) reader (Poller et al., 2012). Yet, such smart-cards are arguably not suitable for digital-native workflows, as they need to be carried separately and do not smoothly extend to the variety of attestations with heterogeneous security and privacy levels reflecting the different organizations and processes that users interact with in their daily lives (Schellinger et al., 2022). Trusted hardware is also generally limited in its functionality; e.g., embedded secure elements on mobile phones typically only support storing cryptographic keys and creating common digital signatures with them. Using secure elements for selectively disclosing verifiable identity information hence requires the cooperation of the corresponding manufacturer and, in the case of the mobile phone, operating system providers. Empirically, this cooperation seems challenging to implement even for a single manufacturer, as the German government's efforts to implement the mobile-native Smart-eID suggest (Wilhelm, 2022). On the other hand, the more flexible trusted execution environments in mobile phones are known to be vulnerable to side-channel attacks (Jauernig et al., 2020) and, thus, cannot provide the security and authenticity levels that some highly regulated workflows require (Schellinger et al., 2022).
Consequently, recent approaches in digital identity management aim to give users both convenience and control by empowering them to self-manage their identity in "digital wallet" applications on their mobile devices (Čučko and Turkanović, 2021). This paradigm transfers the certificate-based approach that builds the foundation of trusted interactions on the internet as of today, namely X.509 (secure sockets layer (SSL)) certificates that identify servers in https-based communication, to the digital identification and authentication of end users. It has become popular under the terms decentralized digital identity, self-managed digital identity, or self-sovereign identity (SSI) (Čučko and Turkanović, 2021; Kubach et al., 2020; Weigl et al., 2022), with an often strong connection to blockchain communities (Kuperberg, 2019; Sedlmeir et al., 2022). Identity attributes are confirmed through digital certificates that carry electronic signatures by corresponding issuers. Upon request, individuals can choose to reveal selected identity attributes to relying parties in a cryptographically verifiable way. Governments increasingly support this decentralized or SSI paradigm, with large-scale pilots such as Canada's verifiable organizations network (VON) and the European IDunion consortium exploring the approach. Moreover, the European Union (EU) is currently shaping a revision of its former electronic identification and trust services (eIDAS) regulation that mandates every member state to provide its citizens with a digital wallet app that can store and present such digital attestations (European Council, 2022; Ehrlich et al., 2021). Researchers have also begun working on integrating the novel technology stacks in SSI with standards like OAuth for implementing access control on the IoT (Fotiou et al., 2022) or with established components of corporate identity and access management (Yildiz et al., 2021; Kuperberg and Klemens, 2022).
In its simplest form, the certificate-based approach sends the attestation directly to the relying party, which then checks the validity of the corresponding digital signature. Yet, providing the entire certificate to the relying party reveals a significant amount of information that is not strictly necessary for the verification in the given context (Hardman, 2020; Brands, 2000; Lioy et al., 2006). This includes identity attributes that the relying party does not require for their workflow. Moreover, for instance, the value of the digital signature on a digital certificate represents a unique identifier that can be used to track individuals whenever they use the certificate, a "super cookie" (Evernym, 2020). Anonymous credentials resolve this problem by enabling users to present their certificates in a data-minimal way (Chaum, 1985; Brands, 2000; Camenisch and Lysyanskaya, 2001; Backes et al., 2005). Users reveal only selected information derived from the credential that is indispensable for the respective purpose, while maintaining cryptographic verifiability. This can be achieved with zero-knowledge proofs (ZKPs), which allow a prover to convince a verifier of a mathematical statement without conveying any information apart from the statement's validity (Goldwasser et al., 1989). ZKPs can be used, for instance, to confirm that a presented attribute is part of a credential issued by a certain institution without having to reveal the value of the digital signature (Hardman, 2020). Several projects are already deploying digital wallets that handle multiple such anonymous credentials and generate the corresponding ZKPs (Linux Foundation, 2022; Sartor et al., 2022). However, these implementations of anonymous credentials rely on academic works that involved significant effort in hand-crafting the cryptographic primitives that they use (Camenisch and Lysyanskaya, 2001; Sudarsono et al., 2011). They were major breakthroughs at the time of their publication and allow for fast proof generation, transmission, and verification. Yet, being highly tailored to a specific set of functionalities also implies that highly specialized cryptographers need to develop novel ideas to incorporate additional features, and corresponding anonymous credential implementations are difficult to upgrade and audit (Young, 2022) and to integrate with existing digital identity components. For instance, major SSI projects have proclaimed the need for privacy-preserving credential chains (see more details in Section 4.8) for years (Hardman and Harchandani, 2022), and theoretical solutions were indeed found years after the initial conceptualization of anonymous credentials (Belenkiy et al., 2009; Camenisch et al., 2017). Yet, they are still not implemented in larger-scale projects. Moreover, recent discussions pointed out the shortcomings of purpose-specific ZKPs for providing scalable revocation (Young, 2022) and hardware binding (Feulner et al., 2022).
In this paper, we describe the above-mentioned challenges in the context of practically deployed anonymous credential systems in detail and argue that using general-purpose ZKPs such as zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), which matured in cryptocurrency privacy (Ben-Sasson et al., 2014) and scaling (Thibault et al., 2022) projects, allows addressing these and other pressing requirements for the broad adoption of anonymous credentials. We thus bridge the related research streams from cryptography on (zk-SNARK-based) anonymous credentials (e.g., Delignat-Lavaud et al., 2016; Schanzenbach et al., 2019; Rosenberg et al., 2022; Maram et al., 2021) with the SSI domain (e.g., Čučko and Turkanović, 2021; Soltani et al., 2021; Sedlmeir et al., 2022; Young, 2022), based on our own implementation of zk-SNARK-based anonymous credentials and experiences from our active involvement in several SSI projects in industry and the public sector. We structure our work as follows. First, we give a basic understanding of SSI, the concept of digital attestations stored in digital wallet applications, related terminology, and the technical foundations of zk-SNARKs in Section 2. Next, we comprehensively survey related work on anonymous credentials in practice and cryptography research and outline which requirements for broad adoption these implementations address and which features are still missing in Section 3. After that, we describe how to implement these lacking features with zk-SNARKs (Section 4). We evaluate the corresponding performance to demonstrate that this approach can already be considered practical for use in mobile wallet apps as of today in Section 5. We also discuss further limitations of SSI that general-purpose ZKPs and in particular designated verifier zk-SNARKs can address in Section 6. We conclude by outlining limitations and pointing towards avenues for future research in Section 7.
Self-sovereign identity
The paradigm of decentralized or user-centric identity, also called SSI, empowers individuals to self-manage their digital attestations locally on their edge devices (e.g., their mobile phone) in a digital wallet app (Weigl et al., 2022; Kubach et al., 2020). These digital attestations are created by "issuers": entities such as public sector institutions, enterprises, individuals, or machines associated with a cryptographic key-pair that have a certain reputation within specific domains (Soltani et al., 2021). For instance, a meaningful institution to issue digital national IDs would be the public institution that currently manufactures physical ID cards, or a Country Signing Certification Authority as governed by the International Civil Aviation Organization (ICAO). Analogous to watermarks and seals on physical documents, digital attestations carry cryptographic proofs of integrity, usually a digital signature created by the issuer. This digital signature makes the digital certificate tamper-evident and machine-verifiable, which is why the corresponding attestations are often termed "verifiable credentials". At the same time, Verifiable Credentials refers to a nascent standard established by the world wide web consortium (W3C) to harmonize such digital attestations (Sporny et al., 2019). Because there are also other established standards for digital certificates, we will neutrally use the term "credential" in the following to cover all different flavors of digitally signed and, therefore, machine-verifiable attestations. Note that this does not follow the terminology proposed by Bosworth et al. (2005), according to which a credential is "used to prove an identity to a system", i.e., a physical token or a password would also qualify as a credential. It will also become apparent that for data minimization using general-purpose ZKPs, the design of the credential itself has only moderate relevance (see, e.g., also Delignat-Lavaud et al., 2016), which is why by writing "credential", we also aim to include anonymous credentials. Individuals, called "holders", can use their credentials to conveniently disclose identity attributes to "verifiers", i.e., relying parties. Typically, a verifier first sends a "proof request" to the holder, asking for the disclosure of certain attributes stated in the holder's credentials and listing a set of additional requirements. Such requirements may include a list of issuers that the verifier trusts (Preukschat and Reed, 2021). When receiving the proof request, the holder's digital wallet app fully automatically searches for stored credentials that include the requested attributes and that satisfy the requirements specified in the proof request (Sartor et al., 2022) and, upon the holder's consent, creates a cryptographic proof about the correctness of these attributes according to the respective issuer and sends the attributes and the proof to the verifier (Feulner et al., 2022). The verifier can then check the proof and, therefore, the authenticity of the attributes and subsequently use them for its service.
The process that starts with a verifier's proof request and ends with the verifier checking the proof that the holder created is called a "verifiable presentation" (VP). The simplest and least privacy-oriented VP involves sending one or multiple credentials that include the requested identity attributes directly to the verifier, such that the verifier can extract the needed attributes and verify the issuer's digital signature and the fulfillment of the other requirements from the proof request directly. As this means that the verifier could forward the credentials to other parties and impersonate the holder, this type of VP must include, for each credential, a digital signature with the holder's "binding key" on a challenge communicated by the verifier in the proof request. The public binding key is a part of the credential, whereas the holder never shares their private binding key. Holder binding is also essential when the corresponding binding key-pair needs special protection, for instance, because regulators demand particularly high security both regarding attacks and the potential voluntary sharing of credentials. In these cases, smart cards or smartphone-embedded secure elements can be used to generate the key-pair and protect the corresponding private key.
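A minimal formalization of this challenge-response mechanism (our notation; actual message formats vary between implementations): the verifier includes a fresh random nonce n in the proof request; the holder returns σ = Sign(sk_bind, n) along with the credential; and the verifier checks Verify(pk_bind, n, σ) = 1, where pk_bind is the public binding key contained in the signed credential and sk_bind never leaves the holder's device or secure element. The freshness of n prevents replaying a presentation that was recorded earlier.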
While the challenge-response mechanism can fix the security issue associated with sharing the full credential, it does not prevent excessive information disclosure. A credential may include considerably more identity attributes than the verifier requested. Moreover, the value of the digital signature on a credential and the public binding key are essentially globally unique identifiers. Hence, more privacy-focused approaches to VPs do not communicate the full credential to the verifier but instead only reveal selected attributes and provide cryptographic evidence derived from the credential that these attributes are indeed attested by the specified issuers (Hardman, 2020; Sedlmeir et al., 2022). This derived proof is usually based on a ZKP. Beyond facilitating "selective disclosure" and hiding signatures and binding keys, it is sometimes also desirable to reveal to the verifier only the results of a computation that uses identity attributes as parameters. Well-known examples include set (non-)membership proofs, for instance, to demonstrate that a credential is not included in a revocation or sanctions list, and range proofs, for instance, to show that a credential is not expired or that a date of birth as recorded in a credential is more than 18 years in the past (Hardman, 2020). In general, it is possible to directly reveal results of complex computations that use attribute values as input parameters, called "predicates" or "predicate proofs".
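To illustrate how such a range proof can be expressed, the following sketch in Circom, the circuit DSL used later in this paper (see Section 2.2), checks in isolation that a private date of birth lies at least 18 years before a public reference date. It is a hypothetical, minimal example: the template name and the encoding of dates as Unix timestamps are our assumptions, all credential-related checks (issuer signature, holder binding, revocation) are omitted, and LessThan is the comparator template from circomlib.

pragma circom 2.0.0;

include "circomlib/circuits/comparators.circom"; // path depends on the local setup

// Hypothetical, stand-alone range proof: show that a private date of
// birth lies at least 18 years before a public reference date,
// without revealing the date of birth itself.
template AgeOver18() {
    signal input dateOfBirth;  // private: taken from the credential
    signal input currentDate;  // public: set by the verifier

    // 18 years in seconds (ignoring leap days for simplicity)
    var EIGHTEEN_YEARS = 18 * 365 * 24 * 60 * 60;

    // Both values are Unix timestamps and fit comfortably in 64 bits
    component lt = LessThan(64);
    lt.in[0] <== dateOfBirth + EIGHTEEN_YEARS;
    lt.in[1] <== currentDate;

    // Reject any witness in which the holder is younger than 18
    lt.out === 1;
}

component main {public [currentDate]} = AgeOver18();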
Zero-knowledge proofs and zk-SNARKs
In classical mathematical proofs, the prover starts from a given set of assumptions that the verifier agrees on and uses deductive reasoning, fully transparent to the verifier and beyond any doubt, to prove new statements. Applied to running an algorithm and proving its correct execution, this means that besides the implementation of the algorithm itself, all input values, intermediate variable values, and output values (the final result) must be disclosed to the verifier. Babai (1985) and Goldwasser et al. (1989) independently introduced an alternative notion of probabilistic "interactive proofs", where the verifier does not passively inspect the full transcript of the algorithm's execution but instead only sees selected steps in the computation. To compensate for this lack of transparency, the verifier may challenge the consistency of what the prover claims to be the final result through additional, often randomly determined, questions about the transcript. While the prover can be lucky and answer any of these questions correctly even if he or she tries to cheat, repeating the protocol sufficiently often makes the probability of a verifier accepting a false proof arbitrarily small. The main motivation for considering this type of proof was arguably to allow resource-constrained devices to outsource large computations while still being able to verify the result. In other words, "a single reliable PC can monitor the operation of a herd of supercomputers working with possibly extremely powerful but unreliable software and untrusted hardware" (Babai et al., 1991, p. 1).
Given this new notion of proofs, where the disclosure of all initial and intermediate values is no longer necessary, it is natural to ask how much information such a proof still contains about the computational trace. Formally, ZKPs are defined as "those proofs that convey no additional knowledge other than the correctness of the proposition in question" (Goldwasser et al., 1989). A simple example of a ZKP is proving knowledge of a private key associated with a public key with Schnorr's protocol (sketched below), without giving away information that would make it easier for the verifier to find the private key (Schnorr, 1991). A generalization of the mathematical ideas underlying Schnorr's protocol, i.e., performing mathematical tricks in the context of the discrete log problem that is assumed to be hard, also builds the basis for anonymous credentials based on Camenisch-Lysyanskaya (CL) signatures (Camenisch and Lysyanskaya, 2001; Maurer, 2009). These hand-crafted, special-purpose ZKPs are highly efficient in the sense that the proofs are small (several hundred bytes) and fast to prove and verify (tens of milliseconds on a commodity laptop). In contrast, creating a ZKP for running an arbitrary algorithm was long prohibitively computationally expensive and only became practical after two decades of substantial improvements in construction and silicon in the form of zk-SNARKs (e.g., Ben-Sasson et al., 2013; Gennaro et al., 2013; Groth, 2016; Parno et al., 2016). The popularity and use of zk-SNARKs increased rapidly in research and applications after their first use in blockchains, first for proving relatively simple statements to provide private payments in cryptocurrencies such as Zcash (Ben-Sasson et al., 2014), and later also for proving increasingly complex statements to increase blockchains' transaction throughput in zk-rollups such as Polygon Hermez (Šimunić et al., 2021; Thibault et al., 2022).
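For concreteness, a sketch of Schnorr's protocol in standard notation: the prover knows the private key x for a public key y = g^x in a group where the discrete logarithm problem is assumed to be hard. The prover picks a fresh random r and sends the commitment t = g^r; the verifier replies with a random challenge c; the prover sends the response s = r + c·x; and the verifier accepts if g^s = t·y^c. Correctness follows from g^s = g^(r + c·x) = g^r·(g^x)^c = t·y^c, and since the transcript (t, c, s) can be simulated without knowledge of x, the proof conveys no knowledge beyond the statement's validity. Deriving c from a hash of t (the Fiat-Shamir heuristic) makes the protocol non-interactive.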
General-purpose ZKPs and in particular zk-SNARKs hence introduce a novel paradigm of certificate verification: Instead of sending credentials to the verifier, who then runs the cryptographic verification algorithm, and instead of constructing specific mathematical tricks using the information in the credential as in CL signatures, the holder runs the verification algorithm privately on their device using the locally stored credential(s) and only sends the verification result and selected attributes or predicates that need to be disclosed to the verifier (Delignat-Lavaud et al., 2016). To allow the verifier to trust this verification result, the holder creates a ZKP that attests the correct execution of the verification program and sends it to the verifier, yet without sharing any details about the inputs and intermediary results of running the credential verification algorithm on a credential. In other words, a ZKP can convince the verifier that the verification algorithm that the holder ran terminated with the specified result (e.g., "the holder knows a credential that is indeed issued by an institution with public key X and the corresponding private binding key. The credential is neither expired nor revoked, and the first name according to the credential is Alice"). Figure 1 illustrates the overall flow of issuing a credential and performing a VP with a general-purpose ZKP, which hardly differs from a VP as implemented in, for instance, Hyperledger Aries (see, e.g., Schlatt et al. (2021)).
A challenge related to the use of zk-SNARKs, besides the computationally intensive proof generation, is that the first practical variants all required an initial, computationally and memory-intensive preprocessing step called the trusted setup. In this trusted setup, a common reference string (CRS) that is required for generating and verifying ZKPs is computed. At least one party that participates in the creation of the CRS needs to be honest to guarantee that provers cannot create fake proofs. Privacy guarantees are unconditional (Fuchsbauer, 2018). Consequently, for blockchain applications, the CRS is typically generated in a multi-party computation (MPC) that can involve hundreds of participants (Bowe et al., 2017). While there are different ways to translate an algorithm into a format that allows generating a CRS and, ultimately, proving and verification programs, with compilers being available also for C code, to date the more efficient way seems to be through domain-specific languages (DSLs) such as Circom (Iden3, 2022a). A very frequently used representation is the rank one constraint system (R1CS), in which every step in executing the algorithm only involves a single quadratic expression. A convenient proxy for the complexity of the statement with regard to proving effort is the number of "R1CS constraints", which roughly corresponds to the number of quadratic terms (multiplications) in a quadratic arithmetic program (QAP) that represents the algorithm, at least in the "Groth16" proof system that underlies our prototype (Groth, 2016). One disadvantage of the Groth16 proof system is that the trusted setup is circuit-specific, i.e., every update of the algorithm for which correct execution needs to be proved requires a new trusted setup. More recent flavors of zk-SNARKs such as Plonk (Gabizon et al., 2019) do not require a circuit-specific trusted setup but instead only need to create one "universal" CRS that can be used for all algorithms up to a certain complexity threshold. The higher flexibility of universal zk-SNARKs typically comes at some trade-offs, such as larger proof sizes and higher verification complexity. Another alternative are transparent zk-SNARKs, such as zero-knowledge scalable transparent arguments of knowledge (zk-STARKs), which remove the necessity for a trusted setup completely. They tend to involve even larger proof sizes of tens of kilobytes (Ben-Sasson et al., 2018). Yet, for a bilateral interaction between prover and verifier, this proof size can still be considered moderate.

Figure 2: Workflow for generating the zk-SNARK proving and verification programs that need to be integrated in the wallet app and verification backend, respectively. Note that the circuit-specific trusted setup is not necessary for universal zk-SNARKs, and transparent zk-SNARKs do not require any trusted setup at all.

Figure 2 features an overview of the steps required to set up an anonymous credential system using the Groth16 proof system. We implemented all ZKPs in this work as zk-SNARKs using the DSL Circom (Iden3, 2022a). Circom utilizes a finite field, with the number of field elements being a 254-bit prime number. For simplicity, we will write about 254-bit integers in the following, although in fact not all 254-bit integers are included in this finite field. Non-integer values are not natively supported, which is why additional logic for handling Strings or Floats needs to be provided (see Section 4.2). A Circom-specific variable type to explicitly define constraints is called "Signal". Owing to the underlying QAP, the expression assigned to a Signal may contain at most one multiplication of signals (i.e., it must be quadratic), and a Signal can only be assigned once and is immutable. For this reason, some calculations have to be split into multiple sub-calculations (see, for instance, Figure B.7a for the simple case of a cubic expression, and the sketch below). There is also no native support for branching operations, such as if, break, or continue statements. On the other hand, Circom provides libraries that implement comparators, conversions between numbers and their binary representation, hash functions such as Poseidon (see Figure B.7b) and the secure hash algorithm 256 (SHA256), and signature mechanisms such as the Edwards-curve digital signature algorithm (EdDSA) (Iden3, 2021). Projects that build on Circom use these building blocks to implement more advanced or complex primitives, e.g., Merkle proofs (KimiWu123, 2019) and elliptic curve digital signature algorithm (ECDSA) signatures (Personae Labs, 2022; 0xPARC, 2022a).
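As a minimal illustration of this restriction (analogous to the example referenced in Figure B.7a; the template and signal names here are ours), computing a cubic expression requires an auxiliary Signal so that each constraint contains at most one multiplication of signals:

pragma circom 2.0.0;

// y = x^3 must be split into two quadratic constraints, since each
// R1CS constraint allows at most one multiplication of signals.
template Cube() {
    signal input x;
    signal output y;

    signal xSquared;
    xSquared <== x * x;  // first quadratic constraint
    y <== xSquared * x;  // second quadratic constraint
}

component main = Cube();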
There are different libraries that allow creating a witness (an assignment of input signals derived from the parameters on which the corresponding algorithm runs), generating a CRS, deriving proving and verification keys from a circuit implemented in Circom, and ultimately generating and verifying zk-SNARKs. Arguably the simplest to use is SnarkJS (Iden3, 2022b), which provides witness generation (using WebAssembly (WASM)), proof generation, and proof verification in JavaScript (Node.js) for the Groth16 and Plonk proof systems. For productive use, there are also highly optimized C++ and Intel x86 assembly-based witness generation (Iden3, 2022a) and a Groth16 prover (Hermez Network, 2021) available. Moreover, witness generation, Groth16 proof generation, and proof verification can be conducted in Rust via the ark-circom crate (Konstantopoulos, 2022), based on the WASM files, proving keys, and verification keys generated with Circom and SnarkJS.
Related Work
X.509 certificates. The X.509 standard is broadly adopted on the internet as a fundamental component of the https protocol (Cooper et al., 2008). These credentials are mostly organized in credential chains. For instance, a certificate authority (CA) creates a credential that binds a company to a domain and key-pair, and the company can use the binding key of this attestation to issue a credential to one of its web servers. Digital signatures are permanent; yet, sometimes, issuers realize that the reason for issuance ceases to exist prior to expiration. As the deletion of information can hardly be enforced, X.509 certificates hence carry a unique serial number that can be used in a VP for checking their revocation state. The holder can interact with the issuer or the responsible CA according to the online certificate status protocol (OCSP) to get a short-lived signed confirmation about the non-revoked state that they can attach to the certificate when presenting it (Delignat-Lavaud et al., 2016). Alternatively, certificate revocation lists (CRLs) can be used, where the verifier downloads a list of all revoked certificates from the issuer or the CA that the issuer defined as responsible for maintaining the CRL (Cooper et al., 2008).
As the holder transmits X.509 credentials entirely to the verifier, the corresponding VP is far from data-minimizing. A simple modification that does not include the attributes directly in the credential but instead only each attribute's salted hash (De Salve et al., 2022) or a single Merkle root (e.g., Liu et al., 2018; Mukta et al., 2020) facilitates selective disclosure (see the formalization below). In a VP, the holder would transfer the full credential plus selected attributes, including the corresponding salt values or Merkle proofs (Merkle, 1987). Yet, sophisticated correlation attempts based on the digital signature, binding key, and serial number, which are globally unique identifiers with high probability (Brands, 2000), are not prevented. Consequently, while X.509 certificates have been remarkably successful for the identification of servers on the web, they seem less suitable for the privacy-oriented digital identity management of natural persons without further modifications.
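A minimal formalization of the salted-hash variant (our notation): instead of the attributes a₁, …, aₙ themselves, the signed credential contains only the commitments cᵢ = H(aᵢ ∥ sᵢ), where H is a cryptographic hash function and the sᵢ are random salts. To disclose attribute aⱼ, the holder reveals the pair (aⱼ, sⱼ), and the verifier recomputes cⱼ and compares it against the signed credential; the salts prevent dictionary attacks on low-entropy attributes such as a date of birth.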
Hyperledger AnonCreds. There are plenty of SSI implementations, and arguably none of them currently occupies a dominant role in terms of its distribution. However, the Hyperledger AnonCreds (Curran, 2022) that build the foundation of Hyperledger Aries (Linux Foundation, 2022) and related implementations, such as the Hyperledger Aries cloud agent in Python (ACA-Py) and several compatible digital wallets like the esatus, Lissi, and Trinsic wallets (Sartor et al., 2022), are arguably among the implementations with the most sophisticated privacy functionalities. The technical backbone of Hyperledger AnonCreds goes back to work by Camenisch and Lysyanskaya (2001), building on purpose-specific ZKPs. These not only enable selective disclosure but also hide the credential's signature and the public binding key, allow proving non-revocation with a zero-knowledge set-membership proof that does not expose the credential's serial number, and support range proofs. Besides Hyperledger AnonCreds, this work builds the foundation of IBM's Identity Mixer (Bichsel et al., 2009) underlying the "I reveal my attributes" (IRMA) project (Alpár et al., 2017) and the European ARIES research project (Bernabe et al., 2020). Similar features, yet with even higher performance and shorter proof sizes, are provided by the approach of Sudarsono et al. (2011) through moving from a Rivest-Shamir-Adleman (RSA)-based approach to pairing-based Boneh-Boyen-Shacham (BBS+) signatures (Boneh and Boyen, 2004).
Yet, a core feature that is not yet accessible in these projects is privately linking a loosely bound credential, such as a COVID-19 vaccination certificate, and a strongly bound government-issued digital ID that relate to the same person. This would be an example of a cross-credential predicate, e.g., a comparison of the date of birth and name attributes on both certificates, without disclosing these attributes to the relying party. A strongly bound national ID also often requires holder binding with a key-pair stored in trusted hardware. Including such features is generally considered desirable, particularly in regulated environments, and could be extended to combinations of even more attestations in a privacy-preserving way, such as in the verification of event tickets (Feulner et al., 2022). Secure elements generally do not support CL or BBS+ signatures but only common signature schemes such as ECDSA. Private credential chaining, another example of a cross-credential predicate, is also not supported in Hyperledger AnonCreds but considered essential for large-scale adoption (Hardman and Harchandani, 2022). Moreover, the number of credentials that revocation registries for ZKPs of (non-)set-membership, implemented via RSA accumulators in Hyperledger AnonCreds, can manage is far too small to guarantee sufficient herd privacy: to allow the holder to prove that their credential is not revoked, he or she needs to store data in their wallet app that grows linearly with the number of credentials represented by the accumulator. This corresponds to a 2,000-bit integer per credential represented through the accumulator. For a revocation registry that represents 10,000 credentials, the corresponding file hence already amounts to 2.6 MB (Curran, 2021). Consequently, it is no surprise that the maximum size of the revocation registry is set to 2¹⁵ = 32,768 in ACA-Py.
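A quick plausibility check of these figures (assuming, as the 2.6 MB value suggests, 2,048-bit rather than 2,000-bit integers in practice): 10,000 × 2,048 bit ≈ 2.56 MB, and 4,000,000 × 2,048 bit ≈ 1.02 GB, i.e., a registry with a capacity of several million credentials indeed implies around one GB of witness data on the holder's device.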
In practice, facing limited capacities of revocation registries with RSA accumulators, revocation registries are split. However, this compromises privacy significantly: Consider an identification process that involves information from three different credentials, e.g., a national ID card, a credit card, and a COVID-19 vaccination credential. Let N be the size of the population that owns one of each of these credentials and r be the maximum number of attestations that can be represented by a revocation registry. Then there will be N/r revocation registries for each of the attestation types, and k = (N/r)³ combinations of revocation registry IDs that an individual can refer to when presenting the three attestations together. For instance, if N = 50 million and r = 100,000, then k = 125 million, i.e., the combination of revocation registry IDs is essentially a unique identifier for every verification in which one uses these three attestations because k > N. When r = 1 million, we have k = 125,000, so there will be around N/k = 400 people with the same combination of revocation registries, i.e., herd privacy guarantees are still relatively bad, particularly if additional credentials and, thus, further corresponding revocation registries are around. For r = 10 million and, therefore, already close to N, we get good herd privacy, since k = 125 and there will be 400,000 individuals with the same combination. Consequently, revocation registries should represent several millions of credentials rather than tens of thousands. To achieve this, a digital wallet would need to store around one GB of revocation-related data per credential, which can be considered impractical.
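In compact form, assuming individuals are assigned to revocation registries uniformly at random (our simplification of the argument above): k = (N/r)³, and the expected anonymity set size is approximately N/k = r³/N², so the anonymity set grows with the third power of the registry capacity r and shrinks with the square of the population size N.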
It also seems that the approach with CL and BBS+ signatures is difficult to adapt to post-quantum security: While Dutto et al. (2022) were able to reproduce the key properties of the above-mentioned anonymous credential schemes, such as selective disclosure, private holder binding, and private revocation, with plausibly post-quantum secure cryptography (lattices), cryptographic key and proof sizes are on the order of several hundreds of MB. These figures are arguably not yet suitable for large-scale roll-out, particularly for wallets on mobile phones.
Academic proposals. The difficulty of extending specialized approaches such as CL- and BBS+-based anonymous credentials to the needs of large-scale adoption motivated research to construct anonymous credentials using generic ZKPs like zk-SNARKs. The following works have focused on zk-SNARK-based anonymous credential systems: Delignat-Lavaud et al. (2016) present a highly practical approach towards data-minimal VPs, as they turn widespread X.509 certificates into anonymous credentials using zk-SNARKs. A "prover can verify that he holds a valid certificate chain and a signature computed with the associated private key, without actually sending them to the verifier" (Delignat-Lavaud et al., 2016, p. 1). Their approach allowed the authors to implement selective disclosure, private credential chaining, and private holder binding on top of the existing X.509 certificate infrastructure. Their work can be considered a milestone for bridging anonymous credentials and legacy certificate systems for servers, yet it does not bridge the gap to the multi-credential systems and general predicates envisioned in SSI or to desirable properties like accumulator-based revocation registries, as it builds on OCSP. Moreover, at the time of the publication, proof generation took around 4.5 minutes on a quad-core desktop PC for a VP involving a single X.509 certificate with an RSA signature, and around 9 minutes for a chain of three certificates. Unfortunately, the code is also not open source to the best of our knowledge. Schanzenbach et al. (2019) propose ZKlaims, a zk-SNARK-based approach to anonymous credentials, for application specifically in the context of blockchain technology, where an efficient smart contract verifier needs to be implemented. The main focus is on the selective disclosure of attributes and the implementation of range proofs using zk-SNARKs. The work does not consider several key components of SSI, such as holder binding (particularly to secure elements), revocation, and more advanced predicates and combinations of anonymous credentials for cross-credential predicates, for which private credential chaining is a special case. Li and Xue (2020) also discuss how privacy-oriented identity verification could look on blockchains using zk-SNARKs. Similar to Schanzenbach et al. (2019)'s work, a smart contract on a blockchain verifies ZKPs about credentials. However, while the corresponding architecture with a smart contract verifier is discussed, no implementation details are given, and there is also no connection to discussions in SSI and the typical statements, such as holder binding and non-revocation, for which a VP needs to provide evidence. Yang and Li (2020) describe a similar implementation based on zk-SNARKs that attests identity claims, with an approach that stores blinded commitments to attributes in a smart contract managed by one or several issuers. This approach allows implementing revocation and makes attribute usage unlinkable. Yet, holder binding, credential chaining, and the scalability of revocation are not discussed. Buchner et al. (2020) propose a more advanced approach for zk-SNARK-based anonymous credentials. They mention key privacy features for these credentials, such as selective disclosure and private holder binding, and also consider privacy-oriented revocation, yet in a setting where the revocation status of a credential is directly checked in an interaction between the verifier and the issuer, which poses higher availability requirements on the issuer.
While not giving an implementation and leaving several design-related questions open, the authors emphasize the potential advantages of transparent zk-SNARKs compared to, for instance, Groth16 zk-SNARKs. Rathee et al. (2022) also focus on blockchain-based applications of zk-SNARK-based anonymous credentials. They use zk-SNARK batching to reduce the costs of the on-chain verification of multiple VPs. The implementation includes selective disclosure and private revocation. Hardware binding, credential chaining, as well as more general predicates and the corresponding tooling are not discussed. Rosenberg et al. (2022)'s approach, besides the contribution by Delignat-Lavaud et al. (2016), arguably comes closest to ours. They demonstrate the practicality of a zk-SNARK-based approach to anonymous credentials that includes many desirable features, with sub-second proving time on a laptop. Notably, they also provide formal security proofs for their construction. However, their focus is more on establishing the cryptographic foundations than on describing how to design the required features and corresponding trade-offs in an SSI-based approach. Rosenberg et al. (2022) implement private, non-interactive proofs of non-revocation using Merkle forests and allow predicate proofs that involve multiple credentials, therefore also facilitating private credential chaining. Many of the constructions are only briefly described in the paper, such as wallet-side scalability considerations of revocation, hardware binding, and a predicate proof of geo-location. We implement this proof as the polygon inbound proof in Section 6.4.
Finally, we note that there are also hybrid approaches. For instance, Campanelli et al. (2019) propose LegoSNARK to implement more common and frequently needed privacy features, such as selective disclosure and private holder binding, with highly performant BBS+ signatures. If more advanced predicates are required occasionally, the LegoSNARK approach allows revealing also blinded commitments to selected attributes in the VP. These blinded commitments can then be used as inputs for a more flexible but also more computationally expensive general-purpose ZKP, such as a Groth16 zk-SNARK. For instance, there is an implementation of this approach by Harchandani (2022). Similarly, Chase et al. (2016) propose a hybrid approach, combining CL signatures and algebraic circuits for more specialized proofs of knowledge, for instance, of an RSA or ECDSA signature, using 2-party computation with garbled circuits, which they argue has better performance than the zk-SNARK-based approach that Delignat-Lavaud et al. (2016) follow. Yet, they do not suggest a concrete implementation of anonymous credentials or evaluate the performance of their proposal empirically. Similarly to Camenisch and Lysyanskaya (2001) and Feulner et al. (2022), these authors also emphasize the significance of the non-transferability of credentials, supporting our hypothesis that holder binding to a mobile phone's embedded secure element is desirable.
Besides the paradigm of anonymous credentials being issued by trusted institutions in a regulated environment and a focus on data-minimizing VPs towards (smart contract) verifiers, there is also a literature stream on how to make the issuance of anonymous credentials more decentralized and accountable with the help of blockchains (Garman et al., 2013). Recently, Maram et al. (2021) proposed a decentralized identity system that can be used in blockchain applications. First, "off-chain" attestations and corresponding revocations can be verified by a blockchain through oracles that prove the validity of some transport layer security (TLS)-based communication, with an SSL certificate as cryptographic trust anchor. Sensitive oracle information is not disclosed; instead, one can use either direct remote attestation via trusted execution environments (TEEs) (Zhang et al., 2016) or ZKPs (Zhang et al., 2020). Similarly to Delignat-Lavaud et al. (2016)'s approach, digitally signed information that exists on the web as of today could therefore be used in a way that retrofits the core capabilities of anonymous credentials. Moreover, Maram et al. (2021) use multi-party computation (MPC) to identify sanctioned identities based on "off-chain" identifiers even under small modifications (e.g., a spelling error in the name) and implement sophisticated key recovery mechanisms.
Our survey of related implementations and academic research demonstrates that there is not only an established academic discussion but also a high practical need for a flexible, extendable solution for data-minimizing VPs in SSI. Compared to these valuable contributions, we add details on implementation, such as discussions of encoding (Section 4.2), the required capacity of revocation registries and how to improve revocation registries' practical capacity with Merkle tree-based accumulators (Section 4.6), and a more detailed discussion of hardware binding and all-or-nothing non-transferability. We also provide more detailed performance analyses for mobile phones. Furthermore, we contribute novel insights into how zk-SNARKs can promote privacy-oriented digital identity infrastructures, for instance, by facilitating designated verifier ZKPs to avoid the need for restrictive certification of relying parties (Section 6). Our work hence focuses on merging a zk-SNARK-based approach to anonymous credentials with existing deployments in pilots, focusing holistically on the process of the VP and opening the discussion of how to bring anonymous credentials to digital wallets at scale to researchers beyond pure cryptography.
Design and Implementation
In the following, we describe Heimdall, our implementation of anonymous credentials with zk-SNARKs. The code, available at https://github.com/applied-crypto/heimdall, implements a command-line demo for different VPs that include revocation, credential chains, and advanced predicates such as a location proof.
Credential structure
Our credential design aims to be as simple and as general as possible. We opted for a binary Merkle tree-based approach for several reasons: First, it makes the construction that we present intuitive and demonstrates that the use of general-purpose zk-SNARKs reduces complexity while still significantly improving the variety of features and several performance aspects compared to, for instance, anonymous credentials based on CL and BBS+ signatures. Second, using a Merkle tree-based approach also allows us to use hashing algorithms that cannot operate on arbitrary-length inputs (in particular, zk-SNARK-friendly ones like Poseidon (Grassi et al., 2020)). Third, this approach can provide selective disclosure capabilities via Merkle proofs even to holders with highly resource-constrained devices on which zk-SNARKs are not yet practical to create. Fourth, we can use the Merkle tree to structure the credential's metadata and attributes according to their meaning without the need to implement a complex (de-)serialization inside a ZKP, such as the ASN.1 parsing in Delignat-Lavaud et al. (2016). In particular, we outsource the mapping of the different attributes to their semantic meaning to a "schema", with a hash of the schema or its uniform resource locator (URL) referenced in the credential's metadata. We proceed similarly for other descriptions, such as a revocation registry. This approach is similar to the one implemented in ACA-Py, where the credential schema and revocation registry are stored on a Hyperledger Indy blockchain (Linux Foundation, 2022).
As in many other approaches to credentials, integrity is ensured through the issuer's digital signature on a compressed serialization of the credential, i.e., on its Merkle root. The left half of the associated Merkle tree corresponds to metadata that will typically be verified in every VP, including a unique credential identifier (serial number) for revocation, a reference to a schema, a reference to a revocation registry, the public key for holder binding, and an expiration date. The right half of the Merkle tree represents the content, including all the attributes, e.g., for a national ID. Note that while our implementation is symmetric, including eight slots for meta-attributes and eight slots for attributes, asymmetric approaches are also conceivable, for instance, if the number of attributes gets much larger.
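To make this structure concrete, the following stand-alone Circom sketch computes the root of such a binary credential tree with Poseidon. It is an illustrative simplification rather than the exact circuit from our repository; the include path assumes a local copy of circomlib, and the signal names are illustrative.

    pragma circom 2.0.0;

    include "circomlib/circuits/poseidon.circom";

    // Computes the Merkle root of a credential with 2^depth leaves
    // (left subtree: metadata, right subtree: attributes).
    template CredentialRoot(depth) {
        var n = 2 ** depth;
        signal input leaves[n];
        signal output root;

        // nodes[0 .. n-1] hold the leaves; the remaining entries hold
        // the inner nodes, computed bottom-up.
        signal nodes[2 * n - 1];
        component h[n - 1];
        for (var i = 0; i < n; i++) {
            nodes[i] <== leaves[i];
        }
        for (var i = 0; i < n - 1; i++) {
            h[i] = Poseidon(2);
            h[i].inputs[0] <== nodes[2 * i];
            h[i].inputs[1] <== nodes[2 * i + 1];
            nodes[n + i] <== h[i].out;
        }
        root <== nodes[2 * n - 2];
    }

    component main = CredentialRoot(4); // 16 leaves: 8 metadata, 8 attributes

The issuer then signs the resulting root, as discussed in the integrity verification below.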
We used the ZKP-friendly Poseidon hashing algorithm and the ZKP-friendly EdDSA-Poseidon digital signature mechanism, which is the reason why the public key for holder binding consists of two 253-bit values, representing a point with two coordinates on the Baby Jubjub elliptic curve. We chose these two cryptographic primitives despite their relative novelty because they are being used in several blockchain projects, such as the privacy-oriented Dusk Network (Maharramov, 2019). If someone found a security issue with these primitives, it would likely first be exploited against these projects, which secure digital assets worth tens of millions of USD, and would therefore arguably be quickly discovered and addressed through a patch.
Encoding the attributes
An essential part of defining a credential design that related work does not describe explicitly is to specify an encoding for the different data types of meta-attributes and attributes. For dates and timestamps, for instance, the representation through a large integer that any general-purpose ZKP system needs behind the curtains is relatively straightforward, for instance, via a UNIX timestamp. For short Strings with less than 34 characters, decoding every character a_i, for instance, to an 8-bit integer with ASCII as encoding and then "concatenating" these via Σ_i a_i · 2^(8i), where 0 ≤ i ≤ 31, also gives rise to a 1:1 mapping, such that these Strings can be represented directly as leaves of the credential's Merkle tree. This approach is valuable when certain predicates need to be computed from the corresponding (meta-)attribute in the VP. However, for Strings without initial length restrictions, we need to compress the potentially large raw attribute into a single integer with at most 254 bits. When doing so in a collision-resistant way, the VP can selectively disclose the leaf for the corresponding (meta-)data, proving with the ZKP that it is indeed part of the credential, and then attach the raw attribute to the VP. The verifier can then apply the same encoding to the raw attribute to see whether the result is the corresponding leaf selected in the VP. A straightforward way to obtain such a compressing and collision-resistant encoding is to use a cryptographic hash function that takes inputs of arbitrary length and, if necessary, to strip a few bits off the result to arrive at 254 bits. An alternative is reserving several leaves for a potentially large String and splitting it into smaller Strings that can be encoded without compression via 254-bit integers as indicated above, such that the attribute can then still be used for computing meaningful predicates. It is important to note that the encoding logic in the verification process happens completely outside the zk-SNARK, so there is no reason why highly performant and established but less zk-SNARK-friendly cryptographic hash functions such as SHA256 should not be used for this step. In our implementation, we nevertheless used a UTF-8 encoding for Strings and a subsequent sequential compression with the Poseidon hash. Finally, we used the encoding True → 1 and False → 0 for Boolean values, and multiplied Floats with a factor of 10^7 before rounding.
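As an in-circuit counterpart of this byte packing, which is useful when predicates over individual characters are needed, a short String can also be recomposed from its byte values inside a circuit. The following sketch is illustrative (the template and signal names are ours) and uses circomlib's Num2Bits for the 8-bit range checks.

    pragma circom 2.0.0;

    include "circomlib/circuits/bitify.circom";

    // Packs n byte values a_0, ..., a_{n-1} into the single field
    // element sum_i a_i * 2^(8i), mirroring the encoding above.
    template PackBytes(n) {
        signal input bytes[n];
        signal output packed;

        component range[n];
        var acc = 0;
        for (var i = 0; i < n; i++) {
            range[i] = Num2Bits(8); // enforces 0 <= bytes[i] < 256
            range[i].in <== bytes[i];
            acc += bytes[i] * (2 ** (8 * i));
        }
        packed <== acc;
    }

    component main = PackBytes(31);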
Defining schemas and revocation registries
As indicated in Section 4.1, a schema describes the meaning of attributes and their position in the credential. In this sense, it serves as a kind of credential template. For instance, a schema for a national ID would describe the positions and data types of the attributes, as well as the corresponding encoding function for each attribute if this is not defined uniformly for each data type on a higher level. The content of the schema is relevant for the verifier, as the schema determines the positions of the attributes or the description of the predicates for which the verifier asks in the proof request. The schema can be represented directly by a data format like JSON that allows for a collision-resistant serialization and, therefore, encoding; by a hash-pointed link to a website that specifies the schema; or via a transaction hash if the schema is stored on a blockchain.
Similarly, a credential can include the hash of a description of a revocation registry, a corresponding hash-pointed link, or a blockchain transaction hash. The description of a revocation registry may include information about the issuer, policies underlying revocation, potential governance rules, update intervals, etc.
Integrity verification
The first (and probably most obvious) statement that the verifier expects to hold in a VP is that the credential has been digitally signed by a specific issuer and not been tampered with since. In general, creating a digital signature on a credential involves two key ingredients: (1) a deterministic and collision-resistant serialization and compression of the credential into a short number (e.g., a 254-bit number) and (2) using a private key to create the digital signature, often via some kind of exponentiation. One of the most common digital signature mechanisms as of today is arguably ECDSA. Yet, this signature mechanism leads to an exceptionally high computational effort when generating the corresponding ZKP in a VP (see Section 5). Consequently, we mainly used EdDSA-Poseidon, for which the signature corresponding to an issuer's public key consists of an elliptic curve point R = (R_x, R_y) and a 254-bit number S.
Owing to the fact that most of the metadata in a credential will be relevant in many VPs because of the verification of schema and revocation registry as well as the proofs of holder binding, non-revocation, and non-expiration, we compute the full corresponding Merkle subtree for the metadata to validate its integrity in the ZKP: This corresponds to 2 · 2^2 − 1 = 7 pairwise hashes when we have 2^3 leaves representing metadata, as opposed to 3 · n pairwise hashes when verifying n individual Merkle proofs. By contrast, on the attribute side, we expect that we often only need to reveal a small number of attributes from the credential, so, for optimizing performance, we only verify individual Merkle proofs for attributes that need to be revealed.
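For illustration, the signature check on the credential root can be expressed with circomlib's EdDSAPoseidonVerifier as follows. In the full circuit, the root is wired from the Merkle (sub)tree computation sketched in Section 4.1; the signal names and include path are illustrative.

    pragma circom 2.0.0;

    include "circomlib/circuits/eddsaposeidon.circom";

    // Verifies the issuer's EdDSA-Poseidon signature (R8, S) on the
    // credential's Merkle root under the issuer's public key (Ax, Ay).
    template CredentialIntegrity() {
        signal input root;     // Merkle root of the credential
        signal input issuerAx; // issuer public key, typically public
        signal input issuerAy;
        signal input R8x;      // signature values, private
        signal input R8y;
        signal input S;

        component sig = EdDSAPoseidonVerifier();
        sig.enabled <== 1;
        sig.Ax <== issuerAx;
        sig.Ay <== issuerAy;
        sig.R8x <== R8x;
        sig.R8y <== R8y;
        sig.S <== S;
        sig.M <== root;
    }

    component main {public [issuerAx, issuerAy]} = CredentialIntegrity();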
Expiration
Proving non-expiration without revealing the correlatable issuance or expiration date is arguably one of the easiest parts of implementing zk-SNARK-based credentials. As the Circomlib library (Iden3, 2021) already provides implementations of range proofs, the verifier can supply a timestamp of their choice as an integer (UNIX timestamp) in the proof request. The holder then uses the timestamp as specified by the verifier as private input and also displays it as public output of the zero-knowledge circuit. Within the circuit, the holder proves with the LargerThan component that the private expiration date as retrieved from the credential is indeed larger than the timestamp specified by the verifier.
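A minimal stand-alone version of this check can be built from circomlib's comparators; we use the GreaterThan comparator here in place of the LargerThan component referenced above, and the signal names are illustrative.

    pragma circom 2.0.0;

    include "circomlib/circuits/comparators.circom";

    // Proves that the private expiration date lies strictly after the
    // verifier-chosen timestamp without revealing the date itself.
    template NonExpired(bits) {
        signal input expiration;  // private, from a metadata leaf
        signal input currentTime; // supplied by the verifier
        signal output out;

        component gt = GreaterThan(bits);
        gt.in[0] <== expiration;
        gt.in[1] <== currentTime;
        gt.out === 1;

        // echo the timestamp as public output so the verifier can check it
        out <== currentTime;
    }

    component main = NonExpired(64);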
Revocation
We designed and implemented revocation as follows: Each credential has a unique revocation ID; for instance, the i-th credential created by the issuer could receive the revocation ID i − 1. We use a binary sequence to represent the revocation state for each credential, where the i-th bit is 0 if the credential with revocation ID i is revoked and 1 otherwise. We compress this binary sequence into a single hash value using a Merkle tree: Bits 0, ..., 251 correspond to the first leaf, bits 252, ..., 503 correspond to the second leaf, etc. A Merkle tree of depth n can, therefore, represent 2^n · 252 credentials. To check the revocation state in a non-private way, a verifier would look at the credential's revocation ID i and inspect whether the bit at position i % 252 (rest of the integer division, modulo operation) of leaf number i \ 252 (integer division) is 0 or 1. Accordingly, based on a Circom-based implementation of Merkle proofs for integrity verification, extracting a number's k-th bit (see Figure B.9), and integer division with rest (see Figure B.8), the holder can prove in a zk-SNARK that the credential is not revoked, using the revocation ID that was already verified in the integrity verification part and the Merkle proof for the corresponding leaf as private input while only providing the revocation registry's current Merkle root as public output. The Merkle proof for a revocation registry that represents 252 · 2^n ≈ 2^(n+8) credentials involves computing n hashes, whereas the verification of the integer division with rest and the extraction of the k-th bit add only a relatively small number of constraints.
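The division-with-remainder building block (cf. Figure B.8) can be sketched as follows: quotient and remainder are computed as unconstrained hints ('<--') and then bound by range checks and a single multiplication. The template parameters and signal names are illustrative.

    pragma circom 2.0.0;

    include "circomlib/circuits/bitify.circom";
    include "circomlib/circuits/comparators.circom";

    // Verifies in = quotient * divisor + remainder with
    // 0 <= remainder < divisor and quotient < 2^bits.
    template DivWithRemainder(bits, divisor) {
        signal input in;
        signal output quotient;
        signal output remainder;

        quotient <-- in \ divisor;
        remainder <-- in % divisor;

        // range checks make the decomposition unique
        component qBits = Num2Bits(bits);
        qBits.in <== quotient;
        component rBits = Num2Bits(bits);
        rBits.in <== remainder;
        component lt = LessThan(bits);
        lt.in[0] <== remainder;
        lt.in[1] <== divisor;
        lt.out === 1;

        in === quotient * divisor + remainder;
    }

    // leaf number (quotient) and bit position (remainder) of a revocation ID
    component main = DivWithRemainder(32, 252);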
Holder binding (including secure elements)
Implementing private holder binding is relatively simple with general-purpose ZKPs (Rosenberg et al., 2022). In essence, the holder proves that he or she is able to digitally sign a random challenge provided by the verifier. The public key for holder binding and the signed challenge are not communicated to the verifier; instead, the holder only proves that he or she could privately provide an input for the circuit such that the verification of a digital signature on the challenge with the public binding key as incorporated in the credential is valid. We implemented the EdDSA-Poseidon digital signature scheme for efficient ZKP generation. Yet, storing private keys in software is not sufficient for activities in strongly regulated areas. For instance, buying SIM cards or opening bank accounts in the European Union typically requires a "high level of assurance" according to the eIDAS regulation that cannot be provided by keys stored in software, as they can be stolen or passed on relatively easily. Some security bodies also take the view that trusted execution environments like Android's Trusty do not provide sufficient security to achieve this level of assurance because several successful side-channel attacks and exploits were detected in the past. Only embedded secure elements with highly restricted functionality are deemed sufficiently secure to provide such high levels of assurance (e.g., German Federal Office for Information Security, 2019). Yet, the secure elements that common devices like laptops or mobile phones carry today only support a very limited range of cryptographic operations. For storing a private key and exposing the functionality of signing a challenge, this is the ECDSA algorithm. 0xPARC (2022b) implemented ECDSA verification with around 1.5 million R1CS constraints, as ECDSA is a common signature mechanism on blockchains such as Ethereum. This means that the straightforward approach to zk-SNARK-based ECDSA signature verification is around 360 times less efficient to prove than the EdDSA verification that we used in our sample implementation. Fortunately, there have been improvements that create some auxiliary private inputs to reduce the number of R1CS constraints by a factor of 10 (Personae Labs, 2022), i.e., it is only around 40 times as expensive as an EdDSA-Poseidon verification, plus some overhead for creating the auxiliary private inputs, which takes 2 seconds on the laptop used for the performance evaluations in Section 5.
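A minimal sketch of the EdDSA-Poseidon variant of this check, reusing circomlib's EdDSAPoseidonVerifier, is shown below. In the full circuit, the coordinates of the binding key are additionally constrained to equal the corresponding metadata leaves; the signal names are illustrative.

    pragma circom 2.0.0;

    include "circomlib/circuits/eddsaposeidon.circom";

    // Private holder binding: proves knowledge of a signature on the
    // verifier's fresh challenge under the credential's binding key,
    // without revealing the key or the signature.
    template HolderBinding() {
        signal input challenge;  // public, from the proof request
        signal input bindingAx;  // private, equals a metadata leaf
        signal input bindingAy;  // private, equals a metadata leaf
        signal input R8x;        // private signature values
        signal input R8y;
        signal input S;

        component sig = EdDSAPoseidonVerifier();
        sig.enabled <== 1;
        sig.Ax <== bindingAx;
        sig.Ay <== bindingAy;
        sig.R8x <== R8x;
        sig.R8y <== R8y;
        sig.S <== S;
        sig.M <== challenge;
    }

    component main {public [challenge]} = HolderBinding();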
It is important to note that with an adequate governance approach, the verification of embedded secure elements' X.509 certificate chains is not required inside a zk-SNARK because the whole certificate chain can be disclosed to the issuer, who publicly announces that it only issues attestations to key-pairs that are provably bound to secure elements from a trusted list of manufacturers as part of its governance policy. In essence, zk-SNARKs hence allow drawing a black box around the challenge-response mechanism used for holder binding and for avoiding replay attacks, with the opportunity to integrate any signature mechanism, in particular ones supported by the secure elements embedded in current generations of mobile phones. This addresses a key shortcoming of approaches such as Hyperledger AnonCreds with CL and BBS+ signatures, as the corresponding signature mechanisms are not supported by current generations of secure elements, leaving only the choice between lower levels of assurance or lower levels of privacy.
Credential linking and credential chains
Another aspect that is frequently needed in real-world applications is combining different attestations issued to the same person or entity. For instance, when entering a facility that requires a proof of vaccination, a verifier may demand evidence that the digital vaccination passport (which is typically not strongly bound to the individual because it does, for instance, not include biometric information) refers to the same person that just demanded access. As other government-issued documents such as ID-cards often have higher binding strength (level of assurance), potentially also through hardware binding, it may make sense to prove that the first name, last name, and potentially the date of birth on the ID-card and the vaccination passport coincide, yet without leaking the sensitive (and irrelevant) name and date of birth directly to the verifier. Another frequent case of credential linking involves a proof that the public key for holder binding on one of the credentials is the same as the public key corresponding to the issuer's signature on the other credential. This is the building block of credential chains in which responsibilities are delegated from larger actors to smaller actors, e.g., from a government-controlled certificate authority to institutions on the national level to institutions on the local level to employees of these institutions, who then sign an attestation on behalf of their institution and finally on behalf of the head institution on the national level. In this case, it may make sense to hide the issuer's public key for all credentials but the one at the top of the hierarchy, which would not be the case in a presentation without a certificate chain. Otherwise, it could be the case that although a VP selectively discloses only an individual's date of birth but not their address from a national ID, the place of living can be inferred from the local authority that issued the national ID.
Credential linking can be achieved relatively simply by creating a random number on the holder's side as private input and hashing the corresponding attributes on both attestations (e.g., the first name) with it inside the ZKP. One can then either observe equality of the resulting hashes directly or proceed with a separate proof about properties of these hashes' pre-images, as would be the case with LegoSNARK (Campanelli et al., 2019) or related approaches such as Damgård et al. (2021). We implemented the approach with direct equality checks, using the signed challenge from holder binding as randomness. For credential chaining, while proofs of integrity, non-revocation, and non-expiration are conducted for all intermediary credentials, the proof of holder binding is only conducted on the lowest level, as the binding keys on the other levels of the credential chain correspond to issuing keys and are, therefore, not shared with the holder. Note that the holder needs to store all intermediary credentials for proving the validity of such a credential chain, i.e., the issuers' privacy on the lower levels may become an issue in some cases.
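For illustration, such a salted commitment can be realized with a single Poseidon hash; each of the two VPs exposes one commitment as public output, and the verifier compares them for equality. This is a simplified sketch with illustrative signal names; in our implementation, the signed challenge serves as the salt.

    pragma circom 2.0.0;

    include "circomlib/circuits/poseidon.circom";

    // Outputs a salted commitment to a private attribute. Two VPs that
    // use the same holder-chosen salt yield equal commitments exactly
    // when the underlying attributes coincide (up to hash collisions).
    template BlindedCommitment() {
        signal input attribute; // private, e.g., the first name
        signal input salt;      // private, shared across the two VPs
        signal output commitment;

        component h = Poseidon(2);
        h.inputs[0] <== attribute;
        h.inputs[1] <== salt;
        commitment <== h.out;
    }

    component main = BlindedCommitment();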
Remaining metadata
There are only a few further checks that need to be conducted. One includes whether a credential makes its holder eligible for issuing chained credentials, which we implemented by incorporating a binary value in the 6th leaf. Another is to selectively disclose the hash corresponding to the schema and revocation registry (or their URLs) as described previously.
Complexity of the statements to prove
We specify the computational complexity for proof generation through the number of R1CS constraints, which is approximately proportional to proving time (see also Figure 3), decomposed by the different basic components. Table 1 summarizes the number of constraints associated with each of the basic components and the total number for seven scenarios of a VP, each of which involves the verification of integrity, non-expiration, non-revocation, and holder binding. For instance, a non-revocation proof with the Poseidon hash in a revocation registry representing 2 million credentials involves 13 Poseidon hashes and Selectors, an integer division with rest, and an extractKthBit component. We ignore the constraints associated with small operations, such as converting an attribute position into a Merkle path via the Num2Bits component, which adds only 1 constraint per hash in a Merkle proof. Note that combining several components in one circuit can even decrease the total number of non-linear constraints that are a proxy for proving complexity and in particular proving time (Albert et al., 2022), although no significant reduction is to be expected.
The default setting (I) uses the Poseidon hash, the EdDSA-Poseidon signature, and a revocation registry that represents 2 million credentials. (II) corresponds to a presentation of all 8 attributes, (III) to a revocation registry that represents more than 65 million credentials, and (IV) to a presentation of three chained credentials. (V) is the default scenario with EdDSA-Poseidon-based holder binding replaced by ECDSA-based holder binding, (VI) completely substitutes Poseidon and EdDSA-Poseidon by SHA256 and ECDSA, i.e., for the credential and the revocation registry, and (VII) involves the presentation of three chained credentials of type (VI).
Similar to selectively presenting more than one attribute or increasing the number of credentials represented by a revocation registry, further variations, such as increasing the number of leaves in the content tree to 32, only have a negligible impact on performance: This would merely add approximately 245 constraints for each revealed attribute (one hash and one Selector), i.e., proving complexity is only increased by a few percent compared to the digital attestations with 16 leaves.
Performance measurements
First, we tested the duration of proof generation on a laptop (Dell Precision 3571, Intel i9-12900H, 64 GB RAM, 2.5 GHz, 14 cores with a total of 20 threads on a Windows host, with 32 GB RAM, 7 cores, and 14 threads assigned to a Ubuntu 20.04 LTS virtual machine on which the tests were conducted) with different technology stacks (C++/Intel x86 Assembly, Node.js, and Rust) and different complexities of statements by implementing circuits with a variable number of Poseidon hashes. Figure 3 illustrates the corresponding results for a range between 240 and more than 3.5 million R1CS constraints. In scenario (I), i.e., when using a zk-SNARK-friendly hash function (Poseidon) and signature mechanism (EdDSA-Poseidon), proving time for a VP that performs all metadata checks (integrity, non-expiration, non-revocation, holder binding) and reveals a single selected attribute is on the order of 300 ms with C++/Intel x86 Assembly on the laptop, similar to Rosenberg et al. (2022)'s construction. When using Node.js and Rust, proof generation duration is on the order of 1 s and 2 s, respectively. For the three chained credentials in scenario (IV), total proof generation takes around 700 ms with C++/Intel x86 Assembly and 2 s with Node.js and Rust on the laptop. For scenarios (VI) and (VII) that build on a more established hash function (SHA256) and signature mechanism that allows integrating the secure elements of existing hardware, proving times are considerably larger, yet still acceptable: around 5 s resp. 15 s with C++/Intel x86 Assembly and around 60 s with Rust and Node.js. Proof sizes are on the order of a few hundred bytes, and verification takes around 1 s with Node.js and 3 ms in Rust, independent of the scenario.
As modern central processing units (CPUs) in mobile phones tend to have more computational power than a Raspberry Pi 4B (GadgetVersus, 2022), we also performed performance tests for the Node.js and Rust libraries on a Raspberry Pi 4B (Broadcom BCM2711, 4 GB RAM, 4 cores with 1.5 GHz and a total of 4 threads) to obtain an upper bound on proving time on mobile phones. The Intel x86 Assembly prover is not available on a CPU with the ARM instruction set. We display the results in Figure 3. We found that proving time is around one order of magnitude higher than on the laptop, i.e., around 6 s for scenarios (I) to (III) and 10 s for scenario (IV). A single proof of knowledge of an ECDSA signature with 163k constraints takes around 30 s with the Raspberry Pi.
Notably, as we illustrate in Figure 4, when using Rust, a significant share of the total duration of proof generation is used for loading the WASM file and the proving key, both on the laptop and on the Raspberry Pi. Read speed is naturally more limited for the Raspberry Pi. While total proof creation with the Raspberry Pi takes more than 5 s even for the simplest VP (I), we see that the pure computation time for the proof in Rust ("genProof") is only around 1 s, with the major time spent on loading the WASM code for witness generation ("loadWasm") and loading the proving key for proof generation ("loadZkey") from the file system. As modern smartphones tend to have considerably more computational power than a Raspberry Pi, and significantly higher reading speeds as they use solid state drives (SSDs) instead of an SD card, we hypothesize that proof generation time on a mobile phone when running the Rust code natively may be considerably smaller. In contrast, for Node.js, by far the largest share is required for the computation of the proof from the witness. Validating these results in future experiments may give further valuable insights into how the performance of zk-SNARK generation in these libraries may be improved. Moreover, re-using the witness generation and proving program once it is loaded repeatedly, for instance, in a VP that involves credential chains or multiple different credentials, may be able to reduce the total time of cryptographic proof generation.
We have also started investigating performance on mobile devices. When running proof generation in the Browser or a react-native app on a mobile phone, proof generation is on the order of 7 s for scenario (I) when using high-end mobile phones (Samsung Galaxy S10+ (8 cores, 1.9-2.7 GHz, with a total of 8 threads) and iPhone 13) and between 15 and 30 s for mid-range to low-budget phones (Samsung Galaxy A6, Samsung Galaxy A32, Sony Xperia X Compact). We noticed that both on the laptop and on mobile phones, the choice of the Browser can make a considerable difference, with Firefox performing around 50 % slower than Chrome and Edge on the laptop. Using scenarios (VI) and (VII) with SHA256 and ECDSA did not admit reasonable proof generation in a Browser, presumably because of their significantly larger computation and memory requirements. Consequently, we are currently working on deploying proof generation on the mobile phone with Rust, hoping for proving times that are only a few times longer than proof generation on a laptop and considerably faster than on the Raspberry Pi. Comparing our approach to the work by Rathee et al. (2022), which is also implemented with zk-SNARK-friendly primitives as far as the Ethereum virtual machine admits it, our simplest VP has around 16,000 constraints instead of 62,000. Proving time for a single-threaded smartphone application with these 62,000 constraints is stated to be 6 s, so we can expect around 1.5 seconds with a single-threaded prover for the simplest scenario (I) and 4 s for three chained credentials with zk-SNARK-friendly primitives. In particular, Rathee et al. (2022)'s performance evaluation suggests that when implementing a multi-threaded zk-SNARK prover on a mobile phone with suitable software, proving time can likely be pushed below the 1 second range and therefore be considered practical on a smartphone as of today, at least with zk-SNARK-friendly primitives. With this proving speed, an ECDSA verification required for hardware binding would take less than 10 s and can also be considered practical on a mobile phone.
Figure 4: Detailed performance tests on a Raspberry Pi 4B.
Finally, owing to the advantages of universal zk-SNARKs that we pointed out in Section 2, we tested proof generation with Plonk in Node.js and found that it performs around 50 times slower. Nonetheless, we encourage future experiments as there have been several improvements since, such as Turbo-Plonk and Ultra-Plonk, with opportunities for optimizations via lookup tables that may substantially accelerate the hashing and signature verification components that are responsible for the overwhelming share of constraints, particularly for less zk-SNARK-friendly primitives.
Scalable revocation
One of the core reasons for the limitations of RSA accumulator-based revocation when using CL signature-based anonymous credentials is that every credential needs to be associated with a large integer to do a proof of non-revocation. The size of the integer is important to ensure highly reliable privacy guarantees, so it cannot simply be reduced. Several optimizations of the underlying approach by Camenisch et al. (2009) have been suggested (Whitehat, 2021; Nguyen, 2005), and it seems that a combination of moving to pairing-based cryptography, splitting revocation registries without compromising herd privacy, and distinguishing cases where no, few, and many credentials are revoked can indeed make revocation registries that cover several million credentials practical (Curran and Whitehat, 2022). Yet, this approach is complex and has therefore not been implemented in the larger SSI projects thus far, and it seems that low client-side storage requirements can only be achieved by larger storage requirements on the accumulator side, which may be problematic to store on a blockchain.
In our implementation, an issuer assigns every credential that they issue a unique ID that is stored in one of the metadata fields. Our approach creates a proof of non-revocation by downloading the Merkle tree from the issuer's server (or a blockchain) and using it to create a proof that the bit at the specific leaf and position corresponding to the credential's private revocation ID is set to 1. One can readily see that the number of bits required to store the Merkle tree in uncompressed form is twice the number of bits of the leaves, i.e., around 4 million bits or 0.5 MB, a reduction by a factor of 1,000 compared to the RSA accumulator approach implemented in Hyperledger Aries. The zk-SNARK-based approach hence allows optimizing the information that needs to be stored on the holder side. On the other hand, considering the example of 65 million credentials in a single revocation list, the 15 MB may still be considered too large to be practical. Fortunately, the Merkle tree-based approach offers several further opportunities to optimize specific resources: If storage and computation are expensive where the revocation registry is stored (e.g., on a permissionless blockchain), it suffices to only record the changes ("witness deltas") to the revocation registry; for instance, an update transaction would record that m credentials with IDs id_0, ..., id_{m−1} have been revoked. The state would then include only log(N) bits per revoked credential, where N is the total number of credentials represented by the revocation registry, and not require the computation and storage of hashes, reducing the amount of information to be stored significantly.
If storage and computation on users' devices is the bottleneck, there is also an opportunity to store the full Merkle tree corresponding to the revocation registry on some servers, and to query a Merkle proof for a certain leaf directly from the storage of the revocation registry. However, this could compromise herd privacy through correlating the query for a specific leaf to a VP, such that a wallet would need to trust the corresponding service. Consequently, a more promising approach could be a hybrid form of the first and the second option: The wallet maintains a subtree of the Merkle tree (e.g., the left quarter when this includes the leaf in which the revocation bit for the credential under consideration is stored) locally by periodically pulling and applying witness deltas, and queries all Merkle nodes of the upper n − 2 layers from a blockchain node or server. In the case of a revocation registry with 2 million entries, this would mean that the wallet needs to store only 120 kB and download 120 kB of data for updating the local revocation information that is necessary to produce a timely proof of non-revocation, without compromising on herd privacy guarantees.
If, despite the availability of these trade-offs, storage and computation of the revocation registry are still too resource-intensive, the approach with general-purpose ZKPs also allows for splitting the revocation registry into smaller parts without sacrificing herd privacy, similar to the approach in Curran and Whitehat (2022): The issuer could then provide separate Merkle roots, tag credentials and Merkle roots such that it is clear to which registry they belong, and digitally sign them with a timestamp. Holders can then prove that their credential is non-revoked according to a Merkle root and a timestamp signed by the issuer, without disclosing the Merkle root or the signature itself, thus preserving herd privacy in the VP.
Designated verifier presentations
One issue that even anonymous credentials cannot solve directly is that verifiers cannot be prevented from transmitting information presented to them by holders to third parties. While impersonation attacks through replaying VPs can be avoided by using a random challenge in the proof request and demanding a proof of the capability to sign it with the private binding key (see also Section 2 and Section 4.7), the verifier can nonetheless collect the revealed attributes and even use the ZKP attached to the corresponding VP to provide evidence for the correctness of the data under consideration. Particularly in scenarios in which the corresponding attributes are highly sensitive, such as health-related personal information, plausible deniability or repudiability is desirable (Hardman, 2020). The availability of verifiable personal information is also one of the main reasons why the general idea of digital attestations in a digital wallet faces resistance from members of net activist groups such as the German Chaos Computer Club (Wölbert and Bleich, 2022), and why the less flexible and convenient hardware-based solutions are sometimes considered a more privacy-friendly alternative because they do not transmit cryptographically verifiable attributes but instead only create a trustworthy communication channel. Yet, facing the emergence of the TLS-based oracles that Maram et al. (2021) discuss in the context of digital identity infrastructures, this argument seems questionable, as trusted execution environments (Zhang et al., 2016) or ZKPs (Zhang et al., 2020) can be used by a relying party to prove that they received some data from a hardware-based eID in an encrypted and authenticated communication channel.
One narrowly-scoped approach to provide plausible deniability despite the availability of cryptographically verifiable data is using differential privacy, i.e., adding noise to the attributes or predicates before revealing them. Indeed, general-purpose ZKPs facilitate verifiable local differential privacy, i.e., they can prove the correctness of noise generation and that the noise was indeed added to the correct value of the attribute (Munilla-Garrido et al., 2022). However, in many scenarios, a trade-off in accuracy and, therefore, data quality will not be possible; particularly if a binary property or a sharp threshold (e.g., on age) is the basis for regulated authorization or process decisions.
A related, and arguably even more problematic, topic in the context of digital wallets is the tension field between users' wish to decide who they want to disclose their information to on the one hand, and security and privacy issues on the other hand. In fact, one of the key challenges for the adoption of self-managed identities involves controversies about how security risks arising from potential man-in-the-middle (MITM) attacks should be balanced with end users' informational self-determination and low entry barriers (Schellinger et al., 2022). Holders must verify the identity of the relying party prior to the VP. Omitting the identification of the verifier introduces significant security problems, with a prominent example being the German ID-Wallet, which implemented anonymous credentials using Hyperledger AnonCreds (Curran, 2022) based on CL signatures. The rollout of the wallet was cancelled after net activists pointed out that it did not identify the verifier and was therefore vulnerable to MITM attacks (Schellinger et al., 2022; Lissi, 2021). For instance, an attacker could compromise a QR code or link that a holder uses to start an interaction with a verifier, interact with a legitimate verifier to obtain their proof request (including the random challenge), and forward this proof request to the holder. As the holder believes that the attacker is the legitimate verifier, he or she creates a VP for this proof request and sends it to the attacker (who compromised the endpoint as referenced in the proof request). The attacker can forward the proof to the legitimate verifier. In other words, the attacker can use the VP to impersonate the holder. This scenario gets even more concerning when the VP is used to request a new credential from the verifier, as the attacker can make sure that this attestation is issued to him-/herself, making impersonation possible for future interactions without the need for another MITM attack. The potential presence of MITM attacks is therefore not only problematic for privacy reasons and in individual interactions but also reduces the security and level of assurance of identity documents in general.
Naturally, regulators demand reasonable protection against such replay or MITM attacks for scenarios or attestations that require high levels of assurance (European Commission, 2022). High bars on the certification of verifiers, however, inhibit the adoption and use of digital identities and users' control and informational self-determination. To give an example, the German eID as implemented on a smart-card enforces that identity attributes can only be communicated to verifiers who have a certificate issued by a German national CA. Getting these certificates is not only challenging because it requires the implementation and documentation of substantial security measures but also involves paying the certificate authority substantial amounts, with the outcome of the certification request being unclear. In the context of self-managed digital identities, this means that a digital wallet would not allow for sending a VP to the verifier unless the verifier can prove the possession of a corresponding certificate. This makes it difficult for self-managed digital identities to scale, for instance, to consumer-to-consumer interactions and to interactions with smaller businesses and organizations.
Projects that implement self-managed digital identities have hence suggested different ways to resolve this tension between adoption barriers owing to high certification requirements for verifiers on the one hand and security risks in the absence of verifier certification as in the eID on the other hand, for instance, by incorporating certification mechanisms that are relatively easy to access, such as using SSL certificates for the identification of the verifier (Bastian et al., 2022; Lissi, 2021). One can even argue that it may make sense to demand different levels of certification for different VPs; for instance, a VP that only proves that a holder is older than 18 years may be relatively unproblematic even in the presence of a MITM attack and, thus, require no or very little certification on the verifier's side, making it accessible also when buying alcohol at a bar or a small supermarket where the verifier is unlikely to have access to a sophisticated digital certificate. Yet, such a decision engine will always trade security against low entry barriers, and determining the required certification level for the verifier based on the type and origin of revealed attributes and predicates while involving the holder in the decision by asking them to accept certain risks (similar to circumventing an expired or non-existent SSL certificate in the Browser) seems challenging to implement, too; particularly considering that there is a large global set of verifiers and corresponding processes in which authorizations could be accumulated with a snowballing-like approach.
A very elegant solution to this tension field which, to the best of our knowledge, has not been proposed so far in the context of SSI and anonymous credentials, is designated verifier ZKPs. The core idea behind this construction is making sure that the ZKP in the VP is only convincing for the intended recipient, but not for any third party that the VP is potentially forwarded to (Jakobsson et al., 1996; Baum et al., 2022). In fact, many interactive ZKPs are designated verifier ZKPs because a third party to which the transcript of the interaction is forwarded cannot make sure that the transcript is complete, i.e., that the responses that were not satisfying were not removed from the transcript (Pass, 2003). However, non-interactive ZKPs like zk-SNARKs are designed to remove the inefficient, repeated interaction between prover and verifier, so they do not have this property by design. This manifests in the common reference string (CRS) (Canetti et al., 2007). Fortunately, with a simple trick, the designated verifier property can be added to a zk-SNARK-based VP by proving the following statement (Buterin, 2022): Either I possess credentials that satisfy all the requirements of the proof request, including the correctness of the revealed attributes, or I know the verifier's secret key. The verifier's secret key corresponds to the key-pair that is used for asymmetric encryption of the VP, such that it is ensured that only the designated verifier can decrypt the VP. Thus, if a MITM does not communicate its own key-pair but the legitimate verifier's, it cannot decrypt the VP to extract and forward the relevant part. On the other hand, if the attacker communicates its own public key such that it can decrypt and re-encrypt the VP, the true verifier knows that the designated verifier zk-SNARK can be created trivially by the owner of the unknown public key and will therefore not be convinced. By contrast, if the holder sends the designated verifier VP to the true verifier with the correct key-pair, the true verifier will be convinced because they know that they protected their own key-pair.
Simplified, designated verifier VPs can be implemented in our approach as follows: Let a_1, ..., a_n be all the assertions that the VP needs to satisfy (e.g., integrity, non-expiration, non-revocation, holder binding, etc.), and let us assume for simplicity that a_i === 1 for all i ∈ {1, ..., n} was asserted in the classic VP. By subsequently multiplying the a_i (we can only do one multiplication per step because of the underlying R1CS constraint system), the correctness of the VP can be asserted simply by demanding a === 1, where a = a_1 · a_2 · ... · a_n. Let b be the result of a component that verifies whether an input to the ZKP was the digital signature of the challenge specified in the proof request with the verifier's private key, where b = 1 represents a valid signature and b = 0 an invalid signature. Instead of demanding a === 1 as in the "classical VP", we then simply assert that a + b - a * b === 1, which is the arithmetization of demanding a == 1 OR b == 1, i.e., either the holder knows a valid VP, or it can generate a signature that only the verifier is supposed to be able to generate.
In Circom, a designated verifier VP could also be implemented differently. Taking the example of an EdDSA-Poseidon signature, one can already set a bit that determines whether or not the signature verification is done properly, such that b would be a private input to the signature verification. Note also that creating a designated verifier VP would not increase the proving complexity substantially, as the only modifications that need to be made are that an additional signature (which the holder cannot generate anyway) must be verified (e.g., 4,218 additional constraints with EdDSA-Poseidon), a few assertions will include an additional multiplication (arithmetization of an if statement), and an additional constraint is needed for the assertion a + b - a * b === 1.
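A self-contained sketch of the OR-combination is given below. Note that a and b appear here as free inputs only for brevity; in the full circuit, they are internal signals computed by the assertion and signature verification sub-circuits, as the statement would otherwise be trivially satisfiable.

    pragma circom 2.0.0;

    // Designated verifier wrapper: a = 1 iff all VP assertions hold,
    // b = 1 iff the prover signed the challenge with the verifier's key.
    template DesignatedVerifier() {
        signal input a;
        signal input b;

        // both values must be bits for the OR arithmetization to be sound
        a * (1 - a) === 0;
        b * (1 - b) === 0;

        // arithmetization of a == 1 OR b == 1
        a + b - a * b === 1;
    }

    component main = DesignatedVerifier();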
Privacy with respect to the issuer
Existing large-scale implementations of anonymous credentials use a so-called link secret for holder binding (Zundel, 2022). In essence, this works similarly to the private holder binding that we described in Section 4, with the main difference that the same secret key can be used in many credentials, yet every issuer includes another public key (more precisely, a blinded hash of a common secret) in the credential. This allows the holder to avoid being correlatable not only by verifiers (through the private holder binding that we described in Section 4) but also by issuers, which seems important when aiming for an increasing number of credentials and issuing parties in a digital identity ecosystem. Yet, the desired all-or-nothing non-transferability (Camenisch and Lysyanskaya, 2001; Feulner et al., 2022) that binding all credentials to the same secret key should enable was not met with this approach as deployed in, for instance, Hyperledger AnonCreds (Curran, 2022). The reason is that the holder cannot prove to the issuer that the link secret to be incorporated in blinded form in the credential is the same as the link secret in another credential that the holder used in a previous VP to the issuer. One solution to make sure that a "master" credential is not issued multiple times, or that policies regarding hardware binding are respected by an issuer, is presented in Maram et al. (2021). Taking into consideration the embedding of digital wallets in the political discourse, however, the liability and certification of issuers and corresponding governance frameworks may already be sufficient to ensure their honest behavior; leaving aside the fact that for such uniqueness, all issuers would need to agree on a single blockchain (or other database) on which their issuances and revocations are recorded. With zk-SNARKs and only a very small adaptation of the "standard" features of our approach to anonymous credentials that we described in Section 4, it is very easy to force a holder to use the same link secret for each of their credentials if desired: First, in a VP towards the prospective issuer, the holder additionally outputs a hash of the public binding key of another credential and the challenge specified in the proof request, similar to what the holder does when proving knowledge of a chain of credentials (see Section 4.8). The holder can then send the issuer the root hash of some small sub-tree of the metadata of the credential that they would like to have issued (e.g., including the public binding key and the expiration timestamp, with enough precision to have sufficient entropy) and a ZKP that the public key behind both hashes is the same. This essentially extends the code for proving knowledge of a pre-image in Figure B.7b, applied twice, with an equality proof for a part of the pre-image. The issuer can then include this sub-root in the credential that he or she signs, therefore provably binding the credential to the same key-pair as the credential that was previously presented, without even learning the corresponding binding key (link secret). With similar ideas, the issuer can generally include selected attributes from a holder's other credentials without learning what they are.
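The following simplified sketch illustrates the two hashes; for brevity, the metadata sub-tree is flattened into a single Poseidon hash, and the signal names are illustrative.

    pragma circom 2.0.0;

    include "circomlib/circuits/poseidon.circom";

    // hPrevious reproduces the hash of binding key and challenge that
    // was output in the earlier VP; subRoot is handed to the issuer
    // for inclusion in the new credential. Both hashes provably
    // contain the same (hidden) binding key.
    template SameBindingKey() {
        signal input bindingAx;  // private link secret (key x-coordinate)
        signal input bindingAy;  // private (key y-coordinate)
        signal input challenge;  // from the issuer's proof request
        signal input expiration; // metadata of the credential to be issued
        signal output hPrevious;
        signal output subRoot;

        component h1 = Poseidon(3);
        h1.inputs[0] <== bindingAx;
        h1.inputs[1] <== bindingAy;
        h1.inputs[2] <== challenge;
        hPrevious <== h1.out;

        component h2 = Poseidon(3);
        h2.inputs[0] <== bindingAx;
        h2.inputs[1] <== bindingAy;
        h2.inputs[2] <== expiration;
        subRoot <== h2.out;
    }

    component main {public [challenge]} = SameBindingKey();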
Arbitrary predicates
With a standard VP doing all the checks that a verifier can reasonably expect (integrity, non-expiration, non-revocation, holder binding), all the meta-attributes and attributes are available as (private) inputs, i.e., parameters, for further predicates.
As an example that illustrates the generality of predicates that one can easily implement with general-purpose ZKPs, we implemented a polygon inbound proof: Given two coordinates x, y in the Euclidean plane (or, in approximation, on a small area on earth that can be considered flat), one can prove that for a given polygon as specified by the verifier in the proof request, (x, y) is inside (or not inside) the polygon. The implementation is straightforward given some C code that determines whether a point is inside or outside a given polygon (Franklin, 2006), see Figure B.10. Every vertex of the polygon contributes 333 constraints (mainly responsible are the 64 constraints for each of the 5 comparators). Consequently, an inbound/outbound proof for a given polygon with 50 vertices adds only 16,650 constraints when using comparators for 64 bits, which arguably allows sufficient precision.
There are several conceivable practical cases where an implementation of the polygon inbound/outbound proof can be useful. For demonstration, we included the coordinates of an individual's place of living in a national or regional ID and used them to prove that the individual lives in Bavaria, as an example of a certain city or federal state (see Figure 5). This predicate can be used to prove an authorization to claim certain benefits. Another example is regional energy markets where, based on its location, an intermittent source of green electricity like a rooftop solar plant can register to offer its electricity or flexibility (e.g., Antal et al., 2021; Mengelkamp et al., 2018). Research considers such markets, on which energy assets can register autonomously, a promising way to improve the share of renewables in the grid. Often, these markets are blockchain-based, which means that sensitive information must not be disclosed during the registration process, particularly for small assets owned by individuals. As we use Circom for the implementation of our prototype, we were also able to create a corresponding zk-SNARK verifier smart contract for Ethereum fully automatically.
Our prototype implementation provides a generic circuit that verifies the integrity of all metadata and content data as well as non-expiration, non-revocation, and holder binding. When all content data has been verified in the circuit, a novel predicate can be implemented very easily, as only the predicate with private inputs attribute[0] to attribute[7] needs to be implemented. Consequently, for a customized predicate that adds up the first, second, and fourth attribute, the only thing the verifier would need to implement is a new output signal returnValue and assign returnValue <== attribute[0] + attribute[1] + attribute[3].
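As a self-contained sketch (with the attributes as free inputs for brevity; in the generic circuit, they are wired from the integrity-checked content subtree), the complete predicate template reads:

    pragma circom 2.0.0;

    // Custom predicate: reveal only the sum of the first, second,
    // and fourth attribute instead of the attributes themselves.
    template SumPredicate() {
        signal input attribute[8];
        signal output returnValue;

        returnValue <== attribute[0] + attribute[1] + attribute[3];
    }

    component main = SumPredicate();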
Lastly, predicates can easily be extended to involve (meta-)attributes from multiple credentials. One approach that would minimize the verification effort would be creating a circuit that takes multiple credentials, the corresponding revocation lemmas, and the verifier's challenge as input and outputs a single proof that checks the validity of all input credentials and the result of the predicate. In this case, the verifier would only need to verify a single proof. Yet, this approach seems less modular than presenting each credential individually, revealing salted hashes of the attributes that are needed for computing the predicate. In a second step, the holder could then prove the correct computation of the predicate based on pre-image proofs for the salted hashes that were previously revealed. Note that this approach is similar to the one taken by LegoSNARK (Campanelli et al., 2019), with the exception that the initial VP that outputs blinded commitments to attributes is not based on general-purpose ZKPs but on CL or BBS+ anonymous credentials and a commitment, and only the pre-image and predicate computation proof as a second step is zk-SNARK-based in order to have the opportunity to compute arbitrary predicates across different credentials.
Trusted setup and flexibility of verifiable presentations
So far, our performance analyses have mostly considered Groth16 zk-SNARKs. One significant shortcoming of this type of zk-SNARK is the circuit-specific trusted setup, i.e., different types of VPs would require different proving keys that need to be transferred to the holder's wallet before generating a VP. For the initial bootstrapping of a system of digital wallets that can create zk-SNARK-based VPs, a few pre-defined proving keys, associated with common presentation types (e.g., a single attribute or two attributes revealed, where non-expiration, non-revocation, and hardware binding are always verified), hard-coded into the wallet or available as a plug-in, may be sufficient. Yet, with a growing number of different credentials, a growing diversity of presentations (flexible number of attributes revealed), and verifier-specific predicates emerging, this solution is arguably not adequate anymore. One option could be to distribute the proving key with the proof request during the VP. In a local area network, the size of the proving key may not be inhibiting when a local bilateral connection was set up, but for wide area network, Bluetooth, and NFC-based transfer, several MB for a proving key (and even hundreds of MB to a few GB when using SHA256 for hashing and ECDSA for digital signatures) is arguably not practical. Moreover, in this case, a sophisticated mechanism is required that gives users an opportunity to verify that no more information is requested than the proof request displays on their screen. Certification of the proving keys that a wallet accepts, including a corresponding human-readable description, seems an obvious approach here; yet certification may make generic predicates less accessible to verifiers. A more attractive option, which also provides plenty of avenues for future research, is that after the relatively general verification of the metadata, the selected attributes and derived predicates could be described through a standardized format that allows for symbolic operations on the credential attributes, such that this description of the proof request allows for an automatic derivation of (1) the corresponding zk-circuit (constraint system) and (2) the corresponding prompt that asks users for their consent. However, with this dynamic approach and a correspondingly large number of VP types, a circuit-specific trusted setup that requires the transfer of the corresponding proving key seems impractical. On the other hand, general-purpose ZKPs with universal trusted setup, like Plonk and its successors, still involve a computationally and memory-intensive preprocessing step that would need to be conducted on users' mobile phones. Consequently, zk-SNARKs without trusted setup, such as transparent zk-SNARKs and in particular zk-STARKs, seem like a desirable long-term solution. Yet, to the best of our knowledge, the deployment of such proof systems on mobile phones has not been explored thus far.
Conclusion and Avenues for Future Research
This paper has highlighted several key areas where general-purpose ZKPs can address the shortcomings of existing implementations of privacy-oriented digital identity infrastructures. In particular, they can help address key requirements of anonymous credentials in self-managed digital identity projects that have previously been pointed out by various sources (e.g., Feulner et al., 2022; Schlatt et al., 2021; Sedlmeir et al., 2022; Schellinger et al., 2022; Hardman, 2020) but that are not present even in advanced solutions such as Hyperledger Aries (Young, 2022). We illustrated that the key features that anonymous credentials need to support broad adoption in practice can be implemented with relatively limited effort using zk-SNARKs. Related research, such as the work by Rosenberg et al. (2022), has already provided provable security for similar approaches, and we argue that the universality of the existing tooling makes implementations and audits much easier to perform and more battle-tested with general-purpose ZKPs than with approaches such as CL and BBS+ signatures.
As the main limitation of zk-SNARKs is arguably the computational complexity of proof generation (Thaler, 2022), we also conducted several performance tests. They suggest that even with our illustrative design and general-purpose zk-SNARK tooling, the speed of proof generation for data-minimizing VPs can be considered practical on today's mobile phones when using zk-SNARK-friendly cryptographic primitives. Moreover, we illustrated how general-purpose ZKPs not only improve on aspects such as private scalable revocation, hardware binding, credential chaining, and much more general predicates, but also bring unprecedented opportunities such as plausible deniability, and expand the solution space in the tension field between user control and low entry barriers on the one side and the risk of MITM attacks on the other. Finally, we pointed out that there are still many open questions that provide promising avenues for future research, such as facilitating practical performance also for data-minimal VPs in settings where zk-SNARK-friendly hashing algorithms and digital signatures are not accessible, or when using transparent zk-SNARKs on mobile phones. Moreover, to date, many zk-STARK implementations are not zero-knowledge, although upgrading them is relatively straightforward and the corresponding proofs are available (Ben-Sasson et al., 2018).
Our implementation and experiments suggest that future VPs could benefit from the very broad spectrum of predicate proofs that can be implemented. Yet, to make this practical, we need to standardize formats for constraint systems, witnesses, and proving and verification algorithms, so that developers can combine, in a modular way, different libraries that reflect the application domain and the hardware they will run on. With proof generation performance on mobile phones being one of the key weaknesses of the zk-SNARK-based approach, novel tools for efficiently loading the files associated with witness and proof generation, or for improving proving speed by leveraging GPUs (Ni and Zhu, 2022), are also critical. Even if zk-SNARK generation is not yet possible on all devices, or some users do not want to wait longer than they are used to, there are different opportunities for the short term, such as using Merkle tree-based selective disclosure without sophisticated data minimization, or outsourcing proof generation to a single trusted yet randomly selected third party or to several servers that collaboratively generate the zk-SNARK in a multi-party computation, such that many of them would need to collude to compromise the user's privacy (Ozdemir and Boneh, 2022).
Besides the need for further performance improvement, several other challenges outside the scope of zk-SNARKs remain for data-minimal digital identity infrastructures that provide promising avenues for future research. For instance, related work indicates that although user trust benefits from privacy features (Guggenberger et al., 2023), users struggle with understanding the new privacy capabilities that digital wallets and general-purpose ZKPs offer (Sartor et al., 2022). This highlights the need for further user acceptance studies. Considering the limited performance of generating ZKPs on a mobile phone, it would also be interesting to explore how proof generation can be prepared in the background while users are inspecting the attributes to be released prior to giving their approval, and which waiting times users deem acceptable depending on their privacy preferences.
Future research could also compare the practicality of the two core paradigms that general-purpose ZKPs facilitate: The first approach involves retrofitting existing credential infrastructures such as X.509 certificates by transpiling the corresponding verification libraries, as proposed in Delignat-Lavaud et al. (2016)'s approach, or by implementing novel verification libraries for such credential systems in one of the available DSLs. The second approach involves creating a novel infrastructure of certificates tailored towards multi-credential presentations and zk-SNARK-friendly cryptographic primitives, closer to what Hyperledger AnonCreds aims to achieve and what Maram et al. (2021), Rosenberg et al. (2022), and our work propose. For such novel frameworks, the heuristic approaches that we discuss in this paper should be formalized, for instance, to obtain provable privacy guarantees for the core components of VPs as well as for the more complex setup that needs to be considered when studying how combining designated verifier ZKPs and asymmetric encryption addresses MITM attacks without restrictive certification and identification of the relying party. Of course, hybrid approaches are also conceivable, similar to W3C verifiable credentials being chained to SSL certificates via publishing the corresponding key-pairs on a website, although this would arguably increase the complexity of creating the related wallets and verification backends. Future security analyses could also consider which additional anonymization on other layers is required to avoid correlation beyond VPs, similar to discussions on anonymous digital payment systems (Tinn and Dubach, 2021; Groß et al., 2021), which point out the need for network-level anonymization, e.g., via onion routing through the Tor network (Dingledine et al., 2004).
Additional research is also required to compare different approaches to revealing multiple attributes from several credentials in a VP, which is common in practical applications. As we discussed in Section 4, the holder could generate multiple ZKPs that reveal attributes from each credential, or a single ZKP that takes multiple credentials as input and verifies all of them. The corresponding trade-offs depend on the number of credentials involved and whether general-purpose ZKPs with or without trusted setup are used.
While privacy-focused digital identity management is particularly important for individuals, the application of zk-SNARK-based anonymous credentials is relevant far beyond. Digital identities for machines also inherit privacy issues because they are often associated with individuals. For instance, blockchain-based energy markets require fine-granular and, therefore, personally identifiable production and consumption information (Utz et al., 2022; Babel et al., 2022). Verifiability can typically be provided by signing the data with certified sensors, yet information such as the exact location of the sensor may be too sensitive to reveal, which is why, for instance, the polygon inbound predicate can be useful. Further applications where verifiable information for identification, authentication, and access control is desirable yet predicates may become relatively complex comprise blockchain-based e-voting (Delignat-Lavaud et al., 2016), privacy-focused digital currencies in regulated environments (Groß et al., 2021; Wüst et al., 2022), verifiable polling that offers plausible deniability through local differential privacy (Munilla-Garrido et al., 2022), as well as applications in the Metaverse (Dwivedi et al., 2022).
Privacy-focused and user-centric means of digital identification, authentication, and authorization verification have come a long way since Chaum's seminal paper. The pilot projects and political developments around SSI that we can observe look promising and provide a unique opportunity to safeguard individuals' privacy despite the increasing amount of verifiable identity information that will arguably be exchanged in the future digital economy. Yet, today's implementations of anonymous credentials in practice still have significant shortcomings, as they build on hand-crafted and, thus, highly performant but also functionality-limited, specialized ZKPs designed in the early 2000s. In the last few years, there has been impressive progress regarding the performance and ease of implementation of general-purpose ZKPs like zk-SNARKs in the context of privacy and scaling approaches in cryptocurrency projects. This makes a much more powerful technology stack available for implementing privacy-focused SSI systems, and opens it to a much larger community of developers. By illustrating the ways in which general-purpose ZKPs address pressing problems of today's privacy-oriented implementations of SSI, and that their performance can be considered practical, this paper aims to encourage stakeholders on all levels to leverage the potential of general-purpose ZKPs in their technical roadmap and to invest in exploring the corresponding novel opportunities in the design space of digital identity infrastructures.

    // Iteratively recover the input from its supposed binary representation.
38  runningBinarySum += binaryRepresentation[i] * powersOfTwo;
39  powersOfTwo += powersOfTwo;
40  }
41
42  // Make the remaining check to verify that the Oracle-provided binary representation is legitimate.
43  // Inspired by the Num2Bits implementation (https://github.com/iden3/circomlib/blob/master/circuits/bitify.circom).
44  runningBinarySum === in;
45
46  // Assign the output bit by using the last value of the runningOutputBitSum.

Listing .9: Circom implementation for retrieving the k-th bit from a large integer input with at most 253 bits. Note that the subsequent values of powersOfTwo in each loop cycle do not need to be constrained because they are known at compile time, which is why we can make it a var instead of a signal. The subsequent values for runningBinarySum also only need to be constrained once, in the end (line 42). This is because runningBinarySum is only a linear combination (with coefficients from powersOfTwo that are known at compile time) of signals, namely the elements of the binaryRepresentation array. In contrast, every update of the runningOutputBitSum is non-linear, so there must be new assignments (in an array of signals) in each individual step (see line 35). | 2023-01-04T06:42:01.826Z | 2023-01-02T00:00:00.000 | {
"year": 2023,
"sha1": "5b02d9d1023968bcc18f5b4a15efecc16900ddf2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5b02d9d1023968bcc18f5b4a15efecc16900ddf2",
"s2fieldsofstudy": [
"Computer Science",
"Law"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
13933840 | pes2o/s2orc | v3-fos-license | Parametrizing the lepton mixing matrix in terms of deviations from tri-bimaximal mixing
We propose a parametrization of the lepton mixing matrix in terms of an expansion in powers of the deviations of the reactor, solar and atmospheric mixing angles from their tri-bimaximal values. We show that unitarity triangles and neutrino oscillation formulae have a very compact form when expressed in this parametrization, resulting in considerable simplifications when dealing with neutrino phenomenology. The parametrization, which is completely general, should help to establish possible relations between the deviations of the reactor, solar and atmospheric mixing angles from their tri-bimaximal values, and hence enable models which predict such relations to be more directly compared to experiment.
Over the last decade neutrino physics has undergone a revolution with the measurement of neutrino mass and lepton mixing from a variety of solar, atmospheric and terrestrial neutrino oscillation experiments [1]. Lepton mixing is described by the 3 × 3 matrix [2]. The Particle Data Group (PDG) parameterization of the lepton mixing matrix (see e.g. [3]) is
$$U = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23} - c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23} - s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23} - c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23} - s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{pmatrix} P,$$
where $s_{13} = \sin\theta_{13}$, $c_{13} = \cos\theta_{13}$ with $\theta_{13}$ being the reactor angle, $s_{12} = \sin\theta_{12}$, $c_{12} = \cos\theta_{12}$ with $\theta_{12}$ being the solar angle, $s_{23} = \sin\theta_{23}$, $c_{23} = \cos\theta_{23}$ with $\theta_{23}$ being the atmospheric angle, δ is the (Dirac) CP violating phase which is in principle measurable in neutrino oscillation experiments, and $P = \mathrm{diag}(e^{i\alpha_1/2}, e^{i\alpha_2/2}, 1)$ contains additional (Majorana) CP violating phases $\alpha_1$, $\alpha_2$. Current data is consistent with the tri-bimaximal mixing (TBM) form
$$U_{\mathrm{TBM}} = \begin{pmatrix} \sqrt{2/3} & 1/\sqrt{3} & 0 \\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \\ 1/\sqrt{6} & -1/\sqrt{3} & 1/\sqrt{2} \end{pmatrix}.$$
Many models can account for TBM lepton mixing [5,6,7,8,9,10,11]. However, there is no convincing reason for TBM to be exact, and in the future deviations from it are expected to be observed. With this in mind it is clearly useful to develop a parametrization of the lepton mixing matrix in which such deviations are manifest, and in which the predictions of models for deviations from tri-bimaximal mixing can naturally be expressed. Such a parametrization must be model independent and completely general, so that it can be used by experimentalists and phenomenologists in performing analyses of neutrino experiments. It must also be sufficiently simple to be useful and yet accurate enough to be reliable.
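As a quick consistency check, which is ours and not part of the original text, inserting the tri-bimaximal values of the angles into the PDG parametrization above recovers the TBM form: with $\theta_{13} = 0$, $\sin\theta_{12} = 1/\sqrt{3}$ and $\theta_{23} = 45^\circ$, one finds for instance
$$c_{12}c_{13} = \sqrt{\tfrac{2}{3}}, \qquad -s_{12}c_{23} = -\tfrac{1}{\sqrt{6}}, \qquad s_{23}c_{13} = \tfrac{1}{\sqrt{2}},$$
and similarly for the remaining entries, with the phase δ dropping out because it always multiplies $s_{13}$.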
In this paper we discuss a parametrization of the lepton mixing matrix which possesses all of the above desirable features. The parametrization exploits the empirically observed closeness of lepton mixing to the TBM form, and is analogous to the Wolfenstein parametrization of quark mixing [12]. Just as the Wolfenstein parametrization is an expansion about the unit matrix, so the present parametrization is an expansion about the tri-bimaximal matrix. Unlike the Wolfenstein parametrization, we introduce three small parameters parametrizing the deviations of the reactor, solar and atmospheric angles from their tri-bimaximal values. The expansion works since all three parameters are empirically small, having magnitude of order the Wolfenstein parameter λ ≈ 0.227 or less. A related proposal to expand the lepton mixing matrix elements about the tri-bimaximal matrix elements, using a different parametrization from that introduced here, was discussed in [13]. Other related proposals to parametrize the lepton mixing matrix have been considered in [14,15,16,17,18]. Without loss of generality we define
$$s_{13} = \frac{r}{\sqrt{2}}, \qquad s_{12} = \frac{1}{\sqrt{3}}(1+s), \qquad s_{23} = \frac{1}{\sqrt{2}}(1+a),$$
where we have introduced the three real parameters r, s, a to describe the deviations of the reactor, solar and atmospheric angles from their tri-bimaximal values. Global fits of the conventional mixing angles [19] can be translated into the 2σ ranges 0 < r < 0.22, −0.11 < s < 0.04, −0.12 < a < 0.13. The empirical smallness of these parameters suggests that we consider an expansion of the lepton mixing matrix in powers of r, s, a about the tri-bimaximal form. To first order in r, s, a the lepton mixing matrix can be written
$$U \approx \begin{pmatrix} \sqrt{\frac{2}{3}}\left(1-\frac{1}{2}s\right) & \frac{1}{\sqrt{3}}(1+s) & \frac{1}{\sqrt{2}}\,r\,e^{-i\delta} \\ -\frac{1}{\sqrt{6}}\left(1+s-a+re^{i\delta}\right) & \frac{1}{\sqrt{3}}\left(1-\frac{1}{2}s-a-\frac{1}{2}re^{i\delta}\right) & \frac{1}{\sqrt{2}}(1+a) \\ \frac{1}{\sqrt{6}}\left(1+s+a-re^{i\delta}\right) & -\frac{1}{\sqrt{3}}\left(1-\frac{1}{2}s+a+\frac{1}{2}re^{i\delta}\right) & \frac{1}{\sqrt{2}}(1-a) \end{pmatrix} P.$$
As in the Wolfenstein parametrization, the above parametrization of the lepton mixing matrix avoids the introduction of mixing angles, instead dealing directly with elements of the mixing matrix. Accordingly the parametrization results in considerable simplifications when dealing with neutrino phenomenology. For example, the complex elements of the quark mixing matrix can be visualized using unitarity triangles [20], which, when normalized, only depend on two parameters. The same proves to be true when using the above parametrization of the lepton mixing matrix. The sides of the unitarity triangles enter into the neutrino oscillation formulae, and consequently these are also considerably simplified by the new parametrization. In the remainder of the paper we shall discuss unitarity triangles and neutrino oscillation formulae using the above parametrization. CP violation is described by the Jarlskog [21] invariant, which to leading order is $J \approx \frac{r}{6}\sin\delta$. Leptonic unitarity triangles [22] may be constructed using the orthogonality of different pairs of columns or rows of the mixing matrix. Only the opening angles, side lengths and areas of the triangles have physical significance. For example, the area of each unitarity triangle is $\mathcal{A} = \frac{1}{2}|J|$, and CP violation implies that the longest side of each unitarity triangle is smaller than the sum of the other two. Current solar, reactor and atmospheric experiments directly constrain the elements $U_{e2}$, $U_{e3}$ and $U_{\mu 3}$, which have a particularly simple parametrization in Eq. 6. The most important unitarity triangles should therefore include all of the elements $U_{e2}$, $U_{e3}$ and $U_{\mu 3}$. There are two such unitarity triangles, the $\nu_2\nu_3$ one [16] corresponding to the orthogonality of the second and third columns, and the $\nu_e\nu_\mu$ one [23] corresponding to the orthogonality of the first and second rows. Each of them has a simple expression in terms of the new parametrization, as we
now discuss.
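The leading-order value of J quoted above can be checked directly; this short derivation is ours and assumes the standard expression for the Jarlskog invariant in the PDG parametrization:
$$J = s_{12}c_{12}\,s_{23}c_{23}\,s_{13}c_{13}^{2}\,\sin\delta \;\approx\; \frac{\sqrt{2}}{3}\cdot\frac{1}{2}\cdot\frac{r}{\sqrt{2}}\,\sin\delta \;=\; \frac{r}{6}\,\sin\delta,$$
where the tri-bimaximal values $s_{12}c_{12} = \sqrt{2}/3$ and $s_{23}c_{23} = 1/2$ together with $s_{13} = r/\sqrt{2}$ have been used, and corrections are of second order in r, s, a.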
The $\nu_2\nu_3$ triangle in Fig. 1 corresponds to the unitarity relation $U_{e2}U^*_{e3} + U_{\mu 2}U^*_{\mu 3} + U_{\tau 2}U^*_{\tau 3} = 0$. To first order the sides of this unitarity triangle are given, up to an overall Majorana phase which merely rotates the triangle, by
$$S_1 \approx \frac{r}{\sqrt{6}}\,e^{i\delta}, \qquad S_2 \approx \frac{1}{\sqrt{6}}\left(1-\frac{s}{2}-\frac{r}{2}e^{i\delta}\right), \qquad S_3 \approx -\frac{1}{\sqrt{6}}\left(1-\frac{s}{2}+\frac{r}{2}e^{i\delta}\right).$$
Clearly $S_1 + S_2 + S_3 = 0$ to first order. Computing the invariant J from the sides of this triangle again yields Eq. 7. To first order the sides of this triangle are only sensitive to the solar and reactor parameters s and r and the phase δ, with the atmospheric parameter a only appearing at second order. One may rescale the sides by $S_3$. To first order the rescaled triangle is only sensitive to the reactor parameter r and the phase δ, which is the anticipated result. To second order the solar parameter s (but not the atmospheric parameter a) appears.
The other unitarity triangle of interest is the $\nu_e\nu_\mu$ triangle in Fig. 2, corresponding to the unitarity relation $U_{e1}U^*_{\mu 1} + U_{e2}U^*_{\mu 2} + U_{e3}U^*_{\mu 3} = 0$. To first order the sides of this unitarity triangle are given by
$$T_1 \approx -\frac{1}{3}\left(1+\frac{s}{2}-a+re^{-i\delta}\right), \qquad T_2 \approx \frac{1}{3}\left(1+\frac{s}{2}-a-\frac{r}{2}e^{-i\delta}\right), \qquad T_3 \approx \frac{r}{2}\,e^{-i\delta}.$$
Clearly $T_1 + T_2 + T_3 = 0$ to first order. Computing the invariant J from the sides of this triangle again yields Eq. 7. Unlike the previous case, the sides of this triangle are sensitive to the atmospheric parameter a at first order. One may rescale the sides by $T_1$. As in the previous case, to first order the rescaled triangle is only sensitive to the reactor parameter r and the phase δ, which is the anticipated result. To second order the solar parameter s and the atmospheric parameter a appear. We now turn to the application of the parametrization in Eq. 4 to neutrino oscillations. Let us denote by $P_{\alpha\beta} = P(\nu_\alpha \to \nu_\beta)$ the probability of transition from a neutrino flavour α to a neutrino flavour β. Then, expanding to second order in the parameters r, s, a and $\Delta_{21}$, where it is assumed that $\Delta_{21} \ll 1$ as in [24], we find considerably simplified vacuum oscillation probabilities.
The electron anti-neutrino disappearance probability relevant for a reactor experiment [25] is given to second order in r, s, a and $\Delta_{21}$, where $\Delta_{ij} = 1.27\,\Delta m^2_{ij} L/E$ with L the oscillation length in km, E the beam energy in GeV, and $\Delta m^2_{ij} = m_i^2 - m_j^2$ in eV$^2$. Note that this disappearance probability is independent of the solar and atmospheric parameters s, a, as well as the phase δ, to this order.
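The elided reactor formula can be reconstructed from the standard vacuum survival probability; this reconstruction is ours and uses $\Delta_{31} \approx \Delta_{32}$, which holds to the stated order:
$$P(\bar{\nu}_e \to \bar{\nu}_e) \;=\; 1 - \sin^{2}2\theta_{13}\,\sin^{2}\Delta_{31} - \cos^{4}\theta_{13}\,\sin^{2}2\theta_{12}\,\sin^{2}\Delta_{21} \;\approx\; 1 - 2r^{2}\sin^{2}\Delta_{31} - \frac{8}{9}\,\Delta_{21}^{2},$$
using $\sin^{2}2\theta_{13} \approx 2r^{2}$, $\sin^{2}2\theta_{12} \approx 8/9$ and $\sin^{2}\Delta_{21} \approx \Delta_{21}^{2}$; consistently with the statement above, neither s, a nor δ survives at this order.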
The electron neutrino appearance probability relevant for a forthcoming long baseline muon neutrino beam experiment [26] is also given to second order in r, s, a and $\Delta_{21}$. It is likewise independent of the solar and atmospheric parameters s, a and only depends on the reactor parameter r and the phase δ to this order. The reason is that each of its terms is second order in the parameters r, $\Delta_{21}$, so any deviations from tri-bimaximal solar or atmospheric mixing only appear at third order. The muon neutrino disappearance probability, given to second order in r, s, a and $\Delta_{21}$, is clearly sensitive to deviations from tri-bimaximal mixing, since all three parameters r, s, a and the phase δ appear. For example, the prospects for measuring deviations from maximal atmospheric mixing in the next generation of long baseline muon neutrino beam experiments have recently been discussed [27]. Similarly, the tau neutrino appearance probability is given to second order in r, s, a and $\Delta_{21}$. We emphasize that the parametrization discussed here is completely general and is not based on the ansatz of tri-bimaximal mixing, any more than the Wolfenstein parametrization [12] is based on the ansatz that the quark mixing matrix is equal to the unit matrix. Just as the Wolfenstein parametrization is an expansion about the unit matrix, so this parametrization is an expansion about the tri-bimaximal matrix. Unlike the Wolfenstein parametrization, there are three small parameters r, s, a parametrizing the reactor, solar and atmospheric deviations from tri-bimaximal mixing. The expansion works since the deviations from tri-bimaximal mixing are empirically small parameters, with r, s, a all having magnitude of order the Wolfenstein parameter λ ≈ 0.227 or less. Indeed these parameters are sufficiently small that the first order approximation is accurate enough for many purposes, resulting in quite a simple looking lepton mixing matrix in Eq. 6, for example. Unitarity triangles and neutrino oscillation formulae also have a very simple form when expressed in this parametrization.
The three parameters r, s, a are not determined at the present time, and it is even possible that one or more of them (possibly all of them) are zero, although this seems a priori unlikely. However, as mentioned, many speculations appear in the literature as to the origin and nature of tri-bimaximal mixing and the deviations from it, and these speculations naturally find expression in this parametrization. For example, certain classes of unified flavour models [5] predict a sum rule which relates s to r and δ, namely $s \approx r\cos\delta$, where $r \approx \lambda/3$ and $a = O(\lambda^2)$. Alternatively it has been suggested [16] that trimaximal solar mixing is exact, s = 0, with $a \approx -\frac{1}{2}r\cos\delta$ and r unspecified. Clearly an important goal of the next generation of neutrino experiments must be to show that the parameters r, s, a differ from zero. Subsequent high precision neutrino experiments will then be required to accurately measure the values of the parameters r, s, a, as well as δ, to investigate their possible relationships to each other and to the Wolfenstein parameter λ.
The second order corrections $\Delta S_i$ to the unscaled sides of the $\nu_2\nu_3$ unitarity triangle are given in Eq. 9. The second order corrections to the unscaled sides of the $\nu_e\nu_\mu$ unitarity triangle are given in Eq. 13. The second order corrections to the normalized sides of the $\nu_e\nu_\mu$ unitarity triangle are given in Eq. 15.
B Neutrino oscillations in matter
In this appendix we present the complete formulae for neutrino oscillations in the presence of matter of constant density, to second order in the quantities r, s, a and $\Delta_{21}$.
Figure 1: The $\nu_2\nu_3$ unitarity triangle. The angle γ is equal to the CP phase δ to first order. The unknown Majorana phases just rotate the triangle in the complex plane. The rescaled triangle is oriented as shown, with the opening angles unchanged, the horizontal side having unit length, and the shortest side having length r to first order. Currently 0 < r < 0.22 at 2σ, and the opening angles α, β and γ are all undetermined.
Figure 2: The $\nu_e\nu_\mu$ unitarity triangle. The angle γ′ is equal to the CP phase δ to first order. The unknown Majorana phases cancel. The rescaled triangle is oriented as shown, with the opening angles unchanged, the horizontal side having unit length, and the shortest side having length $\frac{3}{2}r$ to first order. Currently 0 < r < 0.22 at 2σ, and the opening angles α′, β′ and γ′ are all undetermined.
The second order corrections to the normalized sides of the $\nu_2\nu_3$ unitarity triangle are given in Eq. 11. | 2007-10-15T12:40:33.000Z | 2007-10-02T00:00:00.000 | {
"year": 2007,
"sha1": "ba952e1d833a7bf6e9b3b3f5ab13774c3ddd25ab",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2007.10.078",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "d6a07b95cf2bb9b1a2fdba5c707f528a70569a4e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
117924860 | pes2o/s2orc | v3-fos-license | On the geometry of Prüfer intersections of valuation rings
Let $F$ be a field, let $D$ be a subring of $F$ and let $Z$ be an irreducible subspace of the space of all valuation rings between $D$ and $F$ that have quotient field $F$. Then $Z$ is a locally ringed space whose ring of global sections is $A = \bigcap_{V \in Z}V$. All rings between $D$ and $F$ that are integrally closed in $F$ arise in such a way. Motivated by applications in areas such as multiplicative ideal theory and real algebraic geometry, a number of authors have formulated criteria for when $A$ is a Prüfer domain. We give geometric criteria for when $A$ is a Prüfer domain that reduce this issue to questions of prime avoidance. These criteria, which unify and extend a variety of different results in the literature, are framed in terms of morphisms of $Z$ into the projective line ${\mathbb{P}}^1_D$.
Introduction
A subring V of a field F is a valuation ring of F if for each nonzero $x \in F$, x or $x^{-1}$ is in V; equivalently, the ideals of V are linearly ordered by inclusion and V has quotient field F. Although the ideal theory of valuation rings is straightforward, an intersection of valuation rings in F can be quite complicated. Indeed, by a theorem of Krull [17, Theorem 10.4], every integrally closed subring of F is an intersection of valuation rings of F. In this article, we describe a geometrical approach to determining when an intersection A of valuation rings of F is a Prüfer domain, meaning that for each prime ideal P of A, the localization $A_P$ is a valuation ring of F. Whether an intersection of valuation rings is Prüfer is of consequence in multiplicative ideal theory, where Prüfer domains are of central importance, and in real algebraic geometry, where the real holomorphy ring is a Prüfer domain that expresses properties of fields involving sums of squares; see the discussion below. Over the past eighty years, Prüfer domains have been extensively studied from ideal-theoretic, homological and module-theoretic points of view; see for example [6,7,9,14,16].
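A standard example, which is ours rather than the paper's, may help fix ideas. Take $D = \mathbb{Z}$ and $F = \mathbb{Q}$: the valuation rings of $\mathbb{Q}$ containing $\mathbb{Z}$ are $\mathbb{Q}$ itself and the localizations $\mathbb{Z}_{(p)}$ at primes p, and the holomorphy ring of the whole space is
$$A \;=\; \bigcap_{p\ \mathrm{prime}} \mathbb{Z}_{(p)} \;=\; \mathbb{Z},$$
a Prüfer (indeed Bézout) domain, since every localization of $\mathbb{Z}$ at a prime ideal is a valuation ring of $\mathbb{Q}$.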
Throughout the paper F denotes a field, D is a subring of F that need not have quotient field F, and Z is a subspace of the Zariski-Riemann space X of F/D, the space of all valuation rings of F that contain D. The topology on X is given by declaring the basic open sets to be those of the form $\{V \in X : t_1, \ldots, t_n \in V\}$, where $t_1, \ldots, t_n \in F$. We assume for technical convenience that $F \in Z$. With this notation fixed, the focus of this article is the holomorphy ring $A = \bigcap_{V \in Z} V$ of the subspace Z. (The terminology "holomorphy ring" is due to Roquette [24, p. 362]: viewing Z as consisting of places rather than valuation rings, the elements of A are precisely the elements of F that have no poles, i.e., do not have value infinity, at the places in Z.) Such a ring is integrally closed in F, and, as noted above, every ring between D and F that is integrally closed in F occurs as the holomorphy ring of a subspace of X. In general it is difficult to determine the structure of A from properties of Z, topological or otherwise; see [20,21,22], where the emphasis is on the case in which D is a two-dimensional Noetherian domain with quotient field F. In this direction, there are a number of results that are concerned with when the holomorphy ring A is a Prüfer domain with quotient field F. Geometrically, this is equivalent to Spec(A) being an affine scheme in X. Moreover, by virtue of the Valuative Criterion for Properness, A is a Prüfer domain with quotient field F if and only if there are no nontrivial proper birational morphisms into the scheme Spec(A), an observation that motivates Temkin and Tyomkin's notion of Prüfer algebraic spaces [30].
We show in this article that the morphisms of Z (viewed as a locally ringed space) into the projective line $\mathbb{P}^1_D$ determine whether the holomorphy ring A of Z is a Prüfer domain. A goal in doing so is to provide a unifying explanation for an interesting variety of results in the literature. By way of motivation, and because we will refer to them later, we recall these results here.
(1) Perhaps the earliest result in this direction is due to Nagata [18, (11.11)]: When Z is finite, then the holomorphy ring A of Z is a Prüfer domain with quotient field F .
(2) Gilmer [10, Theorem 2.2] shows that when f is a nonconstant monic polynomial over D having no root in F and each valuation ring in Z contains the set $S := \{1/f(t) : t \in F\}$, then A is a Prüfer domain with torsion Picard group and quotient field F. Rush [25, Theorem 1.4] has since generalized this by allowing the polynomial f to vary with the choice of t, but at the (necessary) expense of requiring the rational functions in S to have certain numerators other than 1. Gilmer was motivated by a special case of this theorem due to Dress [4], which states that when the field F is formally real (meaning that −1 is not a sum of squares), then the subring of F generated by $\{(1 + t^2)^{-1} : t \in F\}$ is a Prüfer domain with quotient field F whose set of valuation overrings is precisely the set of valuation rings of F for which −1 is not a square in the residue field. In the literature of real algebraic geometry, the Prüfer domain thus constructed is the real holomorphy ring of F/D. The fact that such rings are Prüfer has a number of interesting consequences for real algebraic geometry and sums of powers of elements of F; see for example Becker [1] and Schülting [27]. These rings are also the only known source of Prüfer domains having finitely generated ideals that cannot be generated by two elements, as was shown by Schülting [26] and Swan [31]; the related literature on this aspect of holomorphy rings is discussed in [23]. The notion of existential closure leads to more general results on Prüfer holomorphy rings in function fields. For references on this generalization, see [19].
(3) Roquette [24, Theorem 1] proves that when there exists a nonconstant monic polynomial $f \in A[T]$ which has no root in the residue field of V for each valuation ring $V \in Z$ (i.e., the residue fields are "uniformly algebraically non-closed"), then A is a Prüfer domain with torsion Picard group and quotient field F. Roquette developed these ideas as a general explanation for his Principal Ideal Theorem, which states that the ring of totally p-integral elements of a formally p-adic field is a Bézout domain; that is, every finitely generated ideal is principal [24, p. 362]. In particular, if there is a bound on the size of the residue fields of the valuation rings in Z, then A is a Bézout domain [24, Theorem 3]. Motivated by just such a situation, Loper [15] independently proved similar results in order to apply them to the ring of integer-valued polynomials of a domain R with quotient field F. (4) In [23] it is shown that when the holomorphy ring A of Z contains a field of cardinality greater than that of Z, then A is a Bézout domain.
In this article we offer a geometric explanation for these results that reduces all the arguments to a question of homogeneous prime avoidance in the projective line $\mathbb{P}^1_D$. The standing assumption that F is one of the valuation rings in Z guarantees that Z is an irreducible space; irreducibility in turn guarantees that $\mathcal{O}_Z$ is a sheaf. (Note that since we are interested in the ring $A = \bigcap_{V \in Z} V$, the assumption that $F \in Z$ is no limitation.) When considering irreducible subspaces Y of X, we similarly treat Y as a locally ringed space with structure sheaf defined in this way.
By a morphism we always mean a morphism in the category of locally ringed spaces. If X and Y are locally ringed spaces with fixed morphisms $\alpha : X \to \mathrm{Spec}(D)$ and $\beta : Y \to \mathrm{Spec}(D)$, then a morphism $\varphi : X \to Y$ is a D-morphism if $\beta \circ \varphi = \alpha$. Thus when considering D-morphisms from Z to X, with X a D-scheme, we always assume that the structure morphism $Z \to \mathrm{Spec}(D)$ is the one defined above.
Morphisms into projective space
In this section we describe the D-morphisms of Z into projective space by proving an analogue of the fact that morphisms from schemes into projective space are determined by invertible sheaves. Our main technical device in describing such morphisms is the notion of a projective model, as defined in [32, Chapter VI, §17]. Let $t_0, \ldots, t_n$ be nonzero elements of F, and for each $i = 0, 1, \ldots, n$, define $D_i = D[t_0/t_i, \ldots, t_n/t_i]$ and $U_i = \mathrm{Spec}(D_i)$. Then the projective model of F/D defined by $t_0, \ldots, t_n$ is $X = \{(D_i)_P : P \in \mathrm{Spec}(D_i),\ i = 0, 1, \ldots, n\}$. The projective model X is a topological space whose basic open sets are of the form $\{R \in X : u_0, \ldots, u_m \in R\}$, where $u_0, \ldots, u_m \in F$, and which is covered by the open subsets $\{(D_i)_P : P \in U_i\}$, $i = 0, 1, \ldots, n$. Define a sheaf $\mathcal{O}_X$ of rings on X for each nonempty open subset U of X by $\mathcal{O}_X(U) = \bigcap_{R \in U} R$, and let the ring of sections of the empty set be the trivial ring with 0 = 1. Since X is irreducible, $\mathcal{O}_X$ is a sheaf and hence $(X, \mathcal{O}_X)$ is a scheme, and in light of the following remark, it is a projective scheme. Remark 2.1. If X is a projective model defined by n + 1 elements $t_0, \ldots, t_n$, then there is a closed immersion $X \to \mathbb{P}^n_D$: the construction of [11, p. 88] yields a morphism from X to $\mathbb{P}^n_D$, which by virtue of the way it is constructed is a closed immersion [28, Lemma 01QO].
Let $t_0, \ldots, t_n$ be nonzero elements of F, and let X be the projective model of F/D defined by $t_0, \ldots, t_n$. For each valuation ring V in Z, there exists $i = 0, 1, \ldots, n$ such that $t_j/t_i \in V$ for all j, and it follows that each valuation ring V in Z dominates a unique local ring R in the model X, meaning that $R \subseteq V$ and the maximal ideal of R is contained in the maximal ideal of V. The domination morphism $\delta = (d, d^{\#}) : Z \to X$ is defined by letting d be the continuous map that sends a valuation ring in Z to the local ring in X that it dominates, and by letting $d^{\#}$ be given on sections by the natural inclusions $\mathcal{O}_X(U) \subseteq \mathcal{O}_Z(d^{-1}(U))$. Let $\gamma : X \to \mathbb{P}^n_D$ be the closed immersion defined in Remark 2.1, and let $\delta : Z \to X$ denote the domination morphism. Then we say that the D-morphism $\gamma \circ \delta$ is the morphism defined by $t_0, \ldots, t_n$. We show in Proposition 2.3 that each D-morphism $Z \to \mathbb{P}^n_D$ arises in this way. Our standing assumption that $F \in Z$ is used in a strong way here, in that the proposition relies on a lemma which shows that the D-morphisms from Z into projective space are calibrated by the inclusion morphism $\mathrm{Spec}(F) \to Z$.
We claim that $\varphi|_Y = \gamma|_Y$. Since U is affine and Y is a locally ringed space, the morphisms $\varphi|_Y$ and $\gamma|_Y$ are equal if and only if they induce the same map on rings of sections. The open sets of this form are a cover of Z, and we have shown that φ and γ restrict to the same morphism on each of these open sets, so we conclude that $\varphi = \gamma$. It is straightforward to verify that $\varphi \circ \iota = \gamma \circ \iota$ if and only if $f(F) = g(F)$ and $f^{\#}_{\eta} = g^{\#}_{\eta}$, so the lemma follows.
Proof. Write $\varphi = (f, f^{\#})$, let $\eta = f(F)$, and let $S = \mathbb{P}^n_D = \mathrm{Proj}(D[T_0, \ldots, T_n])$. For each $i = 0, \ldots, n$, let $U_i$ be the open affine set $S_{T_i}$, so that $S = U_0 \cup \cdots \cup U_n$. Let $\alpha = (a, a^{\#}) : \mathrm{Spec}(F) \to S$ be the composition of φ with the canonical morphism $\mathrm{Spec}(F) \to Z$, and note that for each i, $a^{\#}_{U_i}(s) = f^{\#}_{\eta}(s)$ for all $s \in \mathcal{O}_S(U_i)$. Since α is a morphism of schemes into projective n-space over D, there exist $t_0, \ldots, t_n \in F$ such that for each i, j, $f^{\#}_{U_i}(T_j/T_i) = t_j/t_i$; see the proof of [11, Theorem II.7.1, p. 150]. Let X be the projective model of F/D defined by $t_0, \ldots, t_n$. Then $t_0, \ldots, t_n$ can be viewed as global sections of an invertible sheaf on X that is the image of the twisting sheaf $\mathcal{O}(1)$ of S. There is then, by [11, Theorem 7.1, p. 150] and its proof, a unique D-morphism $\gamma = (g, g^{\#}) : X \to S$ such that $g^{\#}_U = f^{\#}_U$ for each open set U of S, and such that $g : X \to S$ is the continuous map that for each $i = 0, \ldots, n$ sends the equivalence class of a prime ideal P in $\mathrm{Spec}(D_i)$ to its image in S. Remark 2.5. By Lemma 2.2, the D-morphisms $Z \to \mathbb{P}^n_D$ are determined by their composition with the morphism $\mathrm{Spec}(F) \to \mathbb{P}^n_D$. Conversely, by Corollary 2.4, each D-morphism $\mathrm{Spec}(F) \to Z$ lifts to a unique morphism $Z \to X$. Thus the D-morphisms $Z \to \mathbb{P}^n_D$ are in one-to-one correspondence with the F-valued points of $\mathbb{P}^n_D$.
A geometrical characterization of Prüfer domains
We show in this section that if Z has the property that every D-morphism $Z \to \mathbb{P}^1_D$ of locally ringed spaces factors through an affine scheme, then the holomorphy ring A of Z is a Prüfer domain. A special case in which this is satisfied is when there is a homogeneous polynomial $f(T_0, T_1)$ of positive degree d such that the image of each such morphism is contained in $(\mathbb{P}^1_D)_f$. In this case, we show that the Prüfer domain A has torsion Picard group.
Theorem 3.1. The ring A is a Prüfer domain with quotient field F if and only if every D-morphism $Z \to \mathbb{P}^1_D$ factors through an affine scheme. Proof. Suppose A is a Prüfer domain, and let $\varphi : Z \to \mathbb{P}^1_D$ be a D-morphism. By Proposition 2.3, there exists a projective model X of F/D and a D-morphism $\gamma : X \to \mathbb{P}^1_D$ such that $\varphi = \gamma \circ \delta$, where $\delta : Z \to X$ is the domination morphism. Since A is a Prüfer domain with quotient field F, every localization of A is a valuation domain and hence dominates a local ring in X. Since every valuation ring in Z contains A, it follows that φ factors through the affine scheme Spec(A).
Conversely, suppose that every D-morphism $Z \to \mathbb{P}^1_D$ factors through an affine scheme. Let P be a prime ideal of A. To prove that $A_P$ is a valuation domain with quotient field F, it suffices to show that for each $0 \neq t \in F$, $t \in A_P$ or $t^{-1} \in A_P$. Let $0 \neq t \in F$, and let X be the projective model of F/D defined by 1, t. Then by Remark 2.1 there is a closed immersion of X into $\mathbb{P}^1_D$; let $\varphi = (f, f^{\#}) : Z \to \mathbb{P}^1_D$ be the D-morphism that results from composing this closed immersion with the domination morphism $Z \to X$. In particular, $\nu = f(F)$ denotes the image of the generic point. By assumption there is a ring R and D-morphisms $\delta = (d, d^{\#}) : Z \to \mathrm{Spec}(R)$ and $\gamma = (g, g^{\#}) : \mathrm{Spec}(R) \to \mathbb{P}^1_D$ through which φ factors; we may assume by Lemma 2.2 that R is a subring of F and that δ is the domination morphism. Then since R is the ring of global sections of Spec(R) and A is the ring of global sections of Z, it follows that $R \subseteq A$, from which one deduces that $t \in A_P$ or $t^{-1} \in A_P$. This proves that A is a Prüfer domain with quotient field F. Nagata's theorem discussed in (1) of the introduction then follows from Prime Avoidance. In fact, when Z is finite, A is a Bézout domain: if M is a maximal ideal of A, then $A_M$ is a valuation domain, but since Z is finite, $A_M = \bigcap_{V \in Z} VA_M$, which, since $A_M$ is a valuation domain, forces $A_M = V$ for some $V \in Z$. Therefore, A has only finitely many maximal ideals, so that every invertible ideal is principal, and hence A is a Bézout domain.
In Theorem 3.5, we give a criterion for when A is a Prüfer domain with torsion Picard group. In this case, the D-morphisms $Z \to \mathbb{P}^1_D$ not only factor through an affine scheme, but have image in an affine open subscheme of $\mathbb{P}^1_D$. For lack of a precise reference, we note the following standard observation and the equivalence of three conditions, of which the third reads: (3) The image of the morphism $Z \to \mathbb{P}^n_D$ defined by $t_0, \ldots, t_n$ is in $(\mathbb{P}^n_D)_f$. Proof. Let $u = f(t_0, \ldots, t_n)$. First we claim that (1) implies (2). If $V \in Z$, then there is i such that $t_i$ divides each of $t_0, \ldots, t_n$ in V. It follows that when $\sum_i e_i = d$ for nonnegative integers $e_i$, then $t_0^{e_0} t_1^{e_1} \cdots t_n^{e_n} \in t_i^d V$. Thus by (1), $t_0^{e_0} t_1^{e_1} \cdots t_n^{e_n} \in uV$, so that $t_0^{e_0} t_1^{e_1} \cdots t_n^{e_n} \in uA$. Statement (2) now follows. To see that (2) implies (3), let $\gamma = (g, g^{\#}) : Z \to \mathbb{P}^n_D$ be the morphism defined by $t_0, \ldots, t_n$. By (2), $u = f(t_0, \ldots, t_n)$ is nonzero. Define R accordingly; by (2), $h(t_0, \ldots, t_n) \in (t_0, \ldots, t_n)^{de} A = u^e A$ for each homogeneous polynomial h of degree de, so that $R \subseteq A$. Now let $\beta : Z \to \mathrm{Spec}(R)$ be the induced domination morphism. We claim that $\gamma = \alpha \circ \beta$. Indeed, by Lemma 3.3, Spec(R) is an affine submodel of the projective model X of F/D defined by $t_0, \ldots, t_n$, and γ factors through X. Since β is the domination mapping, it follows that $\gamma = \alpha \circ \beta$, and hence the image of γ is contained in $\mathrm{Spec}(S) = (\mathbb{P}^n_D)_f$. Finally, to see that (3) implies (1), let $U = (\mathbb{P}^n_D)_f$ and let $\gamma = (g, g^{\#}) : Z \to \mathbb{P}^n_D$ be the morphism defined by $t_0, \ldots, t_n$. Since by (3), $Z \subseteq g^{-1}(U)$, then S, the ring of sections of U, is mapped via $g^{\#}_U$ into the holomorphy ring A of Z. But the image of $g^{\#}_U$ is R, so $R \subseteq A$, and hence every element of F of the form $t_i^d/u$ is an element of A, from which (1) follows.
Theorem 3.5. The ring A is a Prüfer domain with torsion Picard group and quotient field F if and only if for each A-morphism $\varphi : Z \to \mathbb{P}^1_A$ there is a homogeneous polynomial $f \in A[T_0, T_1]$ of positive degree such that the image of φ is in $(\mathbb{P}^1_A)_f$. Proof. The choice of the subring D of F was arbitrary, so for the sake of this proof we may assume without loss of generality that D = A and then apply the preceding results to A. Suppose that for each A-morphism $\varphi : Z \to \mathbb{P}^1_A$ there exists a homogeneous polynomial $f \in A[T_0, T_1]$ of positive degree such that the image of φ is in the affine subset $(\mathbb{P}^1_A)_f$. By Theorem 3.1, A is a Prüfer domain with quotient field F. Thus to prove that A has torsion Picard group, it suffices to show that for each two-generated ideal $(t_0, t_1)A$ of A, there exists $e > 0$ such that $(t_0, t_1)^e A$ is a principal ideal (see for example the proof of [10, Theorem 2.2]). Let $t_0, t_1 \in F$, and let $\varphi : Z \to \mathbb{P}^1_A$ be the morphism defined by $t_0, t_1$. Then by assumption, there exists a homogeneous polynomial $f \in A[T_0, T_1]$ of positive degree d such that the image of φ is contained in $(\mathbb{P}^1_A)_f$; by Lemma 3.4, $(t_0, t_1)^d A$ is then principal. Conversely, suppose A is a Prüfer domain with torsion Picard group and quotient field F, and let $\varphi : Z \to \mathbb{P}^1_A$ be an A-morphism. Then by Proposition 2.3 there exist $t_0, t_1 \in F$ such that φ is defined by $t_0, t_1$. Since A has torsion Picard group and quotient field F, there exists $d > 0$ such that $(t_0, t_1)^d A = uA$ for some $u \in (t_0, t_1)^d A$.
Since u is an element of $(t_0, t_1)^d A$, there exists a homogeneous polynomial $f \in A[T_0, T_1]$ of degree d such that $f(t_0, t_1) = u$, and hence by Lemma 3.4, the image of the morphism φ is contained in $(\mathbb{P}^1_A)_f$. For applications such as those discussed in (2) and (3) of the introduction, one needs to work with D-morphisms into the projective line over D, rather than A. This involves a change of base, but causes no difficulties when verifying that A is a Prüfer domain. However, the converse of Theorem 3.5 (which is not needed in the applications in (2) and (3) of the introduction) does not directly transfer. Corollary 3.6. If for each D-morphism $\varphi : Z \to \mathbb{P}^1_D$ there is a homogeneous polynomial $f \in D[T_0, T_1]$ of positive degree such that the image of φ is contained in $(\mathbb{P}^1_D)_f$, then A is a Prüfer domain with torsion Picard group and quotient field F. Proof. Let $\varphi : Z \to \mathbb{P}^1_A$ be a D-morphism, and let $\alpha : \mathbb{P}^1_A \to \mathbb{P}^1_D$ be the change of base morphism. By assumption there exists a homogeneous polynomial $f \in D[T_0, T_1]$ such that the image of $\alpha \circ \varphi$ is contained in $(\mathbb{P}^1_D)_f$. Then the image of φ is contained in $(\mathbb{P}^1_A)_f$, and the corollary follows from Theorem 3.1. Let n be a positive integer. An abelian group G is an n-group if each element of G has finite order and this order is divisible only by primes that also divide n. If A is a Prüfer domain with quotient field F, then the Picard group of A is an n-group if and only if for each $t \in F$ there exists $k > 0$ such that $(A + tA)^{n^k}$ is a principal fractional ideal of A [24, Lemma 1].
Remark 3.7. If each homogeneous polynomial f arising as in the statement of the corollary can be chosen with degree ≤ n (n fixed), then the Picard group of the Prüfer domain A is an n-group. For when $t \in F$ and $\varphi : Z \to \mathbb{P}^1_D$ is the D-morphism defined by 1, t, then with f the polynomial of degree ≤ n given by the corollary, Lemma 3.4 shows that $(A + tA)^n$ is a principal fractional ideal of A. In particular, when for each D-morphism $\varphi : Z \to \mathbb{P}^1_D$ there exists a linear homogeneous polynomial $f \in A[T_0, T_1]$ such that the image of φ is contained in $(\mathbb{P}^1_A)_f$, then the ring A is a Bézout domain with quotient field F. The next corollary is a stronger version of statement (4) in the introduction.
Corollary 3.8. If D is a local domain and Z has cardinality less than that of the residue field of D, then A is a Bézout domain with quotient field F .
Proof. Let $\varphi : Z \to \mathbb{P}^1_D$ be a D-morphism. For each $P \in \mathrm{Proj}(D[T_0, T_1])$, let $\Delta_P = \{d \in D : T_0 + dT_1 \in P\}$. Then all the elements of $\Delta_P$ have the same image in the residue field of D. Indeed, if $d_1, d_2 \in \Delta_P$, then $(d_1 - d_2)T_1 = (T_0 + d_1 T_1) - (T_0 + d_2 T_1) \in P$. If $T_1 \in P$, then since $T_0 + d_1 T_1 \in P$, this forces $(T_0, T_1) \subseteq P$, a contradiction to the fact that $P \in \mathrm{Proj}(D[T_0, T_1])$. Therefore $T_1 \notin P$, so that $d_1 - d_2 \in P \cap D \subseteq \mathfrak{m}$, the maximal ideal of D, which shows that all the elements of $\Delta_P$ have the same image in the residue field of D. Let X denote the image of φ in $\mathbb{P}^1_D$. Then since $|X| < |D/\mathfrak{m}|$, there exists $d \in D \setminus \bigcup_{P \in X} \Delta_P$, and hence $f(T_0, T_1) := T_0 + dT_1 \notin P$ for all $P \in X$. Thus the image of φ is in $(\mathbb{P}^1_D)_f$, and by Corollary 3.6 and Remark 3.7, A is a Bézout domain with quotient field F.
The following corollary is a small improvement of a theorem of Rush [25, Theorem 1.4]. Whereas the theorem of Rush requires that $1, t, t^2, \ldots, t^{d_t} \in f_t(t)A$, we need only that $1, t^{d_t} \in f_t(t)A$.
Proof. If A is a Prüfer domain with torsion Picard group and quotient field F, then for each $0 \neq t \in F$ there is $d_t > 0$ such that $(1, t)^{d_t} A$ is a principal fractional ideal of A. Since A is a Prüfer domain, local verification shows that $(1, t)^{d_t} A = (1, t^{d_t})A$, and it follows that there is a polynomial $f_t$ as required. To prove the converse, we use Theorem 3.5. Let $\varphi : Z \to \mathbb{P}^1_D$ be a D-morphism. Then by Proposition 2.3 there exists $0 \neq t \in F$ such that φ is defined by 1, t. By assumption, there is a polynomial $f_t$ whose homogenization $g_t(T_0, T_1)$ is a homogeneous form of positive degree. Then $1, t^{d_t} \in g_t(t, 1)A$, and by Lemma 3.4 the image of φ is in $(\mathbb{P}^1_A)_{g_t}$. By Theorem 3.5, A is a Prüfer domain with torsion Picard group and quotient field F. Here condition (b) states that A is a Prüfer domain and f(a) is a unit in A for each $a \in A$. As Rush points out, Gilmer's theorem discussed in (2) of the introduction follows quickly from the equivalence of (a) and (b) and Corollary 3.9; see the discussion on pp. 314-315 of [25]. Similarly, the results of Loper and Roquette described in (3) of the introduction also follow from Corollary 3.9 and the equivalence of (a) and (b). Thus all the constructions in (1)-(4) of the introduction are recovered by the results in this section.
The case where D is a local ring
This section focuses on the case where D is a local ring that is integrally closed in F. (By a local ring, we mean a ring that has a unique maximal ideal; in particular, we do not require local rings to be Noetherian.) In such a case, as is noted in the proof of Theorem 4.2, every proper subset of closed points of $\mathbb{P}^1_D$ is contained in an affine open subset of $\mathbb{P}^1_D$, a fact which leads to a stronger result than could be obtained in the last section. To prove the theorem, we need a coset version of homogeneous prime avoidance. The proof of the lemma follows Gabber-Liu-Lorenzini [8] but involves a slight modification to permit cosets. Lemma 4.1. (cf. [8, Lemma 4.11]) Let $R = \bigoplus_{i=0}^{\infty} R_i$ be a graded ring, and let $P_1, \ldots, P_n$ be incomparable homogeneous prime ideals not containing $R_1$. Let $I = \bigoplus_{i=0}^{\infty} I_i$ be a homogeneous ideal of R such that $I \not\subseteq P_i$ for each $i = 1, \ldots, n$. Then there exists $e_0 > 0$ such that for all $e \geq e_0$ and $r_1, \ldots, r_n \in R$, $I_e \not\subseteq \bigcup_{i=1}^{n} (P_i + r_i)$.
Proof. The proof is by induction on n. For the case n = 1, let s be a homogeneous element in $I \setminus P_1$, let $e_0 = \deg s$, let $e \geq e_0$ and let $t \in R_1 \setminus P_1$. Suppose that $r_1 \in R$ and $I_e \subseteq P_1 + r_1$. Then since $0 \in I_e$, this forces $r_1 \in P_1$ and hence $st^{e-e_0} \in I_e \subseteq P_1$, a contradiction to the fact that neither s nor t is in $P_1$. Thus $I_e \not\subseteq P_1 + r_1$. Next, let $n > 1$, and suppose that the lemma holds for $n-1$. Then since the $P_i$ are incomparable, $IP_1 \cdots P_{n-1} \not\subseteq P_n$, and by the case n = 1, there exists $f_0 > 0$ such that for all $f \geq f_0$ and $r_n \in R$, $(IP_1 \cdots P_{n-1})_f \not\subseteq P_n + r_n$. Also, by the induction hypothesis, there exists $g_0 > 0$ such that for all $g \geq g_0$ and $r_1, \ldots, r_{n-1} \in R$, $(IP_n)_g \not\subseteq \bigcup_{i=1}^{n-1} (P_i + r_i)$. Let $e_0 = \max\{f_0, g_0\}$, let $e \geq e_0$ and let $r_1, \ldots, r_n \in R$. Then in light of the above considerations, we may choose $a \in (IP_1 \cdots P_{n-1})_e \setminus (P_n + r_n)$ and $b \in (IP_n)_e \setminus \bigcup_{i=1}^{n-1} (P_i + r_i)$. Since $a \in P_i$ for each $i < n$ while $b \notin P_i + r_i$, and $b \in P_n$ while $a \notin P_n + r_n$, the element $a + b$ lies in $I_e$ but in none of the cosets $P_i + r_i$, which completes the induction. Theorem 4.2. Suppose that D is a local ring that is integrally closed in F, and that for each D-morphism $\varphi : Z \to \mathbb{P}^1_D$ there is a closed point of $\mathbb{P}^1_D$ not in the image of φ. Then A is a Prüfer domain with torsion Picard group and quotient field F. Proof. Let $S = D[T_0, T_1]$. By Corollary 3.6 it suffices to show that for each D-morphism $\varphi : Z \to \mathbb{P}^1_D$, there is a homogeneous polynomial $f \in S$ of positive degree such that the image of φ is in $(\mathbb{P}^1_D)_f$. To this end, let $\varphi : Z \to \mathbb{P}^1_D$ be a D-morphism. By assumption, there is a closed point $x \in \mathbb{P}^1_D$ not in the image of φ. Let $\pi : \mathbb{P}^1_D \to \mathrm{Spec}(D)$ be the structure morphism. Since π is a proper morphism, π is closed and hence π(x) is a closed point in Spec(D). Thus since D is local, π(x) is the maximal ideal $\mathfrak{m}$ of D. Let k be the residue field of D. Then, with Q the homogeneous prime ideal in S corresponding to x, we must have $\mathfrak{m} \subseteq Q$, and hence $\mathrm{Proj}(k[T_0, T_1])$ is isomorphic to a closed subset of $\mathbb{P}^1_D$ containing Q. Since a homogeneous prime ideal in $\mathrm{Proj}(k[T_0, T_1])$ is generated by a homogeneous polynomial in $k[T_0, T_1]$, it follows that there is a homogeneous polynomial $g \in S$ of positive degree d such that $Q = (\mathfrak{m}, g)S$. Since, as noted above, every prime ideal in $\mathbb{P}^1_D = \mathrm{Proj}(S)$ corresponding to a closed point in $\mathbb{P}^1_D$ contains $\mathfrak{m}$, it follows that every closed point in $\mathbb{P}^1_D$ distinct from x is contained in $(\mathbb{P}^1_D)_g$. Thus if every valuation ring in Z other than F dominates D, then the image of φ is contained in $(\mathbb{P}^1_D)_g$, which proves the theorem in this case. It remains to consider the case where Z also contains, in addition to the valuation ring F, valuation rings $V_1, \ldots, V_n$ that are not centered on the maximal ideal $\mathfrak{m}$ of D. Let $P_1, \ldots, P_n$ be the homogeneous prime ideals of S that are the images under φ of $V_1, \ldots, V_n$, respectively. Let $I = \mathfrak{m}S$. Since no $V_i$ dominates D, and since φ is a morphism of locally ringed spaces, $I \not\subseteq P_i$ for all $i = 1, \ldots, n$. We may assume $P_1, \ldots, P_k$ are the prime ideals that are maximal in the set $\{P_1, \ldots, P_n\}$. Then by Lemma 4.1, there exists $e > 0$ such that $I_{de} \not\subseteq \bigcup_{i=1}^{k} (P_i + g^e)$. Let h be a homogeneous element in $I_{de} \setminus \bigcup_{i=1}^{k} (P_i + g^e)$. Since $P_1, \ldots, P_k$ are maximal in $\{P_1, \ldots, P_n\}$, it follows that $h \in I_{de} \setminus \bigcup_{i=1}^{n} (P_i + g^e)$. Set $f = h - g^e$. Then $f \notin P_i$ for all i. In particular, $f \neq 0$, and hence f is homogeneous of degree de. Since $f \notin P_1 \cup \cdots \cup P_n$, then $P_1, \ldots, P_n \in (\mathbb{P}^1_D)_f$. Finally we show that every closed point of $\mathbb{P}^1_D$ distinct from x is in $(\mathbb{P}^1_D)_f$. Let L be a prime ideal in Proj(S) corresponding to a closed point distinct from x. Then $L \neq Q$, and to finish the proof, we need only show that $f \notin L$. As noted above, $\mathfrak{m} \subseteq L$, so if $f \in L$, then since $h \in \mathfrak{m}S$, we have $g^e \in L$. But then $Q = (\mathfrak{m}, g)S \subseteq L$, forcing Q = L since Q is maximal in Proj(S).
This contradiction implies that $f \notin L$, and hence every closed point of $\mathbb{P}^1_D$ distinct from x is in $(\mathbb{P}^1_D)_f$, which completes the proof. Remark 4.3. When the valuation rings in Z do not dominate D, the theorem can still be applied if there exists $Y \subseteq X$ containing F such that (a) each valuation ring in Y other than F dominates D, (b) each valuation ring in Z specializes to a valuation ring in Y, and (c) no D-morphism $\varphi : Y \to \mathbb{P}^1_D$ has every closed point in its image. For by the theorem the holomorphy ring of Y is a Prüfer domain with torsion Picard group and quotient field F. As an overring of the holomorphy ring of Y, the holomorphy ring of Z has these same properties also.
The following corollary shows how the theorem can be used to prove that real holomorphy rings can be intersected with finitely many non-dominating valuation rings and the result remains a Prüfer domain with quotient field F. In general an intersection of a Prüfer domain and a valuation domain need not be a Prüfer domain. For example, when D is a two-dimensional local Noetherian UFD with quotient field F and f is an irreducible element of D, then $D_f$ is a PID and $D_{(f)}$ is a valuation ring, but $D = D_f \cap D_{(f)}$, so that the intersection is not Prüfer. This example can be modified to show more generally that for this choice of D, there exist quasicompact schemes in X that are not affine.
Corollary 4.4. Suppose D is essentially of finite type over a real-closed field and that F and the residue field of D are formally real. Let H be the real holomorphy ring of F/D. Then for any valuation rings $V_1, \ldots, V_n \in X$ not dominating D, the ring $H \cap V_1 \cap \cdots \cap V_n$ is a Prüfer domain with torsion Picard group and quotient field F.
Proof. Each formally real valuation ring in X specializes to a formally real valuation ring dominating D (this can be deduced, for example, from [13, Theorem 23]). Let Y be the set of all the formally real valuation rings dominating D, let $Z = Y \cup \{F, V_1, \ldots, V_n\}$, and let $\varphi : Z \to \mathbb{P}^1_D$ be a D-morphism. Then the image of Y under φ is contained in $(\mathbb{P}^1_D)_f$, where $f(T_0, T_1) = T_0^2 + T_1^2$. Because $V_1, \ldots, V_n$ do not dominate D, they are not mapped by φ to closed points of $\mathbb{P}^1_D$. Thus the corollary follows from Theorem 4.2.
We include the last corollary as more of a curiosity than an application. Suppose that D has quotient field F . A valuation ring V in X admits local uniformization if there exists a projective model X of F/D such that V dominates a regular local ring in X. Thus if Spec(D) has a resolution of singularities, then every valuation ring in X admits local uniformization. When D is essentially of finite type over a field k of characteristic 0, then D has a resolution of singularities by the theorem of Hironaka, but when k has positive characteristic, it is not known in general whether local uniformization holds in dimension greater than 3; see for example [3] and [29].
Corollary 4.5. Suppose that D is a quasi-excellent integrally closed local Noetherian domain with quotient field F . If there exists a valuation ring in X that dominates D but does not admit local uniformization, and Y consists of all such valuation rings, then the holomorphy ring of Y is a Prüfer domain with torsion Picard group.
Proof. Let $Z = Y \cup \{F\}$, and let $\varphi : Z \to \mathbb{P}^1_D$ be a D-morphism. Then by Proposition 2.3, φ factors through a projective model X of F/D. Since Y is nonempty, the projective model X has a singularity, and thus, since D is quasi-excellent, the singular points of X are contained in a proper nonempty closed subset of X. In particular, there are closed points of X that are not in the image of the domination map $Z \to X$, and hence there are closed points of $\mathbb{P}^1_D$ that are not in the image of φ. Therefore, by Theorem 4.2, A is a Prüfer domain with torsion Picard group and quotient field F.
In particular, all the valuation rings that dominate D and do not admit local uniformization lie in an affine scheme in X. | 2014-08-22T16:59:39.000Z | 2014-08-22T00:00:00.000 | {
"year": 2014,
"sha1": "a97e8e8c05388ff4a10f24c05b1a1810c65a075f",
"oa_license": null,
"oa_url": "http://msp.org/pjm/2015/273-2/pjm-v273-n2-p05-s.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "a97e8e8c05388ff4a10f24c05b1a1810c65a075f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
219170784 | pes2o/s2orc | v3-fos-license | mirnaQC: a webserver for comparative quality control of miRNA-seq data
Abstract Although miRNA-seq is extensively used in many different fields, its quality control is frequently restricted to a Phred-score-based filter. Other important quality-related aspects, like microRNA yield, the fraction of putative degradation products (such as rRNA fragments) or the percentage of adapter dimers, are hard to assess using absolute thresholds. Here we present mirnaQC, a webserver that relies on 34 quality parameters to assist in miRNA-seq quality control. To improve their interpretability, quality attributes are ranked using a reference distribution obtained from over 36 000 publicly available miRNA-seq datasets. Accepted input formats include FASTQ and SRA accessions. The results page contains several sections that deal with putative technical artefacts related to library preparation, sequencing, contamination or yield. Different visualisations, including PCA and heatmaps, are available to help users identify underlying issues. Finally, we show the usefulness of this approach by analysing two publicly available datasets and discussing the different quality issues that can be detected using mirnaQC.
INTRODUCTION
Different aspects of miRNA-seq, such as RNA extraction, storage conditions and sample processing, together with the chosen library preparation protocol, have a great impact on the obtained sequencing results (1,2). In any bioinformatics analysis of high-throughput sequencing data, quality control (QC) is the key step for revealing technical artefacts. Neglecting this step can lead both to false discoveries and to failure to identify the existing biological signal.
The processing of miRNA-seq data is no exception, and QC approaches should focus on measurable sample features that can be linked to quality aspects. Moreover, whenever possible, these quality parameters should hint at or point out specific technical artefacts. This approach offers the user the chance to take appropriate actions, such as excluding low-quality samples from the analysis or, when possible, applying statistical models to correct for such technical variation, as in the case of batch effects (3).
Several sample attributes are generally calculated in sequencing experiments, including the total number of sequenced reads, the number of adapter-trimmed and filtered reads, the percentage of mapped/unmapped reads and the Phred score measuring the quality of the sequencing. Besides these general statistics, many pipelines such as sRNAbench (4), mirTrace (5) or miRge (6) implement measurements that are specifically useful for miRNA-seq analysis, like the number of unique reads, the percentage of miRNA-mapping reads, the read length distribution and the relative abundance of fragments from other RNA types (mostly tRNA and rRNA). Many of these parameters are clearly relevant for quality. Some indicate good quality samples when they hold high values (number of miRNA reads, total number of reads), while others (percentage of rRNA, adapter dimers) do so when they are low. Some of these features can be directly linked to a particular artefact, like a high percentage of adapter dimers, which is normally caused by issues with adapters and/or input RNA concentrations (7). Other measurements, like smeared-out read length distributions, can also be attributed to specific problems, in this case RNA degradation. However, most of them can be affected by several different artefacts, and thus it is frequently not possible to directly reveal specific technical issues when considering each quality feature individually. For instance, the microRNA yield can be influenced by any artefact that impacts the total read yield, including contamination.
Regardless of the values good quality measurements should take, the context-free interpretation of sample features is generally not straightforward. For example, as an obvious source of unwanted fragments, rRNA presence should be minimized, but it is difficult and arbitrary to establish a specific threshold (5%, 10%, 20%) for discarding samples. Therefore, rather than working with predefined or user-provided values that are hard to justify, a more agnostic approach relies on relative values calculated from a background of comparable experiments (i.e. similar samples), which in turn simplifies the interpretation of the QC outcome.
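To make the idea concrete, the following is a minimal sketch of such rank-based scoring, assuming the reference corpus is available as an in-memory array; the function and variable names are hypothetical and do not reflect mirnaQC's actual implementation.

```python
# A minimal sketch of rank-based quality scoring against a reference corpus.
import numpy as np
from scipy.stats import percentileofscore

def rank_feature(sample_value: float, reference: np.ndarray,
                 higher_is_better: bool = True) -> float:
    """Percentile of a sample's quality feature within a reference
    distribution (0 = worst, 100 = best)."""
    pct = percentileofscore(reference, sample_value, kind="mean")
    return pct if higher_is_better else 100.0 - pct

# Example: % miRNA-mapping reads for one sample, ranked against a
# simulated stand-in for the 36 000 public datasets.
reference_pct_mirna = np.random.beta(2, 5, size=36_000) * 100
print(rank_feature(12.3, reference_pct_mirna))          # yield: higher is better
print(rank_feature(18.0, reference_pct_mirna, False))   # e.g. % rRNA: lower is better
```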
A vast amount of publicly available data exists that can be exploited for purposes beyond its original goal (8). To generate a reference corpus of experiments that can be used to rank quality features, we downloaded over 36 000 raw sequencing datasets from the Sequence Read Archive (SRA), covering most model species. Samples were first processed using sRNAbench, and then 34 quality features were extracted from each sample and subsequently organised into the reference set. Furthermore, sample metadata is used to tailor comparisons to more relevant sets of experiments (i.e. samples from the same species and/or processed with the same library preparation protocol).
In contrast with previously available software (5,6), mirnaQC calculates absolute and relative values for several quality-related features for a set of miRNA-seq samples. Input data can be uploaded as FASTQ files or provided as SRA run accessions, which will subsequently be ranked making use of the reference corpus mentioned above. An apparent advantage of this approach is that fixed thresholds are no longer needed and decisions can be made based on background statistics. Users can explore mirnaQC results by means of interactive plots and tables that hold both absolute and relative values of the 34 quality attributes. The output report is structured into several categories that try to relate the quality attributes to the different possible technical artefacts. This approach can help to identify low quality samples or reveal issues in the sample processing, which is extremely important for protocol optimisation.
mirnaQC SAMPLE FEATURES AND QUALITY MEASURES
The success of a small RNA sequencing run depends on many different factors, including RNA quality, quantity and purity, an optimized library processing protocol and the sequencing itself. However, it is not always easy or even possible to directly relate features extracted from sequencing data to particular technical artefacts. mirnaQC calculates and ranks several quality parameters conceived to hint at problems in the different aspects involved in the preparation of miRNA sequencing libraries. Below we describe the different sections and, wherever possible, the putative artefacts or quality issues that can be derived from them.
Sequencing yield
This section focuses on the number of reads and the fraction that can be assigned to known miRNAs. Generally, parameters in this category (percentage of valid reads, detected microRNAs) indicate high quality when they hold high values. Low numbers (especially for the percentage of valid input reads) can be related to problems in RNA processing or low input material. Some sources, however, like exosomes extracted from bodily fluids, are known to hold low levels of miRNA, so high numbers should not be expected for all sample types, even for high quality libraries.
Library quality
In this category we list the number of reads that are filtered out for falling below the minimum length (15 nt), the percentage of ribosomal RNA and the percentage of short reads (15-17 nt). The presence of the latter may be attributed to degradation products from longer RNA molecules, as no small RNAs are known in this length range.
High percentages of adapter-dimers (0, 1 or 2 nt fragments after trimming) normally indicate issues with the ratio of adapter to input RNA concentration. In practice, it is very difficult to completely avoid adapter-dimers, especially in low-input samples such as blood. Nevertheless, the percentile may still be useful, as it might show potential for improvement.
Ultra-short reads are defined as fragments with lengths between 3 nt and 14 nt (both inclusive).
Library complexity
In general it is also interesting to assess the complexity of the sample, since low-complexity libraries provide very little information, even for otherwise high-quality datasets. Several measurements are provided to grasp the complexity at two levels that should be interpreted together (a sketch of these computations follows this list):

• Sequencing library complexity: calculated as the ratio of the total number of reads to unique reads. Lower values suggest higher RNA diversity, but can also be caused by degradation.
• miRNA complexity: frequently, most microRNA reads correspond to few miRNA genes, preventing lowly expressed miRNAs from being detected. Several measures are given to estimate complexity at this level: (i) the percentage of miRNA expression assigned to the first, the first 5 and the first 20 most expressed miRNAs; (ii) the number of miRNAs required to reach 50%, 75% and 95% of the total miRNA expression.
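As a rough illustration of these two complexity levels, the sketch below computes them from a vector of per-miRNA read counts; the exact formulas used by mirnaQC may differ in detail, and all names here are illustrative.

```python
# Illustrative computation of library- and miRNA-level complexity measures.
import numpy as np

def library_complexity(total_reads: int, unique_reads: int) -> float:
    """Ratio of total to unique reads; values near 1 indicate high diversity."""
    return total_reads / unique_reads

def mirna_complexity(counts: np.ndarray) -> dict:
    """counts: read counts per detected miRNA."""
    c = np.sort(counts)[::-1].astype(float)
    frac = np.cumsum(c) / c.sum()          # cumulative expression fraction
    return {
        "pct_top1":  100 * frac[0],
        "pct_top5":  100 * frac[min(4, len(c) - 1)],
        "pct_top20": 100 * frac[min(19, len(c) - 1)],
        # number of miRNAs needed to reach 50/75/95% of total expression
        "n50": int(np.searchsorted(frac, 0.50) + 1),
        "n75": int(np.searchsorted(frac, 0.75) + 1),
        "n95": int(np.searchsorted(frac, 0.95) + 1),
    }

print(mirna_complexity(np.array([5000, 1200, 300, 80, 40, 10, 5])))
```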
Putative contamination
The percentage of reads that could not be mapped to the species' genome is calculated. Contamination is subsequently estimated by mapping against a collection of bacterial and viral genomes.
Read length distribution
A narrow peak around 22 nucleotides in the read length distribution indicates good quality samples, whereas degraded or poor RNA quality manifests in a broader distribution. Furthermore, it is clear that the 22 nt peak should be present for miRNA-assigned reads, and RNA quality issues might exist if samples deviate from this.
We summarise the miRNA read length distribution in several ways: mean length, mode of the distribution, the fraction of reads with lengths 21, 22 or 23 nt, the standard deviation and the skewness of the distribution.
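A possible computation of these summaries, run here on a simulated array holding one length per miRNA-assigned read, could look as follows; this is an illustrative sketch, not the server's code.

```python
# Summaries of a miRNA read length distribution.
import numpy as np
from scipy.stats import skew

def length_stats(lengths: np.ndarray) -> dict:
    values, counts = np.unique(lengths, return_counts=True)
    return {
        "mean": lengths.mean(),
        "mode": int(values[counts.argmax()]),
        "frac_21_23": np.isin(lengths, (21, 22, 23)).mean(),  # canonical lengths
        "sd": lengths.std(ddof=1),
        "skewness": skew(lengths),
    }

# Simulated reads: a peak at 22 nt with some spread.
reads = np.random.choice(np.arange(18, 28), size=10_000,
                         p=[.01, .02, .05, .20, .45, .15, .05, .03, .02, .02])
print(length_stats(reads))
```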
RNA composition
The relative abundance of other RNA molecules is automatically profiled using the sRNAtoolbox database (9). Most of these longer RNA species (rRNA, mRNA, lincRNA) are not known to be processed into smaller molecules that can be picked up by miRNA-seq. Their presence is a symptom of RNA degradation, since smaller fragments are randomly generated and then sequenced. Among these, rRNA is typically used as an indicator because it is the most abundant.
Sequencing quality
Sequencing quality is calculated by means of FastQC (10). We determine the mean values of the different percentiles provided by the program over all positions of the read.
GENERATION OF THE miRNA-seq REFERENCE CORPUS (BACKGROUND KNOWLEDGE)
The vital part of the presented quality control tool mirnaQC is the comparison corpus of miRNA-seq data that is used to rank users' samples. Using the OmicIDXR API, we obtained a list of >3000 SRA studies that were annotated as 'miRNA-Seq' or 'ncRNA-Seq' (several 'RNA-Seq' studies were also included after checking that they were in fact 'miRNA-Seq' datasets).
For each study we performed the following steps (a simplified sketch of this loop is given below):

• Read the metadata for the study, generating one entry per experiment (SRX level)
• Download all SRR files corresponding to that SRX by means of fastq-dump (fastq.gz)
• Detect the library preparation protocol
• Analyse the small RNA sequencing data with sRNAbench using all available annotations from sRNAtoolboxDB
• Upload the sRNAbench results to a MySQL database

In total we analysed 36 338 samples from 30 different species. We distinguish 8 different protocols: Illumina, Illumina 2 (3′ adapter sequence), New England Biolabs (NEBNext), Qiagen UMI, NextFlex, adapter trimmed, SOLiD and all others (custom). Over 500 billion sequencing reads were analysed.
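A simplified orchestration of this per-study loop might look like the sketch below. fastq-dump (with its --gzip and --outdir options) is part of the real SRA Toolkit, but the analysis and database steps are placeholders standing in for sRNAbench and the MySQL upload; none of this is the pipeline's actual code.

```python
# Simplified per-study download-and-analyse loop (placeholders marked).
import subprocess

def download_run(srr: str, outdir: str = "fastq") -> None:
    # fastq-dump is the SRA Toolkit downloader; --gzip/--outdir are real options
    subprocess.run(["fastq-dump", "--gzip", "--outdir", outdir, srr], check=True)

def analyse(fastq_path: str) -> dict:
    # Placeholder: the actual pipeline runs sRNAbench with the sRNAtoolboxDB
    # annotations and extracts the 34 quality features from its output folders.
    raise NotImplementedError

for srr in ["SRR000001"]:            # in practice, every SRR of every SRX
    download_run(srr)
    # features = analyse(f"fastq/{srr}.fastq.gz")
    # ...upload features to the MySQL reference database (placeholder step)
```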
mirnaQC WORKFLOW AND IMPLEMENTATION
An overview of the mirnaQC workflow is displayed in Figure 1. The only required input is sequencing data in FASTQ format (or SRR accessions), although sample species and library protocol information is recommended if known. If the protocol or species is not provided by the user, an automatic detection algorithm, trained with a set of manually curated samples from liqDB (11), will determine the right input parameters. Condition or group information can also be provided (optional). All files belonging to a given group should be compressed into a single .zip, .tar.gz or .7z file and each group uploaded separately. The file names will be used as group labels, and this information will appear in some of the plots.
Input data are subsequently processed by sRNAbench in two steps. First, reads are simultaneously mapped to the species genome and a collection of viral and bacterial genomes from sRNAtoolboxDB (9), allowing one mismatch; preference is given to the reference genome in case of multiple mapping reads. In the second step, reads are mapped to microRNA reference libraries (12,13), RNAcentral (14) and Ensembl annotations (15) for ncRNA and mRNA. Note that although samples are mapped to both miRNA reference libraries, miRBase and MirGeneDB, currently the miRNA-related figures are extracted from the miRBase mappings.
From both sRNAbench output folders we extract a total of 34 quality attributes that are then compared to 5 different reference sets: (i) samples from the same kingdom (animals and plants), (ii) samples from the same species, (iii) samples from the same kingdom and protocol, (iv) samples from the same species and protocol and (v) low-input samples (defined as those obtained from bodily fluids). Each comparison can be browsed separately on the output page.
The processing pipeline is a Java program that includes sRNAbench, Bowtie (16) and a MySQL client to query the reference corpus. The web interface was developed using Python and Django and runs on Apache. The results report includes the six sections described under mirnaQC sample features and quality measures, with tables and styles from MultiQC (17) and Plotly visualisations. Both absolute values and percentiles are displayed and highlighted using a quartile colour code (see Figure 2C).
WORKING EXAMPLE
To show the usefulness of this tool, we analysed two publicly available studies. Basic statistics from the first dataset, one of the earliest large studies designed to detect cervical cancer (18), can be seen in Figure 2A. To help users identify potential issues, the quality parameters and their percentiles are displayed using a quartile-based colour code (from better to worse values: green, yellow, orange and red). Using this guide, several problems can be identified: with few exceptions, most parameters rank in the third (orange, Q3) and fourth (red, Q4) quartiles. More specifically, the miRNA 'peak' values show that a rather low percentage of microRNA reads have lengths between 21 and 23 nt in the majority of samples. This means that although those reads can be assigned to miRNA reference sequences, they do not correspond to the canonical miRNA lengths. This hints at an RNA-processing issue that might still be tolerated if all samples are similarly affected, which can indicate either systematic artefacts or biological reasons.
It may also happen that not all samples are equally affected by a quality issue, which can be more problematic if two or more conditions are to be compared. mirnaQC allows users to assign samples to conditions in order to explore this possibility. Figure 2B shows a PCA plot of the expression values of the 50 most expressed miRNAs.
Users can decide which quality attribute should be used to colour the markers; in this case we used '% top miRNA' (the percentage of reads assigned to the most abundant microRNA). This graph shows that the two outlier samples are much less complex than the rest. Furthermore, because conditions are marked with different symbols (control, circles; carcinoma, squares), we know that these two samples belong to the same group. Keeping such samples in the analysis is not recommended since they will certainly bias the results. Figure 2E displays the distribution of this feature for both groups by means of boxplots. Here we can see that these two samples are outliers but that otherwise both conditions show reasonably similar distributions for this parameter.
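The kind of plot shown in Figure 2B can be approximated with a few lines of scikit-learn and matplotlib, as sketched below on simulated data; mirnaQC itself renders this interactively with Plotly, and all names here are illustrative.

```python
# PCA of top-50 miRNA expression, markers coloured by a quality attribute.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expr = rng.lognormal(mean=2.0, sigma=1.0, size=(40, 50))  # samples x top-50 miRNAs
pct_top = expr.max(axis=1) / expr.sum(axis=1) * 100       # '% top miRNA' per sample
group = np.array(["control"] * 20 + ["carcinoma"] * 20)

pcs = PCA(n_components=2).fit_transform(np.log1p(expr))
vmin, vmax = pct_top.min(), pct_top.max()
for g, marker in (("control", "o"), ("carcinoma", "s")):
    m = group == g
    plt.scatter(pcs[m, 0], pcs[m, 1], c=pct_top[m], marker=marker,
                cmap="viridis", vmin=vmin, vmax=vmax, label=g)
plt.colorbar(label="% top miRNA")
plt.legend(); plt.xlabel("PC1"); plt.ylabel("PC2")
plt.show()
```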
Users can also explore potential sources of contamination from reads mapped to viral and bacterial genomes. Figure 2D shows that all samples suffer from rather high percentages of contaminating reads: all samples have more bacterial/viral reads than 75% of all animal samples in the reference set. This range of values indicates serious contamination, with the possible exception of cervical cancer samples, where it might be caused by sample extraction or even play a role in the disease (19).
Finally, Figure 2E shows the library complexity of different tissues from Takifugu rubripes (20). While the top expressed microRNA in intestine accounts for 47.1% of all reads (percentile 88.4), in ovary this figure drops to 8.3%, which corresponds to percentile 2.6. In ovary, eight microRNAs sum to over 50% of all miRNA expression, while in intestine it takes only two to reach the same percentage, which indicates a higher complexity of miRNA expression in germ cells. Furthermore, ovary and testis exhibit much lower percentages of miRNAs. This might be related to the larger repertoire of small RNAs in germ cells (21), which would automatically lead to a lower relative fraction of microRNAs in those samples.
CONCLUSION
We present a user-friendly web server for the comparative quality control of miRNA-seq data that can be useful in several scenarios: to identify low-quality samples that should be excluded from downstream analysis; to reveal systematic errors in order to improve the library preparation process, something especially relevant for pilot studies; and finally, to provide external quality validation for datasets so that it can be used as a standard proof of quality. mirnaQC provides several output tables and visualisations for a total of 34 quality attributes, which allow users to rank their results against a large corpus of comparable samples. In this way, no absolute thresholds need to be applied and users can evaluate their sequencing data based on percentiles. Future developments include new types of analysis and improved visualisations intended to detect confounding variables related to quality issues that can affect downstream steps. Additionally, a dockerized version of the tool will be made available so that the pipeline can be run locally or on computing clusters. | 2020-05-28T09:08:56.982Z | 2020-06-02T00:00:00.000 | {
"year": 2020,
"sha1": "be7eb1d25352851b53881ca9d2b1f4acc9daf3af",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/48/W1/W262/33433309/gkaa452.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "743bba90d38fa2f056c795cfef5b089b53d9c048",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
]
} |
54904894 | pes2o/s2orc | v3-fos-license | Single-Crystal MgO Hollow Nanospheres Formed in RF Impulse Discharge Plasmas
Spherical MgO nanoparticles with a hollow inside, that is, MgO hollow nanospheres, were created in Ar/O2 plasma produced by radio frequency (RF) impulse discharge using a Mg rod electrode. The hollow nanospheres were found on the SiO2 plates placed near the powered Mg electrode. The electron diffraction pattern showed that each nanosphere was made of a single crystal of MgO. Since the shape was spherical, these nanoparticles seem to have been created while levitating in the plasma, without touching any walls. The formation mechanism, with a quasiliquid cooling model, is also discussed.
Introduction
Magnesium oxide (MgO) has been utilized as a transparent film with a favourable secondary electron emission coefficient for flat plasma display panels [1]. MgO is also used as a buffer layer for the deposition of high-Tc superconducting films and perovskite-type ferroelectric films. MgO has been widely used as a refractory material in steel manufacture because of its high corrosion resistance and high melting point. MgO is also used as an optical transmitter and as a substrate for thin film growth. Various methods for the production of MgO films have been reported [2-5], where various morphologies of MgO films, such as flat thin films, whiskers, fishbone fractal nanostructures, and nanofibers, were observed. Photoluminescence emission spectra in the UV range were found to depend on the size of MgO nanoparticles [6].
Here, a special shape of MgO nanoparticle with a hollow space inside, that is, a hollow nanosphere or a spherical nanoshell, is reported. Moreover, each particle is made of a single crystal of MgO in spite of its spherical shape. This kind of structure can be used to form an optical scattering surface when the particles are coated homogeneously on a surface. The mass-to-volume ratio can be reduced compared to a conventional packed sphere. An improvement of electronic properties, such as field electron emission efficiency, might also be expected. Another possible application is the packing of foreign materials in the hollow interior.
Concerning the formation of hollow microspheres, several works have been reported. Ferrite hollow spheres were prepared by coating ferrite nanoparticles on the surface of polystyrene spheres and subsequently removing the core polymer [7]. SiO2 hollow spheres were also prepared by heat treatment of a mixture composed of SiCl4 and carbon microspheres [8]. Double-shell hollow spheres were prepared by encapsulating polymeric hollow spheres with a TiO2 shell [9]. Using a pulsed laser deposition method, hollow ZnO spheres were prepared after annealing the deposits [10]. All of these experiments employed an additional chemical or heat treatment. Recently, one-step synthesis of MgO hollow nanospheres was demonstrated by a pulsed-laser ablation method, where the Mg target was melted, followed by surface oxidization [11]. In the present work, by contrast, only a simple sputtering method in Ar/O2 plasma was employed, without additional chemical or thermal processes; the formation technique is thus quite different from the above methods.
Figure 1: RF impulse discharge system with plate and rod electrodes (glass substrate, outer electrode, inner electrode, gas outlet). The gas was fed between glass plates with a narrow gap where the electric impulse discharge takes place. The inset shows the waveform of the applied voltage (pulse width t = 10 μs, repetition period T = 10-1000 μs).
Experimental Setup
The experimental configuration is shown in Figure 1 [12]. Using this system, deposition materials on the flat glass plates made of SiO2 can be easily analysed. Two glass plates were placed with a narrow spacing gap, as shown in Figure 1; the spacing between the glass plates was fixed at 2 mm. To ignite a discharge, a powered rod electrode was introduced between the two glass plates. Two grounded stainless steel plate electrodes sandwiching the glass plates were also placed. The inner rod electrode was made of a Mg rod with a diameter of 1.7 mm. The outer stainless steel electrode was not fixed, so that the distance between the inner and outer electrodes could be varied in the axial direction; in this experiment, however, this distance was fixed at 5 mm. The entire electrode system shown in Figure 1 was set inside a cylindrical vacuum chamber of 50 cm in diameter and 20 cm in height. The working gases, Ar and O2, were introduced into a mixing vessel through independent mass flow controllers, and the mixed gas was finally fed to the discharge region through a gas inlet tube connected to the inner electrode, as shown in Figure 1. The outgoing gas from the discharge region was directly drained into the chamber and evacuated by a rotary pump. Since the length of the glass plate was short (≈10 mm), the pressure in the discharge region was nearly the same as that in the vacuum chamber. The pressure of the chamber was fixed at 0.1 Torr with a total gas flow rate of 20 sccm.
The RF impulse voltage was directly supplied to the inner electrode through a coaxial cable, without a matching circuit or a blocking condenser, while the outer electrode was grounded. As shown in the inset of Figure 1, the RF impulse power supply provided one cycle of a sinusoidal waveform of 10 μs in width with repetition frequency ωR. The pulse amplitude and the repetition frequency can be changed; in this experiment, the pulse repetition frequency was fixed at 4.5 kHz. The applied voltage can be increased up to 20 kV.
The surface morphology was analysed by scanning electron microscopy (SEM) with a resolution of 5.0 nm and a maximum magnification of 300,000 (JCM-5700, JEOL). Transmission electron microscopy (TEM) analysis was also employed to analyse the crystal structure.
Experimental Results
Figure 2 shows SEM images of the deposits on the glass plate surface. Many small spherical particles were formed, as shown in Figure 2(a); the typical size of the particles was 200-400 nm. Spherical nanoparticles with diameters less than 100 nm could also be observed. Figure 2(b) shows a SEM image taken in the direction perpendicular to the glass plate surface. The cross-sectional view of these particles indicates that they seem to be simply resting on the glass plate surface. It was therefore supposed that these particles were formed while levitating in the plasma discharge, and then fell onto and attached to the surface of the glass plate. It was also found that these nanospheres had a quite symmetrical ball structure, so there was no indication that they grew on the surface of the substrate. Therefore, these particles should have grown in the plasma without touching any surfaces. The dependence of the deposits on the experimental parameters was described in [13], together with [12].
In order to evaluate the structure of these spherical particles in more detail, a TEM image is shown in Figure 3, where several spherical nanoparticles can be observed. Note that most of the particles are hollow spheres, although their sizes differ; that is, most of these nanoparticles are spherical and include a spherical hollow inside, but some of them have nonuniform shell thickness. In order to check the crystallinity of these hollow spheres in more detail, a few particles were chosen, as shown in Figure 4(a). The electron diffraction pattern for these particles is shown in Figure 4(b), where sharp diffraction spots can be clearly observed. In general, the relation D = λL/r(hkl) holds for electron diffraction, where D is the radius of the diffraction ring, L is the distance between the sample and the screen, λ is the de Broglie wavelength of the electron beam, and r(hkl) is the interplanar spacing of the (hkl) lattice planes. It was obtained that r(111) = 0.244 nm, r(200) = 0.211 nm, r(220) = 0.149 nm, r(311) = 0.127 nm, r(222) = 0.120 nm, and r(400) = 0.105 nm. The positions of these spots are found to coincide exactly with the radii of the rings expected for MgO with lattice constant a = 0.4203 nm. Therefore, it is concluded that each hollow spherical particle is made of a single crystal of MgO.
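As a quick consistency check of this indexing, the interplanar spacings of cubic MgO follow from d(hkl) = a/√(h² + k² + l²); the short sketch below reproduces the measured values within rounding.

```python
# Interplanar spacings of cubic MgO, d(hkl) = a / sqrt(h^2 + k^2 + l^2).
from math import sqrt

a = 0.4203  # nm, MgO lattice constant
for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1), (2, 2, 2), (4, 0, 0)]:
    h, k, l = hkl
    d = a / sqrt(h * h + k * k + l * l)
    print(f"d{hkl} = {d:.3f} nm")
# Output: 0.243, 0.210, 0.149, 0.127, 0.121, 0.105 nm, in agreement with
# the measured ring radii quoted in the text.
```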
Figure 5 shows the relation between the outer diameter D and the inner hollow diameter d of the MgO nanospheres. The hollow diameter d increases almost in proportion to the outer diameter D. From the straight line drawn in the figure, the ratio d/D was found to be about 0.6 and almost independent of the particle size; that is, larger particles had a larger hollow inside. However, most of the particles smaller than 50 nm seemed to have no detectable hollow inside. These phenomena would be closely related to the formation mechanism of the hollow nanospheres, as discussed in the next section.
The SEM image of a region of thick deposition is shown in Figure 6. It can be confirmed that the deposits essentially consist of many small nanoparticles. Moreover, a few particles in Figure 6 are found to have an open hole on the surface. This indicates that a thin part of the shell layer surrounding the particle has been broken by chance, exposing the inner hollow space through the open hole. This structure also confirms that these particles are basically hollow particles.
Discussion
Here, the formation mechanism of these hollow MgO nanospheres is discussed. A diffusion model of oxygen into a melted Mg nanosphere was proposed for the formation of MgO hollow nanospheres in pulsed-laser deposition [11]. During the cooling, oxygen in the gas phase penetrated into the Mg nanospheres to form a MgO layer on the surface; in turn, Mg atoms in the core region diffused toward the surface region, leaving a void in the core. This model cannot be directly applied to the present study, because melted metallic Mg nanoparticles were not created initially. Instead, the nanoparticles contain both Mg and O atoms from the beginning, because the reactions between Mg and O would occur immediately after Mg is sputtered from the Mg rod electrode in the discharge plasma.
In this study, the following model was considered for the formation of hollow MgO nanospheres. Owing to the sheath potential in front of the Mg rod electrode during the discharge, Mg atoms are sputtered by energetic Ar ions. These Mg atoms react with O atoms in the plasma to form MgO nuclei. Such MgO nanoclusters would coagulate and grow in the plasma, producing nanoparticles containing Mg and O atoms. Note that in the plasma, a strong local electric field arises in front of the levitated nanoparticles due to sheath formation. Therefore, Ar ions are accelerated towards the surface of a nanoparticle and transfer kinetic energy to it; the nanoparticles will then be heated up, as discussed below, and will reach a quasiliquid state. It is also noted that these nanoparticles are charged negatively by the plasma electrons; therefore, they can be confined electrically within the positive plasma potential for a long period. Since the nanoparticles are in a quasiliquid state, their shape will be spherical, as schematically illustrated in Figure 7(a).
Since the impulse discharge was employed, these nanospheres would have a chance to escape from the discharge region during the discharge off-interval. In this case, the particles will in turn be cooled down owing to collisions with neutral Ar atoms before arriving at the surface of the glass plate. The solidification of nanoparticles will start from the particle surface, as shown in Figure 7(b). During this process, Mg atoms will be drawn toward the surface for MgO crystal formation, because the number of Mg atoms would be deficient near the surface compared with that of oxygen supplied from the gas phase. On the other hand, excess oxygen inside the nanoparticles will be expelled toward the melted core region, because the outer surface has solidified and crystallized, as shown in Figure 7(c). This process will lead to a hollow interior filled with oxygen, as shown in Figure 7(d). Since the cooling process is rather slow compared with the heating process, a single-crystal structure of MgO would be formed. Here, it is also noted that in the quasiliquid state, Mg and O will be ionised to form Mg2+ and O2-, respectively. The electric force among these ions inside the nanoparticles may also play a key role in driving the collection of Mg2+ toward the surface (see Figure 7(b)) and the expulsion of excess O2- toward the core (see Figure 7(c)) during the crystal formation mentioned above. The energy W_i (J/s) transferred to an MgO nanoparticle of radius R per second can be estimated from the ion bombardment energy E_i = eV_s and the ion flux J_i = n_i v_i, that is, W_i = S_p E_i J_i, where S_p = 4πR^2 is the particle surface area, n_i is the ion density, v_i = √(2eV_s/m_i) is the ion velocity, and V_s is the ion acceleration voltage in the sheath in front of the nanoparticle. On the other hand, the heat capacity H (J/°C) for increasing the particle temperature is expressed by H = kM, where k (J/kg·°C) is the specific heat, M = (4/3)πρ_0 R^3 is the mass, and ρ_0 is the specific gravity of MgO. Of course, other heating mechanisms, such as radiation heating and electron heating, as well as other cooling mechanisms, such as collisions with the neutral Ar/O2 gas, should also be taken into account; here, however, the effect of ion bombardment is discussed as the dominant mechanism. In the case of a particle of R = 3 nm, for example, it is obtained that ΔT/τ = 5.7×10^2 for the time τ required to increase the particle temperature by ΔT. This shows that exposure of particles to the plasma for τ = 3.6 s will lead to an increase of particle temperature by about ΔT = 1000 °C. Here, the impulse discharge with repetition frequency ω_R = 4.5 kHz, V_s = 10 V, and n_i = 5×10^9 cm^-3 was taken into account. The temperature rises more quickly for smaller particles (ΔT/τ ∝ 1/R). If the other heating mechanisms mentioned above were considered, the temperature rise time might be much shorter. In a typical case, the discharge was continued for 10 minutes. Therefore, nanoparticles growing in the plasma would be heated up by the ion bombardment to reach a quasiliquid state.
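The order of magnitude of this heating estimate can be checked numerically, as in the sketch below. The MgO density and specific heat used here are textbook values assumed for illustration (they are not given in the text), and the duty cycle of the impulse discharge is folded in.

```python
# Order-of-magnitude check of the ion-bombardment heating rate.
from math import pi, sqrt

e    = 1.602e-19           # C, elementary charge
m_i  = 39.95 * 1.661e-27   # kg, Ar+ ion mass
V_s  = 10.0                # V, sheath acceleration voltage (from the text)
n_i  = 5e9 * 1e6           # m^-3, ion density (5e9 cm^-3, from the text)
R    = 3e-9                # m, particle radius (from the text)
rho  = 3580.0              # kg/m^3, MgO density (assumed)
c_p  = 960.0               # J/(kg K), MgO specific heat (assumed)
duty = 4.5e3 * 10e-6       # 10 us pulses at 4.5 kHz repetition

v_i = sqrt(2 * e * V_s / m_i)                    # ion speed from the sheath drop
W_i = 4 * pi * R**2 * (e * V_s) * (n_i * v_i)    # J/s delivered to the sphere
H   = c_p * rho * (4 / 3) * pi * R**3            # J/K heat capacity
print(f"dT/dt ~ {duty * W_i / H:.1e} K/s")       # ~7e2 K/s, the same order as
                                                 # the 5.7e2 quoted in the text
```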
The volume of such a quasiliquid spherical droplet of MgO, shown in Figure 7(a), expands as a result of heating. During the cooling, solidification proceeds from the surface to the core region. Therefore, the outer diameter D of the particles will be maintained, and only the material in the core region will be rearranged to form a shell structure of MgO crystal, as shown in Figure 7(d). If the number of Mg atoms is preserved during the cooling, the relation nV = n_0 V_0 will hold, where n is the Mg density in the droplet, V = (4/3)π(D/2)^3 is the droplet volume, n_0 is the Mg density in MgO crystal, and V_0 = (4/3)π[(D/2)^3 − (d/2)^3] is the shell volume. Then the relation n/n_0 = V_0/V = 1 − (d/D)^3 ≈ 0.78 can be obtained, where d/D ≈ 0.6 is used from Figure 5. Therefore, the number ratio of atoms in the quasiliquid MgO droplet can be estimated as Mg : O ≈ 1 : 1.28, i.e. containing excess oxygen. The density of Mg in the quasiliquid MgO droplet would be almost constant, independent of the particle size, as shown in Figure 5. This is consistent with the constancy of the ratio d/D.
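The mass balance above reduces to two lines of arithmetic, reproduced here for clarity.

```python
# Shell mass balance: conserving Mg atoms gives n/n0 = 1 - (d/D)^3.
d_over_D = 0.6                     # hollow-to-outer diameter ratio (Figure 5)
mg_fraction = 1 - d_over_D**3      # droplet Mg density relative to the crystal
print(f"n/n0 = {mg_fraction:.2f}")           # 0.78
print(f"Mg : O = 1 : {1 / mg_fraction:.2f}") # 1 : 1.28, excess oxygen
```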
The following trapping mechanism was considered for the nanoparticles levitated in the plasma. The forces acting on charged nanoparticles levitated in plasmas are the electrostatic force, the ion drag force, thermophoresis, gravity, and neutral drag by the gas flow. Here, thermophoresis was negligible because there was no external heating. Since the nanoparticles were deposited on both the upper and lower glass plates placed parallel to the horizontal direction, gravity was unimportant. Neutral drag by gas injection was also negligible because it acted mainly in the horizontal direction. Therefore, the ion drag force would play a key role in the particle transport toward the glass plates.
Conclusion
In conclusion, an RF impulse plasma was produced within the space between narrow glass plates using a Mg rod electrode system. It was found that hollow nanospheres were deposited on the glass plate surface. The particles were predominantly deposited on the glass substrate close to the inner Mg rod electrode. From the TEM images and electron diffraction patterns, these hollow nanospheres were found to be composed of single-crystal MgO. The average size of the particles was several hundred nm. Furthermore, the formation processes of such spherical particles were discussed. A melting process of the particles by ion bombardment was considered during the growth of MgO nuclei and clusters. The cooling process, starting from the outer surface of the particles, will confine excess oxygen in the core, resulting in the formation of a hollow sphere made of single-crystal MgO. It is demonstrated that the RF impulse discharge system employed here is very useful for the formation of MgO hollow nanospheres.
Figure 2: SEM images of the nanoparticles deposited when the Ar/O2 mixing ratio is 3/1 and the total pressure is 0.1 Torr. (a) Top view, (b) side view. The particles are spherical in shape.
Figure 3: TEM image of several spherical nanoparticles (see text).
Figure 4: (a) TEM image of a few nanoparticles and (b) electron diffraction pattern for these particles. The ring radii correspond to diffraction positions for the MgO crystal. This spot pattern shows that the spherical hollow nanoparticles consist of single-crystal MgO.
Figure 5: Relation between the outer diameter D and the inner hollow diameter d of the MgO nanospheres (see text).
Figure 6: SEM image of a region of thick deposition (see text).
Figure 7: Schematic of the proposed formation mechanism of the hollow MgO nanospheres (see text).
"year": 2012,
"sha1": "36fe2969299a7f986625599f89e9b9278a3ba8b3",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jnm/2012/691874.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "36fe2969299a7f986625599f89e9b9278a3ba8b3",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
468405 | pes2o/s2orc | v3-fos-license | Reliable identification of mycobacterial species by PCR-restriction enzyme analysis (PRA)-hsp65 in a reference laboratory and elaboration of a sequence-based extended algorithm of PRA-hsp65 patterns
Background Identification of nontuberculous mycobacteria (NTM) based on phenotypic tests is time-consuming, labor-intensive, expensive and often provides erroneous or inconclusive results. In the molecular method referred to as PRA-hsp65, a fragment of the hsp65 gene is amplified by PCR and then analyzed by restriction digest; this rapid approach offers the promise of accurate, cost-effective species identification. The aim of this study was to determine whether species identification of NTM using PRA-hsp65 is sufficiently reliable to serve as the routine methodology in a reference laboratory. Results A total of 434 NTM isolates were obtained from 5019 cultures submitted to the Institute Adolpho Lutz, Sao Paulo, Brazil, between January 2000 and January 2001. Species identification was performed for all isolates using conventional phenotypic methods and PRA-hsp65. For isolates for which these methods gave discordant results, definitive species identification was obtained by sequencing a 441 bp fragment of hsp65. Phenotypic evaluation and PRA-hsp65 were concordant for 321 (74%) isolates. These assignments were presumed to be correct. For the remaining 113 discordant isolates, definitive identification was based on sequencing a 441 bp fragment of hsp65. PRA-hsp65 identified 30 isolates with hsp65 alleles representing 13 previously unreported PRA-hsp65 patterns. Overall, species identification by PRA-hsp65 was significantly more accurate than by phenotypic methods (392 (90.3%) vs. 338 (77.9%), respectively; p < .0001, Fisher's test). Among the 333 isolates representing the most common pathogenic species, PRA-hsp65 provided an incorrect result for only 1.2%. Conclusion PRA-hsp65 is a rapid and highly reliable method and deserves consideration by any clinical microbiology laboratory charged with performing species identification of NTM.
Background
The genus Mycobacterium comprises organisms that are heterogeneous in terms of metabolism, growth, environmental niche, epidemiology, pathogenicity, geographic distribution and disease association [1]. While there are notable pathogens such as Mycobacterium tuberculosis, Mycobacterium bovis and Mycobacterium leprae, most are environmental organisms typically acting as opportunistic pathogens. These species, often collectively called nontuberculous mycobacteria (NTM), have been associated with a variety of problems including pulmonary, lymph node, skin, soft tissue, skeletal, and disseminated infections as well as nosocomial outbreaks related to inadequate disinfection/sterilization of medical devices [2]. In recent years, infections due to the subset of rapidly growing NTM, including Mycobacterium fortuitum, Mycobacterium chelonae and Mycobacterium abscessus, have been reported as complications of numerous surgical procedures, particularly involving foreign bodies (e.g., augmentation mammaplasty), high risk sites (e.g., eye) and injections of natural products used as alternative medicines [3][4][5][6][7][8].
In most laboratories, identification of mycobacterial species is based on in vitro growth and metabolic activities. Such phenotypic tests are labor-intensive and time-consuming to perform and may take several days to weeks to complete. Further, for many NTM species, the tests may be poorly reproducible [9], and consequently, the identifications may be ambiguous or erroneous [10].
DNA-based methods offer the promise of rapid and accurate species identification. However, commercially available DNA probes are available only for a handful of mycobacterial species; moreover, reagents are quite costly. Nucleotide sequence analyses can be used to resolve essentially any bacterial species, but requires both amplification and sequencing.
Telenti and coworkers described a DNA-based method for species identification of mycobacteria in which a portion of hsp65, the gene encoding the 65 kDa heat shock protein, was amplified by PCR and then analyzed by restriction digest [11]. This approach, referred to as PRA-hsp65, required only routine PCR and agarose gel electrophoresis equipment and could be completed within a few hours. The different species of mycobacteria yielded distinctly different patterns of restriction fragments and thus the species of an unknown isolate could be determined by comparing the fragments observed with published analyses of clinical isolates [11][12][13][14][15][16][17] and of newly described species [4,[18][19][20][21][22][23][24]. The availability of an on-line internet resource facilitates the process [25].
Some studies have observed limitations to PRA-hsp65 which could, potentially, render the approach impractical for routine use. First, within commonly encountered species of clinical significance, such as Mycobacterium avium and Mycobacterium kansasii, as many as six distinct PRA-hsp65 patterns have been encountered [20,[26][27][28]. Such variability could result in a high frequency of ambiguous or uninterpretable patterns. Second, validated protocols for electrophoresis and internal standards have not been defined [17,29]. Lastly, published tables present patterns which differ within a range of 5-15 bp and lack patterns for recently described species [11,14,16]. The aim of this study was to determine whether PRA-hsp65 of mycobacterial isolates provides sufficiently reliable species identification to enable it to be used as the routine methodology in a reference laboratory.
Species identification by phenotype and PRA-hsp65 considered separately
Among the 434 isolates studied, biochemical and phenotypic evaluation alone assigned 371 (85.5%) isolates a species or complex; PRA-hsp65 assigned 404 (93%) isolates a species. Inconclusive results were obtained for 63 (14.5%) isolates by conventional methods compared with 30 (6.9%) isolates using the rapid DNA-based approach; these included nine isolates that could not be identified by either method.
Species identification by phenotype and PRA-hsp65 compared to sequencing
For 321 (74.0%) of the 434 isolates both methods gave the same species identification, i.e., the results were concordant (Table 1). Based on prior experience by the authors and others [26,30], these identifications were presumed to be correct. The hsp65 genes of the remaining 113 (26.0%) isolates giving discordant or inconclusive results were sequenced. Among these, phenotypic testing had assigned 50 isolates to a species or a complex, but sequencing indicated that 33 (66%) of these assignments were incorrect (Table 2). For 63 isolates the phenotypic results were ambiguous and provided only a broad Runyon classification. Even among these, 19 (30.2%) were misclassified compared to conventional expectations [9,31], including 12 with regard to rate of growth (i.e., slow vs. rapid) and 7 with regard to chromogen production (Table 2). Overall, phenotypic species identification was correct for only 17 (15%) of 113 isolates for which hsp65 sequencing was performed.
Among the 113 isolates with discordant or inconclusive results, PRA-hsp65 assigned 83 isolates to a species; 71 (85.5%) of these assignments were confirmed by hsp65 partial gene sequencing (Table 3). For most of the remaining isolates, the identifications resolved by PRA-hsp65 and sequencing were consistent with close evolutionary relationships (e.g., M. kansasii and Mycobacterium gastri, Mycobacterium intracellulare and M. avium) ( Table 3).
There were 30 isolates representing 13 PRA-hsp65 patterns not in the available databases, and their species were resolved by sequencing. The observed BstEII and HaeIII fragments for these new patterns (designated NP), the source of these isolates and the species identification based on sequencing are listed in Table 4; the observed phenotypes, including antimicrobial susceptibilities, are presented in Table 5. In four instances (NP1, NP11, NP14 and NP17, representing Mycobacterium gordonae, Mycobacterium terrae, Mycobacterium sherrisii and Mycobacterium arupense, respectively), multiple isolates with the pattern were identified.
Overview of results
The overall results of the two methods are summarized in Table 6. Among 434 NTM isolates, PRA-hsp65 provided correct species identification significantly more frequently than phenotypic/biochemical testing (392 (90.3%) vs. 338 (77.9%), respectively; p < .0001, Fisher's exact test). Table 6 also summarizes results for the 333 isolates representing the most common pathogenic species in this collection. PRA-hsp65 provided incorrect species identification for only 4 (1.2%) of these isolates and a new pattern for an additional 3 (0.9%). In contrast, phenotypic/biochemical testing provided incorrect assignments for 9 (2.7%) and ambiguous results for 31 (9.3%). Thus, the frequency of incorrect or uncertain species identification among these isolates of potential clinical importance was almost 6-fold higher for the phenotypic method than for PRA-hsp65 (40 (12.0%) vs. 7 (2.1%), respectively; p < .0001, Fisher's exact test).
PRA-hsp65 algorithm
Figures 1, 2 and 3 display an updated algorithm relating observed restriction fragments to particular species. We have included refinements of previously assigned fragment sizes based on our observations and on analysis of available hsp65 sequences from validated mycobacterial species found online [32]. Sequences retrieved from GenBank [33] comprising the 441 bp Telenti fragment were analyzed using BioEdit, version 7.0.5.3 [34], and/or the DNASIS Max version 1 program (Hitachi Software Engineering Co., USA). BstEII restriction patterns were distributed in seven possible configurations: 440, 320-130, 320-120, 235-210, 235-130-85, 235-120-100, and 235-120-85. HaeIII fragment sizes were adjusted to the nearest multiple of 5 to facilitate interpretation of gel bands. These adjustments were performed based on our experience with the analysis of more than 500 gels, both visually and using the GelCompar program. HaeIII restriction fragments shorter than 50 bp were not taken into account, as their discrimination in 4% agarose gels is often inaccurate. Different variants of PRA-hsp65 profiles from each species were numbered using Arabic numbers after the designation of the species, as reported in the PRASITE, except for M. avium, for which variants M. avium 1 and M. avium 2 were defined as reported in Leao et al. [20] and Smole et al. [27]. There were also PRA-hsp65 patterns frequently found in our routine work that had no sequence deposited. These patterns were included according to published data [11-17] or the PRASITE [25]. Figures 2 and 3 also include the two new patterns we observed in two or more isolates (NP11 and NP1), for which we propose the PRA-hsp65 designations M. terrae 4 and M. gordonae 10, respectively. The partial hsp65 gene sequences of these isolates have been deposited in GenBank [GenBank:EF601223 and GenBank:EF601222, respectively].
(Table 4 note, partial: ...[23]; NP17, DQ168662 [18]. All isolates with new PRA-hsp65 profiles were cultured from sputum, with the following exceptions: NP1: urine (2), feces, liver biopsy and unknown (one each); NP17: unknown (2).)
(Table 3 footnotes: a Species identification was determined by hsp65 sequencing for 113 isolates that had discordant results by PRA-hsp65 and phenotypic studies. For 71 isolates sequencing confirmed the species identification obtained by PRA-hsp65. For an additional 30 isolates, the PRA-hsp65 patterns obtained were previously unreported (see Table 4). b N, number of isolates for which the PRA-hsp65 identification shown was incorrect. c Total number of isolates of that species sequenced.)
The figures also indicate the basic phenotypic characteristics (time for growth and pigment production) observed for each species.
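For readers who want to experiment with the approach, the sketch below performs an in-silico PRA-hsp65 digest of a 441 bp amplicon sequence, applying the sizing conventions just described (HaeIII fragments rounded to the nearest multiple of 5, fragments under 50 bp discarded). The BstEII (G^GTNACC) and HaeIII (GG^CC) recognition sites are standard; everything else, including matching the result against the algorithm in Figures 1-3, is left out, and this is not the authors' code.

```python
# In-silico restriction digest of a hsp65 amplicon for PRA-style typing.
import re

SITES = {"BstEII": r"GGT[ACGT]ACC", "HaeIII": r"GGCC"}
CUT_OFFSET = {"BstEII": 1, "HaeIII": 2}   # G^GTNACC and GG^CC

def digest(seq: str, enzyme: str) -> list:
    """Fragment lengths after a complete digest of a linear sequence."""
    cuts = [m.start() + CUT_OFFSET[enzyme]
            for m in re.finditer(SITES[enzyme], seq.upper())]
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

def pra_pattern(seq: str) -> dict:
    """Apply the sizing conventions from the text to a 441 bp amplicon."""
    bst = digest(seq, "BstEII")
    hae = [5 * round(f / 5) for f in digest(seq, "HaeIII") if f >= 50]
    return {"BstEII": sorted(bst, reverse=True),
            "HaeIII": sorted(hae, reverse=True)}
```

Comparing the returned fragment lists against the size ranges in Figures 1-3 would then suggest a candidate species and pattern number.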
Discussion
The incidence of individual infections and outbreaks associated with NTM has risen dramatically over the past decade, establishing these organisms as significant human pathogens. Traditionally, the identification of mycobacteria to the species level has relied upon biochemical tests, which require three to six weeks to complete. Biochemical identification, even when performed by skilled microbiologists, may yield uncertain or even misleading results because (a) the tests used are inherently poorly reproducible; (b) the expected phenotypes are not an absolute property of the species, but may exhibit substantial variability; and (c) the database of phenotypic characteristics is limited to common species [10].
In recent years, DNA-based techniques have greatly facilitated identifying the species of NTM isolates and enabled a number of new species to be documented as infecting agents [35-39]. These approaches can be applied to a single isolated colony, and a definitive result can typically be obtained within a day. PRA-hsp65, first described by Telenti et al., is based on detection of restriction fragment polymorphisms in the hsp65 gene, thereby resolving the species of a mycobacterial isolate [11].
In the present study, 434 NTM isolates from clinical specimens were analyzed by conventional phenotypic methods and by PRA-hsp65; further, those isolates for which the results from the two methods were discordant were analyzed using nucleotide sequencing of the hsp65 gene. For 63 (14.5%) isolates phenotypic methods could not provide a species identification and for almost a third of these isolates even the apparent Runyon classification proved inconsistent with conventional expectations. For an additional 33 (7.6%) isolates the phenotypic identification proved incorrect. Phenotypic variability among fresh clinical isolates has been observed in other studies [10,40,41].
In contrast, PRA-hsp65 correctly identified over 90% of evaluable isolates using currently available databases of restriction digest patterns. For most of the remaining isolates, the PRA-hsp65 pattern observed was not previously reported. There were only 4 (1.2%) clinically significant isolates for which the current PRA algorithm indicated an incorrect species.
PRA-hsp65 has proven similarly effective in other studies. Hafner et al. used 16S rDNA sequencing to analyze 126 isolates selected at random from a larger collection [17]. The hsp65 method correctly identified 120 (95.2%) of these isolates. They also sequenced 10 additional isolates from the larger collection that gave PRA-hsp65 patterns not previously reported. All these isolates represented environmental species rarely associated with clinically significant disease.
Among our 434 isolates, 30 (6.9%) provided 13 PRA-hsp65 profiles not previously reported. Our series represents isolates cultured from varied clinical specimens collected in the metropolitan and surrounding areas of the city of Sao Paulo, Brazil. Most of the isolates with new PRA-hsp65 patterns were cultured from sputum. Many represented species typically considered non-pathogens; clinical correlation was not available and these isolates may reflect colonization by environmental organisms. Previous studies have similarly documented considerable species diversity as well as the genotypic diversity among mycobacteria isolates in Brazil [42,43]. Sequence analysis confirmed that the new profiles were allelic variations within the species, consistent with previous studies [13,17,20]. Of interest, four profiles were represented by more than one isolate, suggesting that they are potentially prevalent lineages rather than singular mutation events.
The most commonly identified new profile (designated NP1) was observed in 11 isolates, representing 20% of all M. gordonae in this collection. Comparison to the proto-

Figure 2: Algorithm of PRA-hsp65 patterns based on analysis of the 441 bp fragment of the hsp65 gene. BstEII patterns: 235 bp/210 bp. Columns 1 and 2: calculated BstEII and HaeIII fragment sizes in base pairs. Column 3: species names according to [32]. Column 4: PRA-hsp65 pattern type. Column 5: RGN: rapidly growing non-pigmented, RGS: rapidly growing scotochromogen, RGP: rapidly growing photochromogen, SGN: slowly growing non-pigmented, SGS: slowly growing scotochromogen, SGP: slowly growing photochromogen. Column 6: strain(s) used for hsp65 sequencing or reference of the publication describing this pattern.

We would concur with Hafner et al. that additional work is required to define and standardize the most effective electrophoresis conditions for resolving hsp65 digests of mycobacteria [17]. In a recent multicenter study evaluating PRA-hsp65, variations related to gel preparation, running conditions and documentation tools all complicated the interpretation of digestion patterns [29].
The ever-increasing amount of data available and the identification of new profiles make the analysis more complex. We present an updated PRA-hsp65 algorithm, which includes 174 patterns among 120 species and subspecies together with the basic cultural characteristics (rate of growth and pigment production). These core phenotypic traits can be readily determined and, as emphasized in a recent statement by the American Thoracic Society [45], can assist in confirming the molecular identification, detecting mixed cultures, and classifying species with indistinguishable PRA-hsp65 patterns.
Despite the complexities noted above, PRA-hsp65 analysis proved both more rapid and more reliable than phenotypic methods; it was particularly effective at resolving the most common pathogenic species. Commercial DNA probes are available only for a very few species and their expense may be prohibitive in some settings. DNA sequencing is more definitive, but sequencing capability is not yet widely available in clinical laboratories.
DNA extraction and PRA-hsp65 method
For DNA extraction, a loopful of organisms grown on Löwenstein-Jensen medium was suspended in 500 μl of ultrapure water, boiled for 10 min and frozen at -20°C for at least 18 h. Five microliters of DNA-containing supernatant were subjected to PCR amplification of the 441 bp fragment of the gene hsp65 [11]. Separate aliquots of the PCR product were digested with BstEII and HaeIII, and the resulting restriction fragments were separated by electrophoresis in a 4% agarose gel (NuSieve, FMC Bioproducts, Rockland, Maine, USA) with a 50 bp ladder as molecular size standard.
hsp65 partial gene sequencing

For those isolates for which conventional and PRA-hsp65 methods gave discordant or inconclusive results, the hsp65 amplicon was purified using the Novagen SpinPrep kit (Novagen, Canada) and then sequenced using BigDye terminator cycle sequencing reagents. Cycle sequencing was performed using a Perkin-Elmer 9600 GeneAmp PCR system programmed for 25 cycles at 96°C for 20 s, 50°C for 10 s and 60°C for 4 min. Sequencing products were cleaned with CentriSep spin columns (Princeton Separations, Applied Biosystems) and then analyzed on an ABI Prism 377 sequencer (Perkin-Elmer).
Sequence data analysis
Data produced by the sequencer were automatically processed using the EGene platform [46]. The trace files were initially submitted to Phred [47] for base calling and quality assessment. Sequences were then submitted to a quality filter that eliminated reads that did not present at least one window of 200 bases in which 190 bases had Phred quality above 15. Afterwards, low-quality bases were trimmed from the sequence. For each sequence, the trimming procedure isolated a "good quality" subsequence; in this remaining subsequence, any window of 15 bases has at least 12 bases above the quality threshold of 15. After trimming, contaminant screening was performed using Blastn [48] against Homo sapiens, Salmonella typhimurium and Gallus gallus databases. Finally, the clean sequences were identified by similarity using Blastn against a database of hsp65 genes. Sequences were considered a positive match when they presented a minimum similarity of 80 percent over a local alignment of at least 90 bases and an e-value of 1e-20. Species identification was confirmed if a ≥97% match with any sequence deposited in databases and published was achieved, according to the criteria proposed by McNabb et al. [44]. | 2017-06-21T01:35:15.899Z | 2008-03-20T00:00:00.000 | {
"year": 2008,
"sha1": "8108e6a2f1d0fd4cfc9977e15a6630a6e6b97b21",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-8-48",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df037438e90173801b3612d693cfd410bf40f8b5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
232105790 | pes2o/s2orc | v3-fos-license | Mathematical analysis and simulation of a stochastic COVID-19 Lévy jump model with isolation strategy
This paper investigates the dynamics of a COVID-19 stochastic model with isolation strategy. The white noise as well as the Lévy jump perturbations are incorporated in all compartments of the suggested model. First, the existence and uniqueness of a global positive solution are proven. Next, the stochastic dynamic properties of the stochastic solution around the deterministic model equilibria are investigated. Finally, the theoretical results are reinforced by some numerical simulations.
Introduction
Infectious disease modeling has captivated the interest of many research works during the last recent years [1-7]. The basic SIR model, representing the dynamic behavior of the three main populations, namely the susceptible (S), the infected (I) and the recovered (R), was first proposed in 1927 by Kermack and McKendrick [8]; the suggested model played an important role in initiating various research works in the field of disease dynamics. Understanding the interaction dynamics between the different infection components thus becomes an important issue in preventing many serious infectious disease outbreaks. For instance, several mathematical models have been used to better understand the behavior of various viral infections, such as the hepatitis B virus (HBV) [6,9-12], the human immunodeficiency virus (HIV) [1-4,13-15] and the hepatitis C virus (HCV) [16-19].
COVID-19 is a recent pandemic disease that has caused a great disaster worldwide. Since there is still no efficient vaccine against COVID-19, a substantial number of studies have been undertaken in order to understand the disease mechanism, reduce the disease spread and find some solutions to this serious infection. As has been established, COVID-19 is the recent form of coronavirus infection induced by the already known severe acute respiratory syndrome coronavirus SARS-CoV-2 [20][21][22][23]. This recently discovered disease can be transmitted from an infected person to any close unprotected person; likewise, a susceptible individual can become infected when touching any contaminated area [24]. Hence, isolating infected persons from the rest of the susceptible population becomes more and more important as a means to reduce and overcome COVID-19 propagation.
Recently, different models have been investigated to study COVID-19. For instance, the risk estimation, the infection evolution and the prediction of COVID-19 infection were studied in [25][26][27][28]; the authors conclude that, to ensure a quick ending of the epidemic, the intervention strategy and self-protection measures should always be maintained. The meteorological role and policy measures on COVID-19 spread were studied in [29,30]; it was concluded that the policy strategy reduced the infection and that meteorological conditions can be considered an important factor in controlling COVID-19. The effect of quarantine on coronavirus was discussed in [31]; the results confirm the importance of reducing contact between the infected and other individuals.
Since the isolation strategy is an important tool to reduce the infection, adding another compartment representing the isolated individuals (Q) to the classical SIR model becomes essential; the resulting epidemiological model is then known under the SIQR abbreviation [32].
To investigate the dynamics of COVID-19 in this paper, we subdivide the total population into four different epidemiological classes whose descriptions are given later. The parameters used in the model are summarized in Tables 1 and 2, and the schematic diagram of the compartmental COVID model is shown in Fig. 1.
The SIQR deterministic system of equations may take the following form:

dS/dt = λ − βSI − ζS,
dI/dt = βSI − (ζ + υ)I,
dQ/dt = υI − (ζ + d + κ)Q,
dR/dt = κQ − ζR,     (1)

where λ is the birth average of the susceptibles and their mortality rate is denoted by ζS. The susceptible become infected at a rate βSI; the death rate of the infected population is denoted by ζI; the infected become isolated at rate υI. The death rate of the isolated individuals due to the infection is represented by dQ and that due to other causes is ζQ. Finally, the isolated become recovered at rate κQ; the death rate of the recovered is denoted by ζR. On the other hand, stochastic quantification of several real-life phenomena has been very helpful in understanding the random nature of their occurrence. It has also helped in finding solutions to the problems arising from them, either by minimizing their undesirability or by maximizing their rewards. Besides, infectious diseases are exposed to randomness and uncertainty in the normal course of infection. Therefore, stochastic models are more appropriate than deterministic ones, considering the fact that stochastic systems take into account not only the mean of a variable but also the behavior of the standard deviation around it. Moreover, deterministic systems generate identical results for fixed initial values, whereas stochastic ones can give different predicted results. Several stochastic infection models describing the effect of white noise on viral dynamics have been deployed [33,7,34]. Recently and in the same context, a stochastic SIQR model was studied in [35]; the authors introduce Brownian perturbations into the four components of the model and study the different conditions for extinction and persistence of the infection. Both white and telegraph noises were taken into consideration to study the SIQR model in [36], where different sufficient conditions establishing persistence in mean were derived.
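A minimal numerical sketch of the deterministic SIQR system above can be obtained with a standard ODE solver; note that the parameter values below are illustrative placeholders, not the estimates of Table 1.

```python
# A minimal sketch (assumption: the SIQR system reconstructed above;
# parameters are illustrative, not the paper's estimates).
import numpy as np
from scipy.integrate import solve_ivp

lam, beta, zeta, ups, d, kappa = 0.5, 0.002, 0.01, 0.1, 0.05, 0.07

def siqr(t, y):
    S, I, Q, R = y
    dS = lam - beta * S * I - zeta * S
    dI = beta * S * I - (zeta + ups) * I
    dQ = ups * I - (zeta + d + kappa) * Q
    dR = kappa * Q - zeta * R
    return [dS, dI, dQ, dR]

sol = solve_ivp(siqr, (0.0, 500.0), [50.0, 1.0, 0.0, 0.0])
print(sol.y[:, -1])  # state reached at t = 500
```

Varying β in this sketch moves the long-run state between the disease-free and endemic regimes, mirroring the two cases discussed in the numerical section.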
In addition to the cited random noises, Lévy jumps present an important tool for modeling many real dynamical phenomena [37,38]. Indeed, because of the unpredictable stochastic properties of the disease progression, an infection dynamical model may undergo sudden significant perturbations in the disease process [39]. It is then more reasonable to describe those sudden fluctuations by introducing a Lévy jump behavior into the infection model. For instance, Berrhazi et al. [40] recently studied a stochastic SIRS model under Lévy jump fluctuations with a bilinear function describing the infection. The uniqueness of the global solution was established; also, through suitable Lyapunov functions, it was demonstrated that the stochastic stability of the steady states depends on some sufficient conditions for persistence or extinction of the studied infection. Motivated by the previous works, we consider in this paper the following stochastic SIQR model driven by Lévy noise:

dS = [λ − βSI − ζS] dt + σ_1 S dW_1(t) + ∫_U q_1(u) S(t−) Ñ(dt, du),
dI = [βSI − (ζ + υ)I] dt + σ_2 I dW_2(t) + ∫_U q_2(u) I(t−) Ñ(dt, du),
dQ = [υI − (ζ + d + κ)Q] dt + σ_3 Q dW_3(t) + ∫_U q_3(u) Q(t−) Ñ(dt, du),
dR = [κQ − ζR] dt + σ_4 R dW_4(t) + ∫_U q_4(u) R(t−) Ñ(dt, du),     (2)

where W_i(t) is a standard Brownian motion defined on a complete probability space (Ω, F, (F_t)_{t⩾0}, P) with the filtration (F_t)_{t⩾0} satisfying the usual conditions. We denote by S(t−), I(t−), Q(t−) and R(t−) the left limits of S(t), I(t), Q(t) and R(t) respectively. N(dt, du) is a Poisson counting measure with stationary compensator ν(du)dt, Ñ(dt, du) = N(dt, du) − ν(du)dt with ν(U) < ∞, σ_i is the intensity of W_i(t), and the jump intensities are represented by q_i(u) with i = 1, …, 4. The present work is organized as follows. The next section is devoted to establishing the existence and uniqueness of the global positive solution to the studied model (2). We calculate the basic reproduction number and the problem equilibria in Section "The basic reproduction number and equilibria". The stochastic behavior of the solution around the disease-free equilibrium is studied in Section "The stochastic property around the free-infection equilibrium". The dynamics of the solution around the endemic equilibrium is studied in Section "The stochastic property around the endemic equilibrium". The sensitivity analysis is presented in Section "Sensitivity analysis". The final part of this paper is dedicated to some numerical results supporting the theoretical findings.
The existence and uniqueness of global positive solution
The existence and uniqueness of the global positive solution of problem (2) is guaranteed by the following theorem.
Theorem 1. For any initial condition in R^4_+, the model (2) admits a unique global solution (S(t), I(t), Q(t), R(t)) ∈ R^4_+ for all t ⩾ 0 almost surely.
Proof. First, we know that the diffusion and the drift are locally Lipschitz functions; therefore, for any initial condition (S(0), I(0), Q(0), R(0)) ∈ R^4_+, there exists a unique local solution defined on [0, t_e), where t_e is the time of explosion.
In order to demonstrate that this solution is globally defined, we need to check that t_e = ∞ a.s. Firstly, we will demonstrate that (S(t), I(t), Q(t), R(t)) does not tend to infinity in bounded time. Let m_0 > 0 be a sufficiently large number such that the components of (S(0), I(0), Q(0), R(0)) all lie within [1/m_0, m_0]. We define, for each integer m ⩾ m_0, the stopping time t_m, which is increasing as m ↑ ∞. Let t_∞ = lim_{m→∞} t_m, where t_∞ ⩽ t_e a.s. We need to show that t_∞ = ∞, which implies that t_e = ∞ and (S(t), I(t), Q(t), R(t)) ∈ R^4_+ a.s. Assume the opposite, i.e. t_∞ < ∞ with positive probability. Then there exist two constants 0 < ε < 1 and T > 0 such that P(t_∞ ⩽ T) ⩾ ε.
Let us now consider the following functional, where a is a positive constant.
Let m ⩾ m_0 and T > 0 be arbitrary, and fix any 0 ⩽ t ⩽ t_m ∧ T = min(t_m, T). From Itô's formula, and choosing a = ζ/β, we obtain the estimate (3). Integrating both sides of Eq. (3) between 0 and t_m ∧ T then leads to a bound in expectation. Set Ω_m = {t_m ⩽ T} for m ⩾ m_1; this implies that P(Ω_m) ⩾ ε.
It follows from (4), where I_{Ω_m} denotes the indicator function of Ω_m, that letting m → ∞ yields a contradiction. Since T > 0 is arbitrary, we conclude that t_∞ = ∞ a.s. Therefore, the model has a unique global solution (S(t), I(t), Q(t), R(t)) ∈ R^4_+ a.s. □
The basic reproduction number and equilibria
The basic reproduction number of the model (1) is given by R_0 = λβυ / (ζ(ζ + υ)(ζ + d + κ)). Its biological meaning is the average number of secondary infected individuals generated by a single infected person at the start of the infection process. The problem (1) has a unique disease-free equilibrium E_f and an endemic equilibrium E* = (S*, I*, Q*, R*). Following the same reasoning as in [41,32] concerning the equilibria stability of the deterministic SIQR model, we can establish that E_f is globally asymptotically stable when R_0 ⩽ 1. Besides, when R_0 > 1, E_f loses its stability and the other equilibrium E* becomes stable.
The stochastic property around the free-infection equilibrium
Around the free-infection equilibrium E_f, we have the following stochastic property.

Theorem 2. If R_0 ⩽ 1, then the solution of the model (2) fluctuates around E_f.

Proof. After a change of variables centering the system at E_f (with, in particular, R(t) = Z(t)), the model (2) is rewritten accordingly. We consider a functional where c_1, c_2 and c_3 are three constants to be determined later. Using Itô's formula, choosing c_1 = 4ζ/β and c_2 = λ(16υ − (ζ + κ + d)) / (4υ(ζ + κ + d)), then integrating both sides of Eq. (5) between 0 and t and taking expectations, and letting ρ_1 = min{l_1, l_2, l_3, l_4}, we conclude the announced estimate. □

Remark 1. From this last result, one can conclude that when R_0 ⩽ 1, the solution fluctuates around the free steady state E_f.
The stochastic property around the endemic equilibrium
The infection steady state E * has the following stochastic property.
Sensitivity analysis
The sensitivity analysis is used principally to determine which model parameters can significantly change the infection dynamics. This allows one to detect the parameters that have a high impact on the basic reproduction number R_0. To perform such an analysis, we use the following normalized sensitivity index of R_0 with respect to any given parameter θ:

Υ_θ = (∂R_0/∂θ) × (θ/R_0).
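As a sketch, the normalized indices can be computed symbolically from the expression of R_0 quoted above; the numerical parameter values below are illustrative, not those of Table 1.

```python
# A minimal sketch (assumptions: R_0 as quoted in this section;
# parameter values are illustrative placeholders).
import sympy as sp

lam, beta, ups, zeta, d, kappa = sp.symbols(
    'lambda beta upsilon zeta d kappa', positive=True)
R0 = lam * beta * ups / (zeta * (zeta + ups) * (zeta + d + kappa))

vals = {lam: 0.5, beta: 0.002, ups: 0.1, zeta: 0.01, d: 0.05, kappa: 0.07}
for theta in (lam, beta, ups, zeta, d, kappa):
    index = sp.diff(R0, theta) * theta / R0  # normalized sensitivity index
    print(theta, sp.simplify(index), float(index.subs(vals)))
```

Running this reproduces the signs discussed below: the indices of λ and β equal +1 exactly, while ζ, d and κ yield negative values.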
From Table 1, we observe that the parameters λ, β and υ have positive sensitivity indices, while the remaining parameters ζ, κ and d have negative sensitivity indices. We remark that the parameters λ, β and υ have large magnitudes in absolute value, which means that they are the most sensitive parameters of our model equations. This indicates that any increase in λ, β or υ will cause an increase of the basic reproduction number, and consequently an increase of the infection. Conversely, an increase of the parameters ζ, d or κ will decrease R_0, which leads to a reduction of the infection. Fig. 2 illustrates the contour plot of R_0; we observe that for β = 1 and υ = 0 the value of R_0 reaches the maximum value 5.11 × 10^3. By decreasing β and υ from 1 to 0, we remark that the value of R_0 also decreases and tends toward 8.75 × 10^−3 (corresponding to β = 0; υ = 0).
This result reflects the impact of these two key parameters in controlling the infection. From the contour plot of R_0 given in Fig. 3, we observe that for β = 1 and κ = 0 the value of R_0 reaches the maximum value 1.03 × 10^3. When the parameter κ is increased from 0 to 1 and the parameter β is decreased from 1 to 0, we observe that R_0 gradually decreases and tends to the limit value 1.93 × 10^−1 (corresponding to β = 0; κ = 1). Hence, the parameters β and κ play an essential role in controlling the infection spread.
The last contour plot of R_0 is illustrated in Fig. 4. We observe that when β = 1 and d = 0 the value of R_0 reaches its maximal value of 5.74 × 10^2. By decreasing β from 1 to 0 and increasing d from 0 to 1, we observe that the value of R_0 gradually decreases and tends towards 1.57 × 10^−1 (corresponding to β = 0; d = 1). This confirms the impact of β and d in controlling the progression of the infection.
Numerical simulations and discussion
This section illustrates our mathematical results with different numerical simulations. To this end, we apply the algorithm given in [42] to solve the system (2). The parameters of our model representing the infection and recovery rates are estimated from the COVID-19 Morocco case [43]. The parameter values used in our numerical simulations are given in Table 1. Figure 5 shows the dynamics of the COVID-19 infection during the observation period in the case of disease extinction. From this figure, we clearly observe that the curves representing the deterministic model converge towards the disease-free equilibrium E_f = (5.1 × 10^2, 0, 0, 0). The curves that represent the stochastic model fluctuate around those of the deterministic one. Moreover, it is worth noticing that in this case the susceptibles increase to reach their maximum while the other SIQR components, that is, the infected, the quarantined (the isolated) and the recovered, vanish, which means that the disease dies out. With the parameters used in this figure (see Table 1), we have R_0 = 0.95 < 1, which indicates the extinction of the infection. This is consistent with our theoretical findings concerning the extinction of the SIQR infection.
The evolution of the infection for both the deterministic model and the stochastic model with Lévy jumps is illustrated in Fig. 6 in the case of disease persistence. From this figure, we can see that the plots corresponding to the deterministic model converge towards the endemic equilibrium E* = (4, 3.42 × 10^3, 117.17, 83.69). The fluctuation around the endemic equilibrium E* is clearly observed in the stochastic numerical results. We note that in this epidemic situation, all four SIQR compartments, i.e. the susceptible, the infected, the quarantined (the isolated) and the recovered, remain at a constant level, which means that the disease persists. With the parameters used in this figure (see Table 1), we have R_0 = 31.12 > 1, which indicates the persistence of the infection. This is consistent with our theoretical findings concerning the infection persistence.
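For readers who wish to reproduce qualitatively similar sample paths, here is a minimal Euler-type sketch of (2) with multiplicative Brownian noise and compound-Poisson jumps. It is a generic scheme, not the algorithm of [42]; all rates, noise intensities and jump characteristics are illustrative assumptions.

```python
# A minimal sketch (assumptions: constant jump amplitude q_i(u) and finite
# jump rate nu(U); parameters are illustrative, not the paper's estimates).
import numpy as np

rng = np.random.default_rng(0)
lam, beta, zeta, ups, d, kappa = 0.5, 0.002, 0.01, 0.1, 0.05, 0.07
sigma = np.array([0.02, 0.02, 0.02, 0.02])   # Brownian intensities sigma_i
jump_rate, jump_size = 0.1, -0.05            # stand-ins for nu(U) and q_i(u)

dt, n_steps = 0.01, 20_000
y = np.array([50.0, 1.0, 0.0, 0.0])          # (S, I, Q, R)
for _ in range(n_steps):
    S, I, Q, R = y
    drift = np.array([lam - beta*S*I - zeta*S,
                      beta*S*I - (zeta + ups)*I,
                      ups*I - (zeta + d + kappa)*Q,
                      kappa*Q - zeta*R])
    dW = rng.normal(0.0, np.sqrt(dt), 4)
    jumps = rng.poisson(jump_rate * dt, 4) * jump_size * y
    y = np.maximum(y + drift*dt + sigma*y*dW + jumps, 0.0)  # keep positivity
print(y)
```

Plotting the trajectory of each compartment over time yields the kind of fluctuation around the deterministic curves described for Figs. 5 and 6.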
Conclusion
In the present work, a stochastic coronavirus model with Lévy noise is presented and analyzed. We have given a four-compartment SIQR model representing the interaction between the susceptible, the infected, the quarantined (the isolated) and the recovered. White noise as well as Lévy jump perturbations are incorporated in all model compartments. We have proved the existence and uniqueness of the global positive solution for the stochastic COVID-19 epidemic model, which ensures the well-posedness of our mathematical model. By using some appropriate functionals, we have shown that the solution fluctuates around the steady states under sufficient conditions. Different numerical results support our theoretical findings. Indeed, the extinction of the disease is observed when the basic reproduction number is less than unity, while the persistence of the disease is observed when the basic reproduction number is greater than one. Moreover, the fluctuation of the stochastic solution around the disease-free equilibrium is observed in the extinction case, and the fluctuation around the endemic equilibrium is observed in the persistence case.
Funding
None.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2021-03-04T14:10:46.838Z | 2021-03-04T00:00:00.000 | {
"year": 2021,
"sha1": "c98ad31472da3eb65c182d3e7f6aa46e81b3f078",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rinp.2021.103994",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a69cdd8d8dbd29f6a59b1c7ac86e8563401fcd4",
"s2fieldsofstudy": [
"Mathematics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54809151 | pes2o/s2orc | v3-fos-license | Impact of Leadership Styles on Employee Adaptability in Call Center : A Perspective of Telecommunication Industry in Malaysia
The purpose of this study is to find how employees adapt to different leadership styles in call centers in the telecommunication industry. This exploratory research was conducted among employees in call centers in the telecommunication industry to test the relationship between Leadership Styles and Employee Adaptability. The researcher used statistical inference, more specifically Linear Regression, to test the relationship between the two variables. Results indicated that all three Leadership Styles have an influencing role on Employee Adaptability. Due to company policy, a high volume of responses was not achieved. Employees whose job functions did not involve directly reporting to their managers were not interested in responding to the surveys. Also, this study did not investigate the maturity level of the employees, which has an influence on adaptability, and the researcher was not able to obtain responses from all employees working in shifts. As few studies have explored Leadership Styles and Employee Adaptability, this study expands our knowledge of the effect of leadership styles on employees' adaptability in call centers in the telecommunication industry in Malaysia.
Introduction
The greatest asset of a call center is its employees. Therefore, any changes in the environment that will affect the employees' adaptability must be treated with high regard. In general, the employees' performance and motivation may decline in the long term, but effective leadership approaches help to minimize this impact. Ivancevich, Konopaske and Matteson (2008) described leadership as the process of influencing others to facilitate the attainment of organizationally relevant goals. Good leadership assists in effectively meeting job-related demands, creating higher-performing teams, and developing loyal, committed and highly motivated employees. Researchers have developed various leadership styles which can assist aspiring leaders in understanding which styles they should adopt and which are liked by their followers.
In order to remain effective, the leader must be a person of great character with integrity and must make a daily commitment to lead by understanding the employees. Whether a leader manages a small team or a large organization, the leader should identify the leadership style that suits him and his employees best. Consciously or subconsciously, a leader uses some of the leadership styles that go along with their personality. However, understanding the different styles and their impact can help leaders develop their own personal leadership style that will be accepted by their followers.
The perception that certain traits or personality characteristics are associated with good leaders, or that some are born to lead, is no longer accepted (Nahavandi, 2006). In this modern age, leadership theorists have accepted that leadership is based on behaviors and skills that can be learned. With this in mind, there are a few factors that have influenced the adaptability and understanding of leadership. This research will look at some of the most common and popular leadership styles being practiced in this modern era.
Problem Statement
Leaders determine the direction of an organization. When a different leadership style takes place, it affects employee performance either positively or negatively. The employees will act in a supportive manner towards their leader if they feel and accept that the leader will lead the organization in a direction that will benefit the organization and themselves. Unfortunately, at times it is more likely that employees will experience incongruence among their feelings, thoughts, and actions towards their leaders, which can create a challenging atmosphere. On the other hand, leaders face increased pressure to respond to the employees' mood congruence and are likely to practice an inappropriate leadership approach with their employees. As a result, this affects the employees' adaptability towards their leader's style and approach, which results in employee performance degradation and creates resistance.
Therefore, the researcher intends to investigate the adaptability of employees towards leaders' different approaches in the call centers of the telecommunication industry and provide specific strategies for leaders to enhance their employees' adaptability.
Leadership Styles
Effective leadership is an important element of a successful organization. Leadership approaches that are not well perceived or accepted by employees will lead to decreased organizational efficacy. An organization's performance can move unsteadily during and after a leadership change if appropriate leadership is not demonstrated. Thus, leaders play a very important role in determining employee adaptability.
Transactional leadership is really a type of management, not a true leadership style, because the focus is on short-term tasks. Burns (cited in Boehnke et al., 2003) contrasted transactional and transformational leadership, believing that they lie at opposite ends of a continuum. He found that transactional leaders work within the organizational culture as it exists, whereas the transformational leader changes the organizational culture. Bass (cited in Boehnke et al., 2003) also expressed that transactional leaders do not voluntarily involve themselves with employees' work until a failure occurs, whereas transformational leaders act as role models for employees, motivate them, and stimulate them intellectually.
A model derived from Bass and Avolio (1990), as shown below, explains that transactional leaders pursue a cost-benefit, economic exchange to meet the employees' present material and psychic needs in return for the services provided by the employees. Bass also stressed that some leaders not only recognize the transactional needs of potential employees but also tend to go beyond them, seeking to satisfy higher needs and engage the followers at their full capabilities to achieve a higher level of needs according to Maslow's hierarchy. Bass explained that, in order to achieve performance beyond expectations, a transformational leader should exert idealized influence, show individualized consideration through mentoring, provide inspirational motivation through team spirit, and stimulate intellectual creativity and innovation among employees.
Transformational Leadership
One leading theory of leadership, known as transformational leadership, has gained prominence in the post-industrial business landscape. According to Boehnke et al. (2003), and also many other researchers such as Xenikou and Simosi (2008), Felfe and Schyns (2006), and Avolio et al. (2009) who supported the work of Bass and Burns, transformational leadership raises leadership to the next level. Its elements encourage followers to commit to a shared vision and the goals of an organization or unit, challenging them and developing their leadership capacities through mentoring, coaching, and the provision of both challenge and support.
Transformational leadership occurs when a leader with charisma and vision transforms his/her followers into highly motivated individuals who trust the leader, while demonstrating behaviors that contribute to the achievement of organizational goals. Research conducted by De Jong and Den Hartog (2007) and Aragon-Correa et al. (cited in Raja & Palanichamy, 2011) addressed the fact that transformational leadership draws more attention in organizations since it contributes to innovation, organizational learning and employees' creativity skills. Raja and Palanichamy (2011) concluded in their research that transformational leadership best suits employee performance improvement. In addition, the research of Birasnav and Dalpati (2009) concluded that transformational leaders have the potential to affect their employees' perception of human capital benefits. Such human capital benefits include a high individual return on investment, the opportunity to participate in high-profile projects, and an increase in status and authority (Motley, 2007).
Transformational leaders create individual and team spirit among employees as they show interest and optimism towards employees through coaching, encouraging and supporting. According to research by Nemanich and Keller (2007), leaders who possess the characteristic of inspirational motivation enhance the employees' goal or job performance towards achieving the targets set by the management. As a result, the leaders improve employees' performance in their job activities and produce a high return on investment from employees (Boerner et al., 2007).
In short, transformational leaders are exceptionally motivating and they are trusted by employees. When every employee in a team trusts their leader, the leader achieves the organization's goals easily.
Situational Leadership
Employees prefer a leader who can guide and make decisions instantaneously. However, the employees' motivation and capability become important factors that affect the situational decision. A study by Roy (2006) concluded that Situational Leadership can be used as a framework to furnish leaders with the guidance to coach their employees throughout the performance coaching cycle. During the initial meeting, Situational Leadership guides the leader in setting the degree of participation for the planning and goal-setting process. During the rest of the period, it guides the leader in each interaction with the follower. This is also supported by research conducted by Mujtaba (2009), who concluded that, using situational leadership skills, managers tend to remain focused on the readiness of their employees and coach them according to their level of maturity while adapting their communication styles to the way employees like to be treated. In this research, Mujtaba (2009) also provided an overview of situational leadership and linked it to diversity management and the coaching of employees in the organization.
Situational Leadership combines four different leadership styles into a practical and methodical order for managers to lead and manage staff effectively. It teaches leaders to diagnose the needs of an individual or a team, and to use the appropriate leadership approach to respond accordingly.
Employee Adaptability
A leader must understand the preferred leadership styles, how the employees adapt to the leaders, and their effect on performance (Wilson, 2010). Although employee adaptability can be related to many other factors, the researcher will focus on employee adaptability towards the three leadership styles above in relation to employee performance, turnover, participation and relationship with the manager.
Employee Performance
Research conducted by Liu and Batt (2010) on employee performance noted that managers should take employee performance improvement seriously by providing individualized instruction and guidance. This is also supported by the research of Heslin, Vandewalle and Latham (2006), who stressed that managers must not only manage their teams but also coach individual employees for their betterment. The close guidance of the manager is appreciated, and the portrayed leadership styles are accepted by the employees, who tend to follow the instructions given by their leader without much hesitation.
Research conducted by Chen and Silverthorne (2005) confirmed that an employee's ability and willingness to perform affect each other. The findings suggested that organizations should have the right leaders to give employees suitable training to increase their ability and productivity. Thus, leaders have to practice a more adaptable leadership approach in order to encourage employees to perform.
Turnover
The turnover issue has been a critical organizational issue for some time. Turnover intention has been emphasized as an important factor in the financial performance of organizations and is influenced by various factors within organizations (Lambert et al., 2001).
Employees are most likely to opt for turnover when both their psychological well-being and their job satisfaction are low (Wright & Bonett, 2007). In line with this statement, a survey conducted by Truskie (2008) found that the number one reason employees leave a company is their manager. Employee turnover can often be attributed to poor managerial performance, low emotional intelligence and ineffective leadership. This is also supported by Joo and Park (2009), whose findings show that the leadership factor is one of the main reasons for turnover intention.
Another survey reported in Workforce Magazine (cited in Carruthers, 2010) stated that the results of interviews with 20,000 departing workers indicated that the main reason employees chose to leave was poor management, while an HR magazine (cited in Carruthers, 2010) found that 95% of exiting employees attributed their search for a new position to ineffective leadership. Therefore, it is very significant that leaders hold an important role in managing and retaining employees.
Employee Participation
Employee participation actually increases organizational commitment and job satisfaction and, in fact, during an organizational change it fosters higher levels of change acceptance and effectiveness (Sagie & Koslowsky, 1996). This can be achieved through effective leadership styles.
With the trend moving toward employee empowerment, the leader's relationship with employees becomes more cohesive. Employees tend to feel more empowered when they are asked for input about a particular task concerning them rather than when they are told exactly how to do something. But not all employees are alike; thus, a leader needs to assess an employee's ability and motivation in order to get the employees to participate and commit to their jobs. This statement correlates with the research done by Walsh and Taylor (2007), who concluded that affective commitment by employees is a strong predictor of the employee attitudes and behavioral intentions that bring them to participate in the working environment.
Employee-Manager Relationship
Gill (2007) describes a good employee-management relationship as the key to the success of a hospitality organization. In his research, Gill (2007) concluded that employees' trust towards managers has a significant impact on their job satisfaction, which describes the level of participation of employees with their managers in performing their jobs. Thus, leaders should practice the appropriate leadership styles, but McCann (2009) stresses that leaders cannot just take one style and think it will work for everyone. An effective leader is able to alter their style by adopting an appropriate mix of task and relationship behavior to maintain the connection with the employees.
Good leadership is understood to increase the likelihood of having a more effective employee-manager relationship at the workplace. This supports the findings of Snell and Dickson (2010) that employees react positively when managers implement good leadership practices, such as providing clear direction, coaching and advancing the professional development of employees.
Hypotheses
The literature review provides an overview of the three types of leadership styles and employee adaptability. Based on this information, a theoretical framework was formed and the following hypotheses were developed. Hypotheses 1: Leadership styles have significant influence on Employee Adaptability.
Research Design
The purpose of this study is hypothesis testing, that is, to test the relationship between the independent variables and the dependent variables. This study focuses on employees in call centers. The investigation was done using regression analysis. The sampling was based on probability, so that the sample represented the population. The research was performed with minimal interference, in a non-contrived environment, and at a single point in time. The data were collected using questionnaires with a Likert scale, while the employees' demographics used an ordinal scale. The analysis was performed using hypothesis testing.
The sample comprises employees in call centers in the telecommunication industry. The researcher approached the organizations and explained the study to the management and employees. The questionnaire used a 5-point Likert scale for most of the questions. A sample size greater than 30 and less than 500 is suitable for most research and, generally, the number of samples should be 10 times the number of variables studied (Sekaran, 2003).
Pilot Study
The questionnaire was constructed based on the literature review. A pilot study was conducted in which the questionnaires were randomly distributed to call centers in the telecommunication industry. A response of 20 samples was collected. The reliability was tested and the Cronbach's Alpha read more than 0.5, which indicated that the questionnaire was reliable.
Final Study
In the final study, the questionnaires were distributed to call centers in the telecommunication industry. A total of 104 responses were obtained. The collected responses were subjected to factor analysis, followed by reliability analysis, before proceeding to regression analysis.
Demographic Analysis of the Respondents
A total of 104 employees responded to this study. The demographic profile of the respondents is explained below, and a summary of the information can be seen in Table 1.
Age
The respondents' age is factored into this survey to understand the age range of the call center employees and how they adapt to the different leadership styles. Respondents less than 25 years old constitute 12.5%, while respondents between 26 and 30 years constitute 48.1% and the group between 31 and 35 years old constitutes 26.9%. Lastly, the respondents aged between 41 and 45 years constitute 1% only.
Gender
With respect to gender, a total of 60 female and 44 male employees participated in this study.
Education
The respondents were classified into four groups. High school respondents constituted only 1%, respondents with a certificate or diploma constituted 20.2%, graduates with a bachelor degree 68.3%, and those with a master degree 10.6%.
Position
Position refers to the position in which the respondents are working in the call centers. The positions are classified into four groups, i.e. non-management, lower management, middle management and senior management. These groups constituted 40.4%, 47.1%, 11.5% and 1% respectively.
Experience
Experience refers to the number of years the employee has been in the position. It is classified into five groups, i.e. less than 5 years, 5 to 10 years, 11 to 15 years, 16 to 20 years and above 20 years. The findings constituted 43.3%, 49.0%, 5.8%, 1.0% and 1.0% respectively.
Type of Organization
Type of organization refers to the category of the company in the telecommunication industry, i.e. Foreign-Owned Multinational Corporation, Local Public Listed (PLC Bhd) and Local Organization (Sdn Bhd). They constitute 1%, 88.5% and 10.6% respectively.
Factor Analysis
Factor analysis is a statistical technique used to find whether the observed variables are related to unobserved variables, which are called factors. By using this technique, the variances are generally summarized into a smaller set, which provides the key information of the variables. This analysis is performed as a test of the validity of measures for both the dependent and independent variables.
The findings of this study showed that the KMO (Kaiser-Meyer-Olkin) value read above 0.5 for all three independent variables. Transactional Leadership constituted 0.776, Transformational Leadership 0.837 and Situational Leadership 0.819. The summary of the KMO and Bartlett's test is shown below.
The analysis of the dependent variables showed that the KMO (Kaiser-Meyer-Olkin) values for all the variables are above 0.5. Employees' Performance, Turnover, Participation and Relationship with Manager showed 0.897, 0.682, 0.888 and 0.755 respectively. The summary of the KMO and Bartlett's test is shown below.
Reliability Analysis
The reliability analysis is performed for each dependent and independent variable, similar to the factor analysis.
According to Sekaran (2003), the minimum acceptance criterion for reliability is that the Cronbach's Alpha value should exceed 0.5, with high reliability reflected by a Cronbach's Alpha above 0.8. The findings of this study showed Cronbach's Alpha values above 0.5 for all three variables. Transactional Leadership, Transformational Leadership and Situational Leadership constituted 0.577, 0.721 and 0.890 respectively. The results are reflected in Table 3 below.
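For reference, Cronbach's Alpha as used above can be computed as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on hypothetical Likert data follows; the data below are synthetic, not the study's actual responses.

```python
# A minimal sketch of the Cronbach's Alpha reliability coefficient.
# `items` is a hypothetical (respondents x items) array of Likert scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                             # number of items
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
demo = rng.integers(1, 6, size=(104, 8))  # 104 respondents, 8 items, 5-point scale
print(cronbach_alpha(demo.astype(float)))
```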
Correlation Analysis
Correlation analysis is a statistical method used to observe the existence of relationships between the independent variables and the dependent variables (Sekaran, 2006). The analysis is performed to see whether two variables are perfectly related in a positive linear relationship, in a negative linear relationship, or have no linear relationship between them.
The study showed that the correlation between leadership styles and performance is significant for all the leadership styles. Transactional Leadership showed p = 0.009, Transformational Leadership p = 0.018 and Situational Leadership p = 0.000.
The correlation between leadership styles and turnover showed that Transactional Leadership is significant with p = 0.000, but Transformational and Situational Leadership are not significant, with p = 0.075 and p = 0.508 respectively.
The correlation between leadership styles and participation indicated that Transactional and Situational Leadership are significant with p = 0.009 and p = 0.000 respectively, while Transformational Leadership showed a significance level of p = 0.182.
Lastly, the correlation between leadership styles and relationship with managers indicated that only Situational Leadership is significant, with p equal to 0.000. Transactional and Transformational Leadership proved not significant to employees' relationship with their manager, with p = 0.187 and p = 0.126 respectively. The summary of the correlations is shown in Table 4 below.
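A minimal sketch of the kind of significance test summarized in Table 4 is shown below; the variable names and data are synthetic stand-ins for the actual leadership-style and adaptability scores.

```python
# A minimal sketch: Pearson correlation between a leadership-style score and
# an adaptability measure, with its p-value compared against the 0.05 level.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
style_score = rng.normal(3.5, 0.6, 104)                    # hypothetical style score
performance = 0.5 * style_score + rng.normal(0, 0.5, 104)  # correlated outcome

r, p = pearsonr(style_score, performance)
print(f"r = {r:.3f}, p = {p:.4f}, significant = {p < 0.05}")
```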
Interpretation of Analysis
From the results obtained, we can conclude the following: The leadership style has an impact on employee adaptability.
Transactional Leadership has an impact on employees' performance, turnover and participation only.
Transformational Leadership has an impact on employees' performance only.
Situational Leadership has an impact on employees' performance, participation and relationship with manager.
Findings
This research studies employees' adaptability towards the different leadership styles practiced in call centers in the telecommunication industry. The findings of the current study provide strong evidence that managers' leadership styles have an influence on the employees' performance, turnover, participation and relationship with them in the call center business unit.
The study revealed that transactional leaders influence the employees' adaptability in the aspects of performance, turnover and participation, with less focus on the relationship with the manager. This could be explained by the fact that most employees are motivated by rewards. Transformational Leadership solely influenced employee performance in the call center; this could be because the employees are managed by female managers. Finally, the findings showed that Situational Leadership has a significant influence on the employees' performance, participation and relationship with the manager. This may be because managers equip themselves with this style of leadership and demonstrate it in particular situations. The findings of this study did not deny that the three types of leadership are correlated with employees' performance, turnover, participation and relationship with the manager. Thus, the eight hypotheses were verified in this study. The final framework of this research was established as per Figure 1 below.
Figure 1. Final framework
Future Research
Since this research focused on employee adaptability towards different leadership styles, it is possible to study the hypothetical relationship between the extent of emotions at the workplace and employees' adaptability. Future research should also be carried out on the individual leadership styles in the call centers of the telecommunication industry itself, as it may provide in-depth knowledge of the most suitable leadership style. Also, future research can be conducted in call centers in other industries, such as banking, the public sector and airlines, to better understand the approaches portrayed and the preferences in those sectors. All the above topics will provide us with better knowledge of leadership styles and their impact on employees' adaptability.
Hypotheses 2: Transactional Leadership has significant influence on Employee Adaptability.
Hypotheses 3: Transformational Leadership has significant influence on Employee Adaptability.
Hypotheses 4: Situational Leadership has significant influence on Employee Adaptability.
Hypotheses 5: Transactional, Transformational and Situational Leadership styles have significant influence on Performance.
Hypotheses 6: Transactional, Transformational and Situational Leadership styles have significant influence on Turnover.
Hypotheses 7: Transactional, Transformational and Situational Leadership styles have significant influence on Participation.
Hypotheses 8: Transactional, Transformational and Situational Leadership styles have significant influence on Relationship with Manager.
Table 1 .
Summary of demographic profile
Table 2 .
Summary of KMO and Bartlett's test for the independent variables (leadership styles) and the dependent variables
Table 3 .
Summary of Cronbach's Alpha for independent variable (leadership styles) and dependent variable (employee adaptability)
Table 4 .
Summary of regression statistic for dependent variable (performance, turnover, participation and relationship with manager) | 2018-12-07T00:14:51.456Z | 2014-03-31T00:00:00.000 | {
"year": 2014,
"sha1": "dd2b59bea0e0b8ccd084b1766a6c374e90ac5d6c",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ass/article/download/35651/20191",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dd2b59bea0e0b8ccd084b1766a6c374e90ac5d6c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
244478219 | pes2o/s2orc | v3-fos-license | Feasibility of sparse large Lotka-Volterra ecosystems
Consider a large ecosystem (foodweb) with n species, where the abundances follow a Lotka-Volterra system of coupled differential equations. We assume that each species interacts with d other species and that their interaction coefficients are independent random variables. This parameter d reflects the connectance of the foodweb and the sparsity of its interactions, especially if d is much smaller than n. We address the question of feasibility of the foodweb, that is, the existence of an equilibrium solution of the Lotka-Volterra system with no vanishing species. We establish that, for a given range of d and under an extra condition on the sparsity structure, there exists an explicit threshold depending on n and d and reflecting the strength of the interactions, which guarantees the existence of a positive equilibrium as the number of species n gets large. From a mathematical point of view, the study of feasibility is equivalent to the existence of a positive (component-wise) solution to the equilibrium linear equation. The analysis of such positive solutions essentially relies on large random matrix theory for sparse matrices and on Gaussian concentration of measure. The stability of the equilibrium is established. The results in this article extend to a sparse setting the results obtained by Bizeul and Najim in Proc. AMS 2021.
Introduction
Lotka-Volterra system of coupled differential equations.
For a given foodweb, denote by x_n = (x_k(t))_{1≤k≤n} the vector of abundances of the various species at time t ≥ 0. In an LV system, the abundances are connected via the following coupled equations:

dx_k(t)/dt = x_k(t) ( r_k − x_k(t) + (M_n x_n(t))_k )   for k ∈ [n] := {1, · · · , n},

where M_n = (M_kl) stands for the interaction matrix, and r_k for the intrinsic growth of species k. At the equilibrium dx_n/dt = 0, the abundance vector x_n = (x_k)_{k∈[n]} is a solution of the system:

x_k ( r_k − x_k + (M_n x_n)_k ) = 0   for x_k ≥ 0 and k ∈ [n].   (1)

An important question, which motivated recent developments [1,5], is the existence of a feasible solution x_n to (1), that is a solution where all the x_k's are positive, corresponding to a scenario where no species disappears. Notice that in this latter case, the system (1) takes the much simpler form: x_n = r_n + M_n x_n, where r_n = (r_k).
Aside from the question of feasibility arises the question of stability : for a complex system, how likely a perturbation of the solution x n at equilibrium will return to the equilibrium? Gardner and Ashby [6] considered stability issues of complex systems connected at random. Based on the circular law for large random matrices with i.i.d. entries, May [7] provided a complexity/stability criterion and motivated the systematic use of large random matrix theory in the study of foodwebs, see for instance Allesina et al. [8]. Recently, Stone [9] and Gibbs et al. [10] revisited the relation between feasibility and stability.
In the spirit of May and in the absence of any prior information, we shall model the interactions of matrix M_n as random. In order to simplify the analysis, we will consider intrinsic growths (r_i)_{i∈[n]} equal to 1, and the equations under study will take the following form in the sequel:

dx_k(t)/dt = x_k(t) ( 1 − x_k(t) + (M_n x_n(t))_k ),   k ∈ [n].   (2)
Sparse foodwebs
One of the most important parameters of the complexity of an ecosystem is its connectance, which is the proportion of interactions between species (see for instance [11]). This corresponds to the proportion of non-zero entries in the interaction matrix M_n. May's complexity/stability criterion asserts that the instability of an ecosystem increases with the connectance (i.e. the less sparse M_n is, the more unstable the ecosystem equilibrium is). More specifically, [12] specifies that the effect of sparsity depends on the nature of the interactions (random, predator-prey, mutualistic or competitive). In the case of random interactions, [13] supports the idea that sparse ecosystems lead to a stable equilibrium. Based on ecological and biological data (see for instance [14]), recent studies [15] suggest that foodwebs can actually be very sparse.
In a recent theoretical study, [16] studies the properties of sparse ecological communities in relation to the strength of interactions.
To encode this sparsity in a simple parametric way, we first consider a directed d_n-regular graph with n vertices and its associated n × n adjacency matrix ∆_n = (∆_ij), where ∆_ij = 1 if there is an edge pointing from i to j, and ∆_ij = 0 otherwise.
In the considered graph, each vertex i has d_n edges pointing from a vertex k ∈ [n] to i, and d_n other edges pointing from i to a vertex l ∈ [n]. An edge pointing from i to i is called a loop. In particular, matrix ∆_n is deterministic, has exactly d_n non-null entries per row and per column, and n × d_n non-null entries overall. Denote by A_n an n × n matrix with independent Gaussian N(0, 1) entries and consider the Hadamard product matrix ∆_n • A_n = (∆_ij A_ij). Let (α_n)_{n≥1} be a positive sequence. We assume that matrix M_n has the following form:

M_n = (∆_n • A_n) / (α_n √d_n).   (3)

Let us comment on the normalizing factor 1/(α_n √d_n). Theoretical results on sparse large random matrices [17] assert that asymptotically

‖∆_n • A_n‖ / √d_n = O(1),

where ‖·‖ stands for the spectral norm, provided the degree d_n of the graph satisfies d_n ≥ log(n), a condition that we will assume in the remainder of the article. In particular, the normalization 1/√d_n guarantees that matrix ∆_n • A_n / √d_n has a macroscopic effect in the LV system, even for large foodwebs (large n).
The extra normalization 1/α_n is to be tuned to get a feasible solution. Denote by 1_n the n × 1 vector of ones and by A^T the transpose of matrix A. In the full matrix case ∆_n = 1_n 1_n^T, [5], based on [18], proved that a feasible solution is very unlikely to exist if α_n ≡ α is a constant. We thus consider the regime where α_n → ∞ and will prove that there is a sharp threshold α_n ∼ √(2 log(n)) above which a feasible solution exists (with high probability) and below which it does not. This phase transition has already been established in [1] for the full matrix case.
One can notice that, in sparse foodwebs (d_n < n), the interaction coefficients can be stronger than when the interaction matrix is full (i.e. when d_n = n), in the sense that 1/√d_n > 1/√n.
Models and feasibility results
The sparse random matrix model under investigation is given in (3). Specifying the range of d n and the structure of ∆ n , we introduce hereafter two models amenable to analysis.
Model (A): Block permutation matrix.
Let n = d × m. Denote by S_m the group of permutations of [m] = {1, . . . , m}. Given σ ∈ S_m, consider the associated permutation matrix P_σ, with entries (P_σ)_{ij} = 1 if j = σ(i) and 0 otherwise. In Model (A):
• matrix ∆_n introduced in (3) is a block-permutation adjacency matrix given by

∆_n = P_σ ⊗ (1_d 1_d^T),

where ⊗ is the Kronecker matrix product.
Notice that ∆ n still corresponds to the adjacency matrix of a d-regular graph.
Example 1. To illustrate these definitions, we provide an example. Let n = m × d with m = 4 and σ ∈ S_4. The matrix P_σ is the 4 × 4 permutation matrix of σ; the matrix ∆ is obtained from P_σ by replacing each unit entry with a d × d block of ones (and each zero entry with a d × d block of zeros); and ∆ • A is the same block matrix with its non-null entries replaced by the corresponding Gaussian entries of A.

Model (B). Assume that M_n is given by (3) and that d = d_n satisfies

lim_{n→∞} d_n / n = β > 0.
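A minimal numerical sketch of Model (A)'s construction may be useful: it builds ∆_n = P_σ ⊗ 1_d 1_d^T from a random permutation, checks d-regularity, and then forms M_n as in (3). The sizes below are illustrative.

```python
# A minimal sketch of Model (A): block-permutation adjacency and the
# normalized interaction matrix M = (Delta o A) / (alpha * sqrt(d)).
import numpy as np

rng = np.random.default_rng(0)
m, d = 4, 3
n = m * d
sigma = rng.permutation(m)                        # a permutation of [m]
P = np.zeros((m, m)); P[np.arange(m), sigma] = 1  # permutation matrix P_sigma
Delta = np.kron(P, np.ones((d, d)))               # block-permutation adjacency

# each row and each column has exactly d non-null entries (d-regularity)
assert (Delta.sum(axis=0) == d).all() and (Delta.sum(axis=1) == d).all()

alpha = np.sqrt(2 * np.log(n))
A = rng.standard_normal((n, n))
M = Delta * A / (alpha * np.sqrt(d))              # Hadamard product, normalized
```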
We can now state the main result of the article:

Theorem 1. Let A_n be an n × n matrix with i.i.d. N(0, 1) entries and ∆_n given by Model (A) or (B). Assume that α_n → ∞ as n → ∞ and denote by α*_n = √(2 log n). Let x_n = (x_k)_{k∈[n]} be the solution of the equilibrium equation x_n = 1_n + M_n x_n.

1. If there exists ε > 0 such that eventually α_n ≤ (1 − ε)α*_n, then P( min_{k∈[n]} x_k > 0 ) → 0 as n → ∞.
2. If there exists ε > 0 such that eventually α_n ≥ (1 + ε)α*_n, then P( min_{k∈[n]} x_k > 0 ) → 1 as n → ∞.

The results of Theorem 1 are illustrated in Fig. 1.
Remarks
1. By taking d_n ≥ log(n), we guarantee that the spectral norm of matrix ∆_n • A_n / √d_n is of order O(1), see [17]. In particular, matrix I_n − ∆_n • A_n / (α_n √d_n) is invertible and the solution x_n can be represented as:

x_n = ( I_n − ∆_n • A_n / (α_n √d_n) )^{−1} 1_n.   (6)

2. An informal first-order expansion of the solution immediately explains this phase transition. If we expand the inverse matrix and neglect the remaining terms, we get

x_i ≈ 1 + z_i / α_n,   where z_i = (1/√d_n) Σ_j ∆_ij A_ij.

Notice that the z_i's remain i.i.d. N(0, 1). Going one step further in the approximation yields min_{i∈[n]} x_i ≈ 1 + (min_{i∈[n]} z_i) / α_n.
By standard extreme value results, we have min_{i∈[n]} z_i ∼ −√(2 log(n)), hence the phase transition; a numerical illustration of this threshold is sketched after these remarks.
3. The component-wise positivity of the solution has been studied in the full matrix case, i.e. ∆_n = 1_n 1_n^T and d_n = n, in [1], where the same phase transition phenomenon occurs. The proof of Theorem 1 can be handled as in [1] for Model (B), with non-trivial adaptations that will be specified.
In the case where d_n ≪ n, a normalization issue occurs. Roughly speaking, the Euclidean norm of the vector 1_n / √d_n is no longer of order O(1) but of order √(n/d_n), and one needs to handle the sparsity of matrix ∆_n more carefully.
In this regard, the block-permutation structure of Model (A) is a technical and simplifying assumption. The problem of the component-wise positivity of x n for a general adjacency matrix ∆ n of a d-regular graph with d ≥ log(n) remains open.
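As announced in the remarks above, here is a minimal Monte Carlo sketch of the phase transition of Theorem 1 for Model (A); the sizes and the number of repetitions are illustrative.

```python
# A minimal Monte Carlo sketch: for alpha above sqrt(2 log n), the solution
# x = (I - M)^{-1} 1 should be componentwise positive with high probability,
# and not below the threshold.
import numpy as np

def min_component(n, d, alpha, rng):
    m = n // d
    P = np.zeros((m, m)); P[np.arange(m), rng.permutation(m)] = 1
    Delta = np.kron(P, np.ones((d, d)))
    M = Delta * rng.standard_normal((n, n)) / (alpha * np.sqrt(d))
    x = np.linalg.solve(np.eye(n) - M, np.ones(n))
    return x.min()

rng = np.random.default_rng(3)
n, d = 1000, 10                      # note d = 10 > log(1000) ~ 6.9
astar = np.sqrt(2 * np.log(n))
for c in (0.8, 1.2):                 # below and above the threshold
    mins = [min_component(n, d, c * astar, rng) for _ in range(20)]
    frac = np.mean([v > 0 for v in mins])
    print(f"alpha = {c:.1f} * alpha*: feasible in {frac:.0%} of runs")
```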
Stability results
A classical property of (2) is the positivity of the orbits: if x^0_n ∈ (R*_+)^n, then x^t_n ∈ (R*_+)^n as well (t > 0). We first recall definitions related to stability from [19, Chapter 3]. An equilibrium x_n is stable if for any given neighborhood W of x_n, there exists a neighborhood V such that for any initial point x^0_n ∈ V, the orbit {x^t_n ; t ≥ 0 ; x^0_n ∈ V} stays in W. If, in addition, the equilibrium is stable and the orbit converges to x_n, the equilibrium is said to be asymptotically stable.
In the full matrix case (∆_n = 1_n 1_n^T, d_n = n), it has been proved in [1] that, in the regime where feasibility occurs, the system is asymptotically stable in the sense that the Jacobian matrix J of the LV system (2) evaluated at x_n,

J(x_n) = diag(x_n) (M_n − I_n),   (7)

has all its eigenvalues with negative real part. Finally, the equilibrium is globally stable when it is asymptotically stable and the neighborhood V can be taken as the whole state space (R*_+)^n.
We complement Theorem 1 and prove that feasibility and global stability occur simultaneously. Beware that in this theorem the solution, although unique, is no longer (component-wise) positive and may have zero components corresponding to vanishing species. Notice that the assumption on ∆_n covers Models (A) and (B) but is far less restrictive. We illustrate Theorem 2 in Fig. 2.

Fig. 2: Minimum, maximum and mean of the population dynamics (x^t_n, t > 0) solution of (2) for n = 5000 (log(n) ≈ 8.51), d = 10, where ∆_n follows Model (A). In the first figure, α_n > √(2 log(n)) and the minimum abundance remains positive. In the second one, α_n < √(2 log(n)); the minimum abundance vanishes and the equilibrium is not feasible.
We now specify Theorem 2 in the case of feasibility. Assume that ∆_n is given by Model (A) or (B), and denote by Σ_n the spectrum of the Jacobian matrix J(x_n) given by (7).
Assume that there exists ε > 0 such that eventually α_n ≥ (1 + ε)α*_n. Then:

1. The probability that the equilibrium x_n is feasible and globally stable converges to 1.
2. The spectrum Σ_n asymptotically coincides with −diag(x_n), in the sense of (8).

As a consequence of (8), for any x^0_n ∈ (R*_+)^n, the orbit x^t_n converges to the equilibrium x_n at an exponential convergence rate; see Fig. 3 and the numerical sketch following its caption.
(b) Histogram of the equilibrium abundances.
Fig. 3: Consider the population dynamics (x^t_n, t > 0) solution of (2), where M is given by (3) and ∆_n follows Model (A) with n = 15000 species, m = 1500 blocks, d = 10 > log(n) ≈ 9.62 and α_n = √(3 log(n)). On the left, we plot 10 species randomly chosen out of 15000, with starting abundances equal to 1/2. On the right, the histogram of the abundances is represented, and the normal density with mean 1 and variance 1/α²_n is fitted. Notice the substantial spread of the abundances despite the high value of n.
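A minimal sketch checking the stability statement numerically: it forms the Jacobian J(x_n) = diag(x_n)(M_n − I_n), as in the reconstructed (7), and inspects its spectrum for α_n above the feasibility threshold. Sizes are illustrative.

```python
# A minimal sketch: eigenvalues of J(x) = diag(x)(M - I) should all have
# negative real part, clustering near -x_i, when alpha exceeds the threshold.
import numpy as np

rng = np.random.default_rng(4)
n, d = 1000, 10
m = n // d
P = np.zeros((m, m)); P[np.arange(m), rng.permutation(m)] = 1
Delta = np.kron(P, np.ones((d, d)))
alpha = 1.5 * np.sqrt(2 * np.log(n))        # above the feasibility threshold
M = Delta * rng.standard_normal((n, n)) / (alpha * np.sqrt(d))

x = np.linalg.solve(np.eye(n) - M, np.ones(n))   # the equilibrium (6)
J = np.diag(x) @ (M - np.eye(n))                 # Jacobian at the equilibrium
eigs = np.linalg.eigvals(J)
print("max real part:", eigs.real.max(), "vs -min(x):", -x.min())
```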
Notations
If v is a vector, then ‖v‖ stands for its Euclidean norm; if A is a matrix, then ‖A‖ stands for its spectral norm and ‖A‖_F = ( Σ_ij |A_ij|² )^{1/2} for its Frobenius norm. Let ϕ be a function from some space X (usually R) to R; then ‖ϕ‖_∞ = sup_{x∈X} |ϕ(x)|. Convergence in probability is denoted by →_P. When no confusion can occur, we shall drop n and simply denote A, ∆, α, d, x, etc. instead of A_n, ∆_n, α_n, d_n, x_n, etc.
Organization of the paper
In Section 2, the spectral norm of a sparse matrix and the general strategy of proof are described. Proof of Theorem 1 is provided in Section 3 for Model (A), and in Section 4 for Model (B). Theorem 2 is proved in Section 5. In Section 6, we conclude and state an open question.
Acknowledgments
The authors thank Maxime Clénet, François Massol and Mylène Maïda for fruitful discussions and are grateful to Nick Cook for his insight on the singular values of a sparse random matrix (see Appendix A).
2 Spectral norm of the interaction matrix and strategy of proof
In the following proposition, whose proof is based on [17], we provide an estimate of the spectral norm of ∆ • A / √d. The fact that A's entries are N(0, 1) and that d_n ≥ log(n) is crucial.
Proposition 4. Assume that A is an n × n matrix with i.i.d. N(0, 1) entries, that ∆ is the n × n adjacency matrix of a d-regular graph, and that d ≥ log(n). Then there exists a constant κ > 0 independent of n (one can take for instance κ = 22) such that P( ‖∆ • A‖/√d ≥ κ ) → 0 as n → ∞. In particular, for any fixed δ ∈ (0, 1), there exists a rank n_1 such that for all n ≥ n_1, P( ‖∆ • A‖/√d ≤ κ ) ≥ 1 − δ. Since α → ∞, the last part of the proposition, namely that ‖∆ • A‖/(α√d) → 0 in probability, immediately follows.
Strategy of proof
Based on the previous control of the spectral norm in probability, we reduce the problem of feasibility to the control of the extreme values of the higher-order terms of the resolvent, considered as a Neumann sum, see Lemma 5. This preliminary step is similar to [1, Section 2.1].
Going back to Eq. (6), we can write x_n = (I_n − M_n)^{−1} 1_n, which by Proposition 4 exists with probability tending to one; we thus obtain a representation of the solution which holds with growing probability. Denote by e_k the n × 1 k-th canonical vector, so that x_k = e_k^T (I_n − M_n)^{−1} 1_n. Unfolding the resolvent as a Neumann sum, we obtain

x_k = 1 + Z_k / α_n + R_k,   (9)

where Z_k = e_k^T (∆ • A) 1_n / √d and R_k gathers the higher-order terms of the expansion. Notice that the Z_k's are i.i.d. N(0, 1) random variables, and denote by M̄ = min_{k∈[n]} Z_k. Eq. (9) immediately yields a control of min_{k∈[n]} x_k in terms of M̄ and max_{k∈[n]} |R_k|, denoted (10). Let β*_n be such that M̄/β*_n → −1 in probability. Taking into account this convergence, we can rewrite (10) as

x_k = 1 + (M̄ + o_P(1)) / α_n + R_k,

where we used (α*_n)^{−1}(M̄ + β*_n) = o_P(1). Theorem 1 will then follow from the following lemma.
Proof of Theorem 1 for Model (A)
We assume that ∆ n follows Model (A).
In order to prove Lemma 5, we first take advantage of the fact that ‖∆ • A / √d‖ is typically bounded by κ (see Proposition 4) and replace R_k by a truncated version R̄_k (step 1). We then prove that A → R̄_k(A) is Lipschitz (step 2). The quantity R̄_k being Lipschitz, its centered version is sub-Gaussian since the matrix entries are i.i.d. Gaussian. We finally prove that R̄_k(A) is uniformly integrable (step 3). The conclusion easily follows. Although the general strategy is similar to the one developed in [1], the proofs are substantially different. In particular, the proofs of steps 2 and 3 heavily rely on the block-permutation structure of the matrices.
Step 1: Truncation
Toward proving Lemma 5, sub-Gaussianity is an important property, which follows from Lipschitz properties by standard concentration-of-measure arguments. Unfortunately, A ↦ R_k(A) fails to be Lipschitz (simply notice that R_k(A) has quadratic and higher-order terms). In order to circumvent this issue, we provide a truncated version of R_k.
Let κ > 0 be as in Prop. 4 (one can take κ = 22), η ∈ (0,1) and ϕ : ℝ_+ → [0,1] a smooth function equal to 1 on [0, κ+1−η], strictly decreasing from 1 to 0 for x ∈ (κ+1−η, κ+1), and equal to 0 beyond κ+1. According to Prop. 4, ϕ_d(A) := ϕ(‖∆ • A‖/√d) is equal to one with high probability. We introduce the truncated value R̄_k(A) := ϕ_d(A) R_k(A). We have R̄_k(A) = R_k(A) with probability tending to one. It is therefore sufficient to prove the statement of Lemma 5 with R̄_k in place of R_k to establish the first part of Lemma 5. The property of the minimum can be proved similarly.
Step 2: Lipschitz property for R̄_k(A)
For ℓ ≥ 2, we introduce the summand terms ρ̄_{k,ℓ}(A), so that R̄_k(A) = ∑_{ℓ≥2} ρ̄_{k,ℓ}(A). The following lemma is the main result of this section.
Lemma 6 Let κ > 0 be as in Proposition 4, δ ∈ (0,1) and n_0 be such that the conclusion of Proposition 4 holds for all n ≥ n_0. For ℓ ≥ 2 and n ≥ n_0, the function ρ̄_{k,ℓ} is K_ℓ-Lipschitz with respect to the Frobenius norm, where K_ℓ = K_ℓ(κ, n_0, δ) > 0 is a constant independent of k, d and n ≥ n_0. Moreover, K := ∑_{ℓ≥2} K_ℓ < ∞. In particular, the function R̄_k is K-Lipschitz:
|R̄_k(A) − R̄_k(B)| ≤ K ‖A − B‖_F.
Given an n × n matrix C, we define its hermitization matrix H(C) as the 2n × 2n matrix with off-diagonal blocks C and C^*. A well-known property of H(C) is its symmetric spectrum and the fact that the singular values of C are the non-negative eigenvalues of H(C). In particular, ‖C‖ corresponds to the largest eigenvalue of H(C).
In order to prove Lemma 6, we first consider the case where H(∆ • A) has a simple spectrum, a sufficient condition for the differentiability of ‖∆ • A‖; we then prove that the Euclidean norm of the gradient of ρ̄_{k,ℓ}(A) is bounded, ‖∇ρ̄_{k,ℓ}(A)‖ ≤ K_ℓ, and finally proceed by approximation to get the general Lipschitz property.
Proof We first consider the case where H(∆ • A) has a simple spectrum. In this case, ‖∆ • A‖ is equal to the largest eigenvalue of H(∆ • A), which has multiplicity 1 and is thus differentiable. Denote by ∂_{ij} = ∂/∂A_{ij}. Notice that if ∆_{ij} = 0, then for any smooth function f : ℝ^{n×n} → ℝ, ∂_{ij} f(∆ • A) = 0. If needed, we will take advantage of this property.
We have a decomposition ∂_{ij} ρ̄_{k,ℓ}(A) = S_{1,ij} + S_{2,ij}, where S_{1,ij} comes from differentiating the cut-off ϕ_d and S_{2,ij} from differentiating the power (∆ • A)^ℓ. In particular, it suffices to bound ∑_{ij} |S_{1,ij}|² and ∑_{ij} |S_{2,ij}|². We first evaluate ∑_{ij} |S_{1,ij}|².
Recall that ‖∆ • A‖, being the maximum eigenvalue of H(∆ • A), which by assumption is simple, is differentiable by [21, Theorem 6.3.12]. Let u and v be respectively the left and right normalized singular vectors associated to the largest singular value of ∆ • A, and let w collect them; notice that ‖w‖² = 2. We now focus on the remaining factor; notice that ‖1_n‖/√d = √(n/d). Since matrix ∆ • A follows Model (A), one can notice that (∆ • A)^ℓ remains a block matrix with only d nonzero terms per row (and per column as well). This property is fundamental for the remaining estimates and fully relies on the Model (A) assumption.
Denote by 1_{J_{k,ℓ}} the n × 1 vector with zero coordinates except those belonging to J_{k,ℓ}, which are set to 1. Using the fact that ϕ and its derivative are bounded, we obtain the desired estimate over ∑_{ij} |S_{1,ij}|². We now evaluate ∑_{i,j=1}^n |S_{2,ij}|²; recall the definitions of I_i and J_{k,ℓ} introduced in (20) and (21). We concentrate on the leading term. Let I_{I_i} = diag(1_{I_i}(k); k ∈ [n]), where 1_{I_i} is the n × 1 vector whose k-th component is 1 if k belongs to I_i and zero else; then (∆ • A) I_{I_i} = P_τ B for some τ ∈ 𝔖_m and some n × n matrix B. In particular, taking into account the matching between the indices of I_{I_i} and ∆ • A, Eq. (24) follows. Multiplying by |ϕ_d(A)|² finally yields the appropriate estimates. Combining (22) and (25), we obtain ‖∇ρ̄_{k,ℓ}(A)‖ ≤ K_ℓ, where K_ℓ does not depend upon k, n, d and is summable. So far, we have established a local estimate over ∇ρ̄_{k,ℓ}(A) for any matrix A such that H(∆ • A) has a simple spectrum. We first establish the Lipschitz estimate (17) for two such matrices A and B. Consider the segment A_t = (1−t)A + tB, t ∈ [0,1]: H(∆ • A_t) has a simple spectrum for all t ∉ {t_l, l ∈ [L]}, a finite set. We can now proceed: by iterating the local estimate over the intervals (t_{l−1}, t_l), we get the Lipschitz bound along the whole segment.
Hence the Lipschitz property along the segment [A, B].
To go beyond, we proceed by density and prove that, for a given matrix ∆ as in Model (A), the set of matrices ∆ • A such that H(∆ • A) has a simple spectrum is dense in the set of matrices {∆ • A ; A ∈ ℝ^{n×n}}.
Let P_σ be the permutation matrix used to define ∆ in (4) and I_d the identity matrix of size d. We define two associated n × n matrices Π and D_A. Notice that Π is an n × n permutation matrix and that D_A is a block diagonal matrix with d × d blocks on the diagonal. Since Π Π^T = Π^T Π = I_n, we also have that ∆ • A and D_A Π coincide. In the framework of Example 1, matrices Π and D_A can be written explicitly. An important feature of D_A is that ∆ • A and D_A have the same singular values. Consider a simultaneous ε-perturbation of the Λ^{(µ)}'s into Λ_ε^{(µ)} so that all the Λ_ε^{(µ)}'s have distinct diagonal elements, ε-close to the Λ^{(µ)}'s. Denote by A_ε^{(µ)} the corresponding perturbed blocks and let D_A^ε be the block diagonal matrix with blocks (A_ε^{(µ)})_{µ∈[m]}. Then H(D_A^ε) is arbitrarily close to H(D_A) and has a simple spectrum. Note that D_A^ε Π is ε-close to ∆ • A, is such that H(D_A^ε Π) has a simple spectrum, and has the same pattern as ∆ • A. To emphasize this property, we introduce the n × n matrix A^ε defined by ∆ • A^ε = D_A^ε Π. We can now conclude. Let ∆ • A and ∆ • B be given, and let D_A^ε Π = ∆ • A^ε and D_B^ε Π = ∆ • B^ε be constructed as previously; notice that C ↦ ρ̄_{k,ℓ}(C) is continuous. Letting ε → 0 in the Lipschitz estimate for the perturbed matrices concludes the proof of the Lipschitz property.
Step 3: uniform estimate for E R̄_k(A)
As a consequence of the Lipschitz property of R̄_k, the centered version of R̄_k(A) is sub-Gaussian when A is an n × n matrix with i.i.d. N(0,1) entries. Denote by 1^{(µ)} the n × 1 vector with ones at the indices (µ_i)_{i∈[d]} and zeros elsewhere.
We start by expanding E R̄_k(A). The strategy of proof closely follows the one in [1], with one specific issue to handle: the uniform bound on E R̄_k. An important property exploited in [1] to establish a uniform bound over E R̄_k was the exchangeability of the R̄_k's (or block exchangeability in the case of Model (A)). There is not enough structure in Model (B) to guarantee this exchangeability (which might not hold).
We carefully address this issue hereafter.
A uniform bound over E R̄_k for Model (B)
Proposition 9 Under the assumptions of Theorem 1, E R̄_k(A) is bounded uniformly in k ∈ [n]. The proof of Proposition 9 relies on two important facts.
• The fact that almost surely H(∆ • A) has a simple spectrum, hence the Lipschitz function ‖∆ • A‖ is almost surely differentiable, with an explicit formula for the partial derivatives; see (19). Details are provided in Appendix A.
• The Gaussian integration by parts (i.b.p.) formula: if Z ∼ N(0,1) then E[Z f(Z)] = E[f′(Z)]. Interestingly, this formula holds for f Lipschitz. In this case, f is absolutely continuous, hence almost surely differentiable (see for instance [24, Chap. 7, Thm. 4]), with linear growth at infinity.
Proof In order to get an asymptotic bound over E R̄_k(A), we expand its expression. At this point, we use the Gaussian i.b.p. formula applied to A ↦ ϕ_d(A), which is Lipschitz and a.s. differentiable with explicit derivative (see (19)).
We first handle the term T_1 by the Cauchy-Schwarz inequality, then the term T_2. We finally handle the term T_3. Notice that ∂_{ki} Q_{ij} = (1/(α√d)) Q_{ik} Q_{ij}, and denote by ω := (Q_{ik} 1_{I_k}(i))_{i∈[n]}. Notice that ‖ω‖² ≤ e_k^* Q^* Q e_k, hence ‖ω‖ ≤ ‖Q‖. Combining these asymptotic estimates finally yields the announced bound. Notice that even if the bound obtained in Proposition 9 is weaker than the one obtained in Proposition 8 or in [1, Prop. 2.4], it is still sufficient to establish feasibility under Model (B).
5 Proofs of Theorem 2 and Proposition 3
5.1 Proof of Theorem 2
The proof is a combination of Takeuchi and Adachi's theorem [19, Theorem 3.2.1] and Proposition 4. We first recall the definition of Volterra-Liapunov stability; see for instance [19, Section 3.2]: let B be an n × n real matrix. B is Volterra-Liapunov stable if there exists an n × n positive definite diagonal matrix D such that DB + B^T D is negative definite.
Going back to Eq. (2), according to Takeuchi and Adachi's theorem [19, Th. 3.2.1], this LV system has a unique nonnegative and globally stable equilibrium if M n − I n is Volterra-Liapunov stable.
We now rely on the asymptotic spectral properties of M_n to study the Volterra-Liapunov stability of M_n − I_n. We drop the subscript n in the sequel. Take D = I; then (M − I) + (M − I)^T = M + M^T − 2I is a Hermitian matrix. This matrix is negative definite if all its eigenvalues are negative. Given that M + M^T is also Hermitian, we just have to check that the spectral radius ρ(M + M^T) < 2. According to Proposition 4, ρ(M + M^T) ≤ 2‖M‖ ≤ 2κ/α with probability tending to one, and 2κ/α → 0 since α → ∞. Thus, the probability that M − I is Volterra-Liapunov stable converges to 1 as n → ∞. By [19, Th. 3.2.1], this implies that the probability that the LV system (2) has a unique nonnegative and globally stable equilibrium converges to 1 as n → ∞.
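The criterion used in this proof, ρ(M + M^T) < 2 with D = I, is straightforward to verify numerically; a short illustrative sketch:

```python
import numpy as np

def is_vl_stable_with_identity(M):
    """Sufficient criterion from the proof with D = I:
    (M - I) + (M - I)^T negative definite  <=>  rho(M + M^T) < 2."""
    eigs = np.linalg.eigvalsh(M + M.T)      # symmetric, so eigvalsh applies
    return np.max(np.abs(eigs)) < 2.0

n, d = 500, 10
rng = np.random.default_rng(3)
Delta = sum(np.eye(n, k=k) + np.eye(n, k=k - n) for k in range(d))
alpha = 2 * np.sqrt(2 * np.log(n))
M = Delta * rng.standard_normal((n, n)) / (alpha * np.sqrt(d))
print(is_vl_stable_with_identity(M))        # True with high probability
```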
Proof of Proposition 3
We first prove the first part of the proposition. By Theorem 2, there exists a unique nonnegative globally stable equilibrium to (2). If there exists ε > 0 such that eventually α_n ≥ (1 + ε)α*_n, where α*_n = √(2 log n), then this equilibrium x_n is positive by Theorem 1, with overwhelming probability as n → ∞.
The rest of the proof closely follows the proof of [1, Corollary 1.4] and is omitted.
Conclusion
In this article we study the feasibility and stability of sparse large ecosystems modelled by a large Lotka-Volterra system of coupled differential equations, cf. (2). Our work is motivated by recent research [15] which suggests, in the light of many ecological and biological datasets, that living networks are often sparse. It also illustrates the interest of studying feasibility in relation with the normalization of the interaction matrix's entries beyond non-sparse full i.i.d. models, and opens perspectives to study models with more structure, such as elliptic interactions or patch models.
In the model under investigation, the interaction matrix M_n is a sparse random matrix, where the sparsity is encoded by a patterned matrix ∆_n based on an underlying d_n-regular graph, and the randomness by i.i.d. random variables (matrix A_n) for the non-null entries. The single parameter d_n of the regular graph provides an easy one-dimensional parametrization of the connectance of the food web.
Our main conclusion is that beyond the standard normalization 1/√(d_n) of the interaction matrix ∆ • A, which guarantees a bounded norm, an extra factor 1/α_n with α_n → ∞ is needed to reach feasibility. The interaction matrix finally writes M_n = (∆_n • A_n)/(α_n √(d_n)), and a sharp phase transition occurs at α*_n = √(2 log(n)). Interestingly, the same phase transition as in the non-sparse case occurs. Fig. 4: Let n = 15000, d = 10 (notice that d ≥ log(n) ≈ 9.61). Matrix ∆_n is drawn at random once and for all among the adjacency matrices of d-regular graphs (and a priori does not follow Model (A)). Each point of the curve represents the proportion of feasible solutions x_n of Eq. (6) over 1500 realizations of random matrices A_n for different values of κ, with α_n = κ √(2 log(n)). The phase transition resembles that of Figure 1.
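A scaled-down version of an experiment of this kind can be run as follows; fewer species and realizations are used, and the d-regular mask is again an illustrative circulant stand-in rather than the authors' random d-regular graph:

```python
import numpy as np

def feasibility_proportion(n, d, kappa, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    Delta = sum(np.eye(n, k=k) + np.eye(n, k=k - n) for k in range(d))
    alpha = kappa * np.sqrt(2 * np.log(n))
    count = 0
    for _ in range(trials):
        M = Delta * rng.standard_normal((n, n)) / (alpha * np.sqrt(d))
        x = np.linalg.solve(np.eye(n) - M, np.ones(n))  # x = 1 + M x
        count += x.min() > 0                            # feasible?
    return count / trials

for kappa in (0.6, 0.8, 1.0, 1.2, 1.4):
    print(kappa, feasibility_proportion(n=400, d=8, kappa=kappa))
```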
In the sparse setting log(n) ≤ d_n ≪ n, we rely on an extra block-structure assumption over matrix ∆_n, namely Model (A), to establish the feasibility and the phase transition. Our method of proof crucially relies on this technical assumption, which somehow concentrates the non-null entries of the sparse interaction matrix (and its powers) into localized blocks.
However, simulations (cf. Fig. 4) suggest that this block structure assumption is not necessary and could be relaxed. Hence the following:
Open question 10 Let ∆_n be the adjacency matrix of a deterministic d_n-regular graph, with d_n ≥ log(n), and A_n a random matrix with i.i.d. N(0,1) entries. Consider the equation
x_n = 1_n + (∆_n • A_n)/(α_n √(d_n)) x_n, α_n → ∞.
Is it true that the same phase transition as in Theorem 1 holds?
Appendix A: With probability one, the singular values of a sparse random matrix are distinct
We establish hereafter that with probability one the singular values of the matrix ∆ • A are distinct, a key argument in the proof of Proposition 9 to compute the partial derivatives of A ↦ ‖∆ • A‖. The lemma below and its proof are inspired by Nick Cook [25], whom we thank for his help.
Lemma 11 (Cook [25]) Let n ≥ 1, A_n an n × n matrix with i.i.d. N(0,1) entries and ∆_n the adjacency matrix of a d-regular graph. Then, with probability one, all the singular values of ∆_n • A_n are distinct.
Remark
The original statement of Cook is slightly more general: the entries of matrix A_n only need a distribution with positive density, and the deterministic matrix ∆_n only needs a generalized diagonal, i.e. (∆_{iσ(i)} ; i ∈ [n]) for some σ ∈ 𝔖_n, with n − 1 non-null entries.
Proof Let E_∆ be the set of matrices with entries supported on the nonzero entries of ∆:
E_∆ = {∆ • X ; X = (X_{ij}) ∈ ℝ^{n×n}}.
Thus, E_∆ is the support of the law of ∆ • A. Besides, E_∆ is a variety as a subspace of ℝ^{n×n}. Let R denote the set of matrices with a repeated singular value. It is the set of n × n matrices X for which the characteristic polynomial p of X^T X has zero discriminant ρ; see for instance [26, Section 3.3.2]:
R = {X ∈ ℝ^{n×n} ; ρ(p(X^T X)) = 0} = {X ∈ ℝ^{n×n} ; P(X) = 0},
where P : ℝ^{n×n} → ℝ, defined by P(X) = ρ(p(X^T X)), is a polynomial in the entries of X. It follows that R is an algebraic variety in ℝ^{n×n}.
Hence, E ∆ ∩ R is either equal to E ∆ , or a subvariety of E ∆ of zero Lebesgue measure (under the product measure on E ∆ ).
For the claim, it suffices to show that E_∆ ⊄ R, hence to exhibit Y ∈ E_∆ with distinct singular values. By Birkhoff's theorem [21, Theorem 8.7.2], the doubly stochastic matrix ∆/d writes ∆/d = ∑_σ a_σ P_σ, where P_σ is the permutation matrix associated to σ ∈ 𝔖_n. There exists in particular σ* with a_{σ*} > 0 and P_{σ*} ∈ E_∆. Let P_{σ*} = (P_{ij})_{i,j∈[n]}; then the matrix Y = (i P_{ij})_{i,j∈[n]} has distinct singular values (1, …, n). This completes the proof. | 2021-11-23T02:15:49.246Z | 2021-11-22T00:00:00.000 | {
"year": 2021,
"sha1": "1c6f18f96218318d8b8680c97b07dc2034a38e29",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1c6f18f96218318d8b8680c97b07dc2034a38e29",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
12349789 | pes2o/s2orc | v3-fos-license | Learning to Interpret Natural Language Instructions
We address the problem of training an artificial agent to follow verbal commands using a set of instructions paired with demonstration traces of appropriate behavior. From this data, a mapping from instructions to tasks is learned, enabling the agent to carry out new instructions in novel environments. Our system consists of three components: semantic parsing (SP), inverse reinforcement learning (IRL), and task abstraction (TA). SP parses sentences into logical form representations, but when learning begins, the domain/task specific meanings of these representations are unknown. IRL takes demonstration traces and determines the likely reward functions that gave rise to these traces, defined over a set of provided features. TA combines results from SP and IRL over a set of training instances to create abstract goal definitions of tasks. TA also provides SP domain specific meanings for its logical forms and provides IRL the set of task-relevant features.
In this paper, we present an approach to interpreting language instructions that describe complex multipart tasks by learning from pairs of instructions and behavioral traces containing a sequence of primitive actions that result in these instructions being properly followed. We do not assume a one-to-one mapping between instructions and primitive actions. Our approach uses three main subcomponents: (1) recognizing intentions from observed behavior using variations of Inverse Reinforcement Learning (IRL) methods; (2) translating instructions to task specifications using Semantic Parsing (SP) techniques; and (3) creating generalized task specifications to match user intentions using probabilistic Task Abstraction (TA) methods. We describe our system architecture and a learning scenario. We present preliminary results for a simplified version of our system that uses a unigram language model, minimal abstraction, and simple inverse reinforcement learning.
Early work on grounded language learning used features based on n-grams to represent the natural language input (Branavan et al., 2009; Vogel and Jurafsky, 2010). More recent methods have relied on a richer representation of linguistic data, such as syntactic dependency trees (Branavan et al., 2011; Goldwasser and Roth, 2011) and semantic templates (Tellex et al., 2011) to address the complexity of the natural language input. Our approach uses a flexible framework that allows us to incorporate various degrees of linguistic knowledge available at different stages in the learning process (e.g., from dependency relations to a full-fledged semantic model of the domain learned during training).
System Architecture
We represent tasks using the Object-oriented Markov Decision Process (OO-MDP) formalism (Diuk et al., 2008), an extension of Markov Decision Processes (MDPs) to explicitly capture relationships between objects. Specifically, OO-MDPs add a set of classes C, each with a set of attributes T C . Each OO-MDP state is defined by an unordered set of instantiated objects. In addition to these object definitions, an OO-MDP also defines a set of propositional functions that operate on objects. For instance, we might have a propositional function toyIn(toy, room) that operates on an object belonging to class "toy" and an object belonging to class "room," returning true if the specified "toy" object is in the specific "room" object. We extend OO-MDPs to include a set of propositional function classes (F) associating propositional functions that describe similar properties. In the context of defining a task corresponding to a particular goal, an OO-MDP defines a subset of states β ⊂ S called termination states that end an action sequence and that need to be favored by the task's reward function.
Example Domain. To illustrate our approach, we present a simple domain called Cleanup World, a 2D grid world defined by various rooms that are connected by open doorways and can contain various objects (toys) that the agent can push around to different positions in the world. The Cleanup World domain can be represented as an OO-MDP with four object classes: agent, room, doorway, and toy, and a set of propositional functions that specify whether a toy is a specific shape (such as isStar(toy)), the color of a room (such as isGreen(room)), whether a toy is in a specific room (toyIn(toy, room)), and whether an agent is in a specific room (agentIn(room)). These functions belong to shape, color, toy position or agent position classes.
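To make the Cleanup World description concrete, here is a minimal sketch of how its objects and propositional functions might be encoded. The function names follow the text; the data layout (grid cells, dataclasses) is purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Toy:
    name: str
    shape: str
    pos: tuple

@dataclass
class Room:
    name: str
    color: str
    cells: frozenset   # grid cells belonging to the room

def isStar(toy: Toy) -> bool:
    return toy.shape == "star"

def isGreen(room: Room) -> bool:
    return room.color == "green"

def toyIn(toy: Toy, room: Room) -> bool:
    return toy.pos in room.cells

# A tiny state: one star toy inside a green 2x2 room.
room1 = Room("room1", "green", frozenset({(0, 0), (0, 1), (1, 0), (1, 1)}))
toy1 = Toy("toy1", "star", (0, 1))
print(isStar(toy1), isGreen(room1), toyIn(toy1, room1))   # True True True
```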
Interaction among IRL, SP and TA
The training data for the overall system is a set of pairs of verbal instructions and behavior. For example, one of these pairs could be the instruction Push the star to the green room with a demonstration of the task being accomplished in a specific environment containing various toys and rooms of different colors. We assume the availability of a set of features for each state represented using the OO-MDP propositional functions described previously. These features play an important role in defining the tasks to be learned. For example, a robot being taught to move furniture around would have information about whether or not it is currently carrying a piece of furniture, what piece of furniture it needs to be moving, which room it is currently in, which room contains each piece of furniture, etc. We present briefly the three components of our system (IRL, SP and TA) and how they interact with each other during learning.
Inverse Reinforcement Learning. Inverse Reinforcement Learning (Abbeel and Ng, 2004) addresses the task of learning a reward function from demonstrations of expert behavior and information about the state-transition function. Recently, more data-efficient IRL methods have been proposed, including the Maximum Likelihood Inverse Reinforcement Learning (Babeş-Vroman et al., 2011) or MLIRL approach, which our system builds on. Given even a small number of trajectories, MLIRL finds a weighting of the state features that (locally) maximizes the probability of these trajectories. In our system, these state features consist of one of the sets of propositional functions provided by the TA component. For a given task and a set of sets of state features, MLIRL evaluates the feature sets and returns to the TA component its assessment of the probabilities of the various sets.
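A hedged sketch of the MLIRL idea follows: gradient ascent on the log-likelihood of demonstrated state-action pairs under a Boltzmann policy whose values come from a linear reward over the candidate features. The myopic one-step value used here is a simplification of the full method, and all names are placeholders:

```python
import numpy as np

def mlirl_step(theta, demos, features, beta=2.0, lr=0.1):
    """One gradient-ascent step. demos: list of (state, action) pairs;
    features[s][a]: feature vector of the outcome of action a in state s."""
    grad = np.zeros_like(theta)
    for s, a in demos:
        phis = np.array(features[s])           # (num_actions, num_features)
        q = phis @ theta                       # linear reward as value (myopic)
        p = np.exp(beta * q - beta * q.max())
        p /= p.sum()                           # Boltzmann action probabilities
        grad += beta * (phis[a] - p @ phis)    # d/dtheta log p(a|s)
    return theta + lr * grad

# Two actions, two features; the expert always picks action 0.
features = {0: [np.array([1.0, 0.0]), np.array([0.0, 1.0])]}
theta = np.zeros(2)
for _ in range(200):
    theta = mlirl_step(theta, [(0, 0)], features)
print(theta)   # weight on feature 0 grows: action 0 is explained as rewarding
```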
Semantic Parsing. To address the problem of mapping instructions to semantic parses, we use a constraint-based grammar formalism, Lexicalized Well-Founded Grammar (LWFG), which has been shown to balance expressiveness with practical learnability results (Muresan and Rambow, 2007; Muresan, 2011). In LWFG, each string is associated with a syntactic-semantic representation, and the grammar rules have two types of constraints: one for semantic composition (Φ_c) and one for semantic interpretation (Φ_i). The semantic interpretation constraints, Φ_i, provide access to a semantic model (domain knowledge) during parsing. In the absence of a semantic model, however, the LWFG learnability result still holds. This fact is important if our agent is assumed to start with no knowledge of the task and domain. LWFG uses an ontology-based semantic representation, which is a logical form represented as a conjunction of atomic predicates. For example, the representation of the phrase green room is ⟨X1.is=green, X.P1=X1, X.isa=room⟩. The semantic representation specifies two concepts, green and room, connected through a property that can be uninstantiated in the absence of a semantic model, or instantiated via the Φ_i constraints to the property name (e.g., color) if such a model is present.
During the learning phase, the SP component, using an LWFG grammar that is learned offline, provides to TA the logical forms (i.e., the semantic parses, or the unlabeled dependency parses if no semantic model is given) for each verbal instruction. For example, for the instruction Move the chair into the green room, the semantic parser knows initially that move is a verb, chair and room are nouns, and green is an adjective. It also has grammar rules of the form S → Verb NP PP: Φ_c1, Φ_i1, but it has no knowledge of what these words mean (that is, to which concepts they map in the domain model). For this instruction, the LWFG parser returns the logical form: ⟨(X1.isa=move, X1.Arg1=X2)⟩_move, ⟨(X2.det=the)⟩_the, ⟨(X2.isa=chair)⟩_chair, ⟨(X1.P1=X3, P2.isa=into)⟩_into, ⟨(X3.det=the)⟩_the, ⟨(X4.isa=green, X3.P2=X2)⟩_green, ⟨(X3.isa=room)⟩_room.
The subscripts for each atomic predicate indicate the word to which that predicate corresponds. This logical form corresponds to the simplified logical form move(chair1,room1), P1(room1,green), where predicate P1 is uninstantiated. A key advantage of this framework is that the LWFG parser has access to the domain (semantic) model via Φ i constraints. As a result, when TA provides feedback about domain-specific meanings (i.e., groundings), the parser can incorporate those mappings via the Φ i constraints (e.g., move might map to the predicate blockToRoom with a certain probability).
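Such logical forms can be represented as plain predicate lists, with an uninstantiated property filled in once TA returns a grounding; a small illustrative sketch (the dictionary-based encoding is an assumption, not the paper's implementation):

```python
# Simplified logical form for "Move the chair into the green room":
# predicates are (name, args) pairs; "P1" is still uninstantiated.
logical_form = [("move", ("chair1", "room1")), ("P1", ("room1", "green"))]

# Feedback from TA: in this domain the unnamed property is "color".
grounding = {"P1": "color"}

grounded = [(grounding.get(name, name), args) for name, args in logical_form]
print(grounded)  # [('move', ('chair1', 'room1')), ('color', ('room1', 'green'))]
```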
Task Abstraction. The termination conditions for an OO-MDP task can be defined in terms of the propositional functions. For example, the Cleanup World domain might include a task that requires the agent to put a specific toy (t1) in a specific room (r1). In this case, the termination states would be defined by the states that satisfy toyIn(t1, r1), and the reward function would be defined as R_a(s, s′) = {1 : toyIn(t1, r1) holds in s′; −1 : otherwise}. However, such a task definition is overly specific and cannot be evaluated in a new environment that contains different objects. To remove this limitation, we define abstract task descriptions using parametric lifted reward and termination functions. A parametric lifted reward function is a first-order logic expression in which the propositional functions defining the reward can be selected as parameters. This representation allows much more general tasks to be defined; these tasks can be evaluated in any environment that contains the necessary object classes. For instance, the reward function for an abstract task that encourages an agent to take a toy of a certain shape to a room of a certain color (resulting in a reward of 1) would be represented as R_a(s, s′) = {1 : ∃t∈toy ∃r∈room P1(t) ∧ P2(r) ∧ toyIn(t, r); −1 : otherwise}, where P1 is a propositional function that operates on toy objects and P2 is a propositional function that operates on room objects. An analogous definition can be made for termination conditions. Given the logical forms provided by SP, TA finds candidate tasks that might match each logical form, along with a set of possible groundings of those tasks. A grounding of an abstract task is the set of propositional functions to be applied to the specific objects in a given training instance. TA then passes these grounded propositional functions as the features to use in IRL. (If there are no candidate tasks, then it will pass all grounded propositional functions of the OO-MDP to IRL.) When IRL returns a reward function for these possible groundings and their likelihoods of representing the true reward function, TA determines whether any abstract tasks it has defined might match. If not, TA will either create a new abstract task that is consistent with the received reward functions, or it will modify one of its existing definitions if doing so does not require significant changes. With IRL indicating the intended goal of a trace and with the abstract task indicating relevant parameters, TA can then inform SP of the task/domain specific meanings for the logical forms. A Learning from Scratch Scenario. Our system is trained using a set of sentence-trajectory pairs ((S1, T1), ..., (SN, TN)). Initially, the system does not know what any of the words mean and there are no pre-existing abstract tasks. Let's assume that S1 is Push the star into the green room. This sentence is first processed by the SP component, yielding the following logical forms: L1 is push(star1, room1), amod(room1, green) and L2 is push(star1), amod(room1, green), into(star1, room1).
These logical forms and their likelihoods are passed to the TA component, and TA induces incomplete abstract tasks, which define only the number and kinds of objects that are relevant to the corresponding reward function. TA can send to IRL a set of features involving these objects, together with T1, the demonstration attached to S1. This set of features might include: agentTouchToy(t1), toyIn(t1, r1), toyIn(t1, r2), agentIn(r1). IRL sends back a weighting of the features, and TA can select the subset of features that have the highest weights (e.g., (1.91, toyIn(t1, r1)), (1.12, agentTouchToy(t1)), (0.80, agentIn(r1))). Using information from SP and IRL, TA can now create a new abstract task, perhaps called blockToRoom, adjust the probabilities of the logical forms based on the relevant features obtained from IRL, and send these probabilities back to SP, enabling it to adjust its semantic model.
The entire system proceeds iteratively. While the full system has been designed, not all of its features are implemented yet, so we are not able to report experimental results for it. In the next section, we present a simplified version of our system and show preliminary results.
A Simplified Model and Experiments
In this section, we present a simplified version of our system with a unigram language model, inverse reinforcement learning and minimal abstraction. We call this version Model 0. The input to Model 0 is a set of verbal instructions paired with demonstrations of appropriate behavior. It uses an EM-style algorithm (Dempster et al., 1977) to estimate the probability distribution of words conditioned on reward functions (the parameters). With this information, when the system receives a new command, it can behave in a way that maximizes its reward given the posterior probabilities of the possible reward functions given the words.
Algorithm 1 shows our EM-style Model 0. For all possible reward-demonstration pairs, the E-step of EM estimates z_ji = Pr(R_j | (S_i, T_i)), the probability that reward function R_j produced sentence-trajectory pair (S_i, T_i). Under the unigram model, this estimate is proportional to Pr(T_i | R_j) ∏_{w_k ∈ S_i} Pr(w_k | R_j), normalized over the reward functions, where S_i is the i-th sentence, T_i is the trajectory demonstrated for verbal command S_i, and w_k is an element in the set of all possible words (the vocabulary). If the reward functions R_j are known ahead of time, Pr(T_i | R_j) can be obtained directly by solving the MDP and estimating the probability of trajectory T_i under a Boltzmann policy with respect to R_j. If the R_j's are not known, EM can estimate them by running IRL during the M-step (Babeş-Vroman et al., 2011).
The M-step in Algorithm 1 uses the current estimates of z_ji to further refine the probabilities x_kj = Pr(w_k | R_j), by accumulating smoothed, normalized weighted word frequencies; here ε is a smoothing parameter, X is a normalizing factor and N(S_i) is the number of words in sentence S_i. Algorithm 1 (EM-style Model 0) takes as input the demonstrations {(S_1, T_1), ..., (S_N, T_N)}, the number of reward functions J, and the size of the vocabulary K. To illustrate our Model 0 performance, we selected as training data six sentences for two tasks (three sentences for each task) from a dataset we have collected using Amazon Mechanical Turk for the Cleanup Domain. We show the training data in Figure 1. We obtained the reward function for each task using MLIRL, computed the Pr(T_i | R_j), then ran Algorithm 1 and obtained the parameters Pr(w_k | R_j). After this training process, we presented the agent with a new task. She is given the instruction S_N: Go to green room, and a starting state, somewhere in the same grid. Using the parameters Pr(w_k | R_j), the agent can estimate:
Pr(S_N | R_1) = ∏_{w_k ∈ S_N} Pr(w_k | R_1) = 8.6 × 10⁻⁷ and Pr(S_N | R_2) = ∏_{w_k ∈ S_N} Pr(w_k | R_2) = 4.1 × 10⁻⁴, and choose the optimal policy corresponding to reward R_2, thus successfully carrying out the task. Note that R_1 and R_2 corresponded to the two target tasks, but this mapping was determined by EM. We illustrate the limitation of the unigram model by telling the trained agent to Go with the star to green (we label this sentence S_N′). Using the learned parameters, the agent computes the following estimates: Pr(S_N′ | R_1) = ∏_{w_k ∈ S_N′} Pr(w_k | R_1) = 8.25 × 10⁻⁷ and Pr(S_N′ | R_2) = ∏_{w_k ∈ S_N′} Pr(w_k | R_2) = 2.10 × 10⁻⁵. The agent wrongly chooses reward R_2 and goes to the green room instead of taking the star to the green room. The problem with the unigram model in this case is that it gives too much weight to word frequencies (in this case go) without taking into account what the words mean or how they are used in the context of the sentence. Using the system described in Section 2, we can address these problems and also move towards more complex scenarios.
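A compact sketch of Algorithm 1 in Python, assuming the trajectory likelihoods Pr(T_i | R_j) have already been computed by IRL (passed here as an array `traj_lik`). The smoothing and per-sentence normalization follow the description above, but the exact form of the published M-step update may differ:

```python
import numpy as np

def em_model0(sentences, traj_lik, vocab, iters=50, eps=0.01):
    """sentences: list of word lists; traj_lik: array with
    traj_lik[i, j] = Pr(T_i | R_j); vocab: list of all words."""
    n, J, K = len(sentences), traj_lik.shape[1], len(vocab)
    widx = {w: k for k, w in enumerate(vocab)}
    x = np.full((K, J), 1.0 / K)                 # Pr(w_k | R_j), uniform init
    for _ in range(iters):
        # E-step: z[j, i] = Pr(R_j | S_i, T_i) under the unigram model.
        z = np.ones((J, n))
        for i, s in enumerate(sentences):
            for j in range(J):
                z[j, i] = traj_lik[i, j] * np.prod([x[widx[w], j] for w in s])
        z /= z.sum(axis=0, keepdims=True)
        # M-step: smoothed, normalized weighted word frequencies.
        x = np.full((K, J), eps)
        for i, s in enumerate(sentences):
            for w in s:
                x[widx[w]] += z[:, i] / len(s)
        x /= x.sum(axis=0, keepdims=True)
    return x, z
```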
Conclusions and Future Work
We have presented a three-component architecture for interpreting natural language instructions, where the learner has access to natural language input and demonstrations of appropriate behavior. Our future work includes fully implementing the system to be able to build abstract tasks from language information and feature relevance. | 2014-07-01T00:00:00.000Z | 2012-06-08T00:00:00.000 | {
"year": 2012,
"sha1": "6f59702aa1c114f45372b9dd6110812c23ac2d8d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "6f59702aa1c114f45372b9dd6110812c23ac2d8d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119155153 | pes2o/s2orc | v3-fos-license | Swiss-cheese action on the totalization of operads under the monoid actions operad
We prove that if a pair of semi-cosimplicial spaces (X,Y) arises from a coloured operad then the semi-totalization sTot(Y) has the homotopy type of a relative double loop space and the pair (sTot(X), sTot(Y)) is weakly equivalent to an explicit algebra over the two-dimensional Swiss-cheese operad.
Bimodules and infinitesimal bimodules over a coloured operad
In what follows we introduce the category of coloured operads as well as the categories of bimodules and infinitesimal bimodules over a coloured operad. We focus on operads with two colours {o ; c}, called {o ; c}-operads. In particular, we define the {o ; c}-operad Act_{>0} of monoid actions as in [13]. Moreover, we characterize the bimodules and infinitesimal bimodules over this operad in terms of semi-cosimplicial spaces.
The operad of monoid actions has been introduced by Hoefel, Livernet and Stasheff in [13] in the context of the recognition principle for relative loop spaces. An infinitesimal bimodule over O is an S-sequence M endowed with left and right operations ∘_i and •_i, landing in components of the form M(s_1, …, s_{i−1}, s′_1, …, s′_m, s_{i+1}, …, s_n ; s_{n+1}) for 1 ≤ i ≤ n, satisfying associativity and unit relations [1]. A map between O-infinitesimal bimodules is given by an S-sequence map preserving this structure. Let Ibimod_O be the category of infinitesimal bimodules over O. We denote by x ∘_i y (resp. x •_i y) the operation ∘_i(x ; y) (resp. •_i(x ; y)) with x ∈ O and y ∈ M (resp. x ∈ M and y ∈ O). Example 1.6. For any S-operad map η : O_1 → O_2, O_2 is endowed with an O_1-infinitesimal bimodule structure, obtained by applying η and composing in O_2:
Infinitesimal bimodules over a coloured operad
Consequently, if A is an O-space then End_A is an O-infinitesimal bimodule. The semi-cosimplicial structure is given as usual (see e.g. [1], [15] and [17]) by cofaces built from the generator *_2: one operates with *_2 in the i-th slot for i ∈ {1, …, n}, and with *_{2;c} ∘_1 x if i = n + 1. It is proved in [19] that the category of semi-cosimplicial spaces is equivalent to the category of As_{>0}-infinitesimal bimodules. Consequently, the collection M_o = {M^n_o}_{n≥0} is an infinitesimal bimodule over As_{>0}. Since As_{>0} is generated by *_2 as an operad, the structure of M_o is given by the corresponding operations with *_2, for x ∈ M^n_o and i ∈ {1, …, n}. (2)
Example 1.10. For any S-operad map η : O_1 → O_2, O_2 is endowed with an O_1-bimodule structure, obtained by applying η and composing in O_2. Consequently, if A is an O-algebra then End_A is an O-bimodule.
A priori there is no relation between an O-bimodule structure and an O-infinitesimal bimodule structure because the left operations differ. However, if η : O → M is a morphism of O-bimodules then M is an O-infinitesimal bimodule, the left infinitesimal operations being given by the left bimodule action with η(*_s) inserted in all slots other than the i-th one, where *_s is the distinguished element in O(s ; s). In [15] McClure and Smith define a monoidal structure on the category of semi-cosimplicial spaces in order to recognize loop spaces. More precisely, they prove that the group completion of the semi-totalization of a monoid in this category has the homotopy type of a loop space. We recall this construction since we need it to describe Act_{>0}-bimodules under Act. Proposition 1.11. [15, Proposition 2.2] Let X^• and Y^• be two semi-cosimplicial spaces and let X □ Y be the semi-cosimplicial space whose m-th space is given by (X □ Y)^m = (⊔_{p+q=m} X^p × Y^q)/∼, where ∼ is the equivalence relation generated by (x, d^0 y) ∼ (d^{|x|+1} x, y). The semi-cosimplicial structure is induced by the cofaces of X^• and Y^•. The category of semi-cosimplicial spaces equipped with □ is a monoidal category, denoted by (Top^{∆inj}, □), with unit e being the constant semi-cosimplicial one-point space. Proposition 1.12. For an Act_{>0}-bimodule M, the following are equivalent: i) M is endowed with an Act_{>0}-bimodule map η : Act → M; ii) M_c = {M(n ; c)}_n is a monoid with unit in (Top^{∆inj}, □), M_o = {M(n+1 ; o)}_n is a left M_c-module and there is a map of left M_c-modules h : M_c → M_o. Moreover, i) ⇒ ii) even if M is not of type Act.
Proof. Let M be an Act_{>0}-bimodule equipped with an Act_{>0}-bimodule map η : Act → M. Let M^n_c = M(n ; c) and M^n_o = M(n + 1 ; o) for n ∈ ℕ. The bimodule structure induces cofaces satisfying the semi-cosimplicial relations, as well as the two operations (3). The map η : Act → M gives us the missing cofaces, inducing a semi-cosimplicial structure on M_c and M_o such that the two operations defined in (3) make M_c into a monoid with unit and M_o into a left M_c-module. The map (4) is then a map of left M_c-modules. Conversely, let (M_c, M_o, h) be a triple satisfying the conditions of the proposition. By using the same argument as in Proposition 1.8, the constructions (3) and (4) define an Act_{>0}-bimodule structure on M. In particular, if M_c and M_o coincide with the unit e, then the corresponding Act_{>0}-bimodule is Act. There exists a map η_c from the unit to M_c, for M_c is a monoid with unit. Let η_o be the map from the unit to M_o given by η_o = h ∘ η_c. The map η : Act → M so obtained is an Act_{>0}-bimodule map.
This proposition implies that the category whose objects are monoids in (Top^{∆inj}, □) is equivalent to the category of As_{>0}-bimodules under As considered by Turchin. Furthermore, if we substitute Act_{>0}-bimodule by Act-bimodule and semi-cosimplicial space by cosimplicial space, Proposition 1.12 is still true.
Proposition 1.12 states that these data are equivalent to an Act-bimodule map. The evaluation maps: n induce homeomorphisms. It provides an example of an Act-bimodule map η : Act → M such that the totalization of M c (resp. M o ) can be described as a loop space ΩX (respectively a relative loop space Ω(X; A)) with explicit topological spaces X and A. We will prove that we can generalize this result for any Act >0 -bimodule map η : Act → M using the semi-totalization.
The free (infinitesimal) bimodule generated by an S-sequence
In what follows S is a set, O is an S-operad and M is an S-sequence. In order to prove that sTot(M_o) has the homotopy type of a relative loop space and to identify this space explicitly, we have to introduce a model category structure on the categories Ibimod_O and Bimod_O. The easiest way is to use a transfer theorem (see e.g. Theorem 3.4), which needs a left adjoint to the forgetful functor from the category of (infinitesimal) bimodules over O to Coll(S). In both cases, the first step consists in introducing the category of trees which encodes the (infinitesimal) bimodule structure. Then we label the vertices by points in M or O. Similar constructions have been considered in [5] and more recently [20].
By a tree we mean a planar rooted tree with an orientation towards the root. Let t be a tree: • The set of its vertices is denoted by V(t) and the set of its edges by E(t).
• For a vertex v, the ordered set of its input edges is denoted by in(v) and its cardinality by |v| such that in(v) = {e 1 (v), . . . , e |v| (v)}. The output edge of v is denoted by e 0 (v).
• The edges connecting two vertices are called inner edges and the set of inner edges is denoted by E int (t).
• An element e ∈ E int (t) is determined by a source vertex s(e) and a target vertex t(e) induced by the orientation of the tree.
• An edge with no source is called a leaf and the ordered set of leaves is denoted by {l 1 , . . . , l n }.
• The edge with no target is called the trunk, denoted by e 0 , and its source, the root, is denoted by r.
• Each leaf is connected to the trunk by a unique path composed of edges.
• An S-tree is a pair (t, f ) where t is a planar tree and f : E(t) → S is called an S-labelling of t.
The free infinitesimal bimodule
Definition 2.1. The trees encoding the infinitesimal bimodule structure are constructed as follows: • The join j(v 1 ; v 2 ) of two vertices v 1 and v 2 is the first common vertex shared by the two paths joining v 1 and v 2 to the root. If j(v 1 ; v 2 ) = r, then v 1 and v 2 are said to be connected to the root and if j(v 1 ; v 2 ) ∈ {v 1 ; v 2 }, then they are said to be connected. In Figure 1 the vertices v 1 and v 2 are connected whereas the vertices v 1 and v 3 are connected to the root. • A pearl tree (or ptree) is a pair (t, p) where t is a planar tree and p ∈ V(t) is called the pearl, satisfying the property: ∀v ∈ V(t) \ {p}, d(v ; p) = 1. An S-ptree is a pearl tree t together with an S-labelling of t.
where ∼ is the equivalence relation generated by the operadic compositions at the non-pearl vertices. Let x be a point in the space Ib_O(M)(s_1, …, s_n ; s_{n+1}) indexed by an S-ptree (t, f, p) and let y ∈ O(s_1, …, s_m ; s_i). The right infinitesimal module structure consists in grafting the m-corolla indexed by y to the i-th input of t and contracting the inner edge so obtained if its target does not coincide with the pearl, by using the operadic structure of O, as in Figure 3. Similarly, let x be a point in the space Ib_O(M)(s_1, …, s_m ; s_i) indexed by an S-ptree (t, f, p) and let y ∈ O(s_1, …, s_n ; s_{n+1}). The left infinitesimal module structure consists in grafting the tree t to the i-th input of the n-corolla indexed by y and contracting the inner edge so obtained if its source does not coincide with the pearl, by using the operadic structure of O. These maps pass to the quotient and are continuous.
There exists a map from the S-sequence M to Ib_O(M) which sends a point m ∈ M(s_1, …, s_n ; s_{n+1}) to the pearl n-corolla whose leaves are labelled by s_1, …, s_n, whose trunk is labelled by s_{n+1}, and whose pearl is indexed by m. We denote by (t, f, p, g) a point in Ib_O(M) indexed by the S-ptree (t, f, p) and labelled by the assignment g of points to vertices. Let (t, f, p, g) be a point in Ib_O(M). The map h̃ is defined by induction on |V(t)| as follows. If |V(t)| = 1, then the pearl p is the only vertex and t is a corolla. In this case we define h̃((t, f, p, g)) = h(g(p)). Hence the commutativity of the previous diagram is guaranteed. If t has two vertices, then there exists a unique edge e connecting the pearl p to the other vertex v. There are two cases to consider: -if s(e) = p and e is the i-th input of v, then we let h̃((t, f, p, g)) = g(v) ∘_i h(g(p)).
-if t(e) = p and e is the i-th input of p, then we let h̃((t, f, p, g)) = h(g(p)) •_i g(v). Assume h̃ has been defined for |V(t)| = n ≥ 2. Let (t, f, p, g) ∈ Ib_O(M) be such that t has n + 1 vertices. There exists an inner edge e connecting the pearl p to another vertex v such that t(e) = p. Let (t′, f′, p, g′) be the tree obtained by cutting off the corolla corresponding to the vertex v (t′ has only n vertices). We define h̃ on (t, f, p, g) from its value on (t′, f′, p, g′) by acting with g(v). Due to the associativity axioms of the infinitesimal bimodule structure of N, h̃ does not depend on the choice of v, and h̃ is an infinitesimal bimodule map. The uniqueness follows from the construction.
The free bimodule
where t is a planar tree and V_p(t) is a subset of V(t), called the set of pearls, such that each path connecting a leaf to the trunk passes through a unique pearl. An S-tree with section (or S-stree) is given by a triple (t, V_p(t), f) such that (t, f) is an S-tree and (t, V_p(t)) is a tree with section.
with ∼ the equivalence relation generated by the operadic compositions at the non-pearl vertices. Let x be a point indexed by a tree with section (t, f, V_p(t)) and let y ∈ O(s_1, …, s_m ; s_i). The right module structure consists in grafting the m-corolla indexed by y to the i-th input of t and contracting the inner edge so obtained if its target does not coincide with a pearl, by using the operadic structure of O.
Let y be a point in O(s_1, …, s_n ; s_{n+1}) and let x_1, …, x_n be points indexed by trees t_1, …, t_n. The left module structure consists in grafting each tree t_i to the i-th input of the n-corolla indexed by y and contracting the inner edges whose source is not a pearl, by using the operadic structure of O, as in Figure 5.
If nb(t) = 1, we denote by v the unique element of V(t) \ V p (t). There are two cases to consider: -if v is the source of an edge e which is connected to a pearl and e is the i-th input of the unique pearl p, theñ .
-if v coincides with the root, then all the pearls are connected to v. Let p 1 , . . . , p k be the set of ordered pearls. We defineh byh Assumeh has been defined for nb(t) = n ≥ 1.
There exists an inner edge e whose target is a pearl p i . Let v = s(e) and let (t , V p (t), f , g ) be the tree obtained from (t, V p (t), f, g) by cutting off the corolla corresponding to the vertex v. Consequently nb(t ) = n andh can be defined by induction ash Due to the associativity axioms of the bimodule structure of N,h does not depend on the choice of v andh is a map of O-bimodules. The uniqueness follows from the construction.
Model category structure on Bimod O and Ibimod O
In this section we define a model category structure on Bimod_O and Ibimod_O by using the previous adjunctions. The references used for model categories are [8], [12] and [14]. These structures have been considered by many authors in the context of operads (symmetric, non-symmetric), algebras over an operad, and left-right modules over operads, most of them in the uncoloured case; see for instance Fresse [9], Berger-Moerdijk [2] and Harper [10]. In order to be precise, we prefer to give in detail the model category structure in our context, and take advantage of this section to state lemmas that will be useful in the sequel. Weak equivalences are the continuous maps f : X → Y such that f_{*0} : π_0(X) → π_0(Y) is a bijection and f_{*n} : π_n(X ; x) → π_n(Y ; f(x)) is an isomorphism, for all x ∈ X and all n > 0.
Serre fibrations are the continuous maps f : X → Y having the homotopy lifting property, i.e., for every CW-complex A, every homotopy A × [0, 1] → Y whose restriction to A × {0} lifts to X admits a lift A × [0, 1] → X. Cofibrations are the continuous maps having the left lifting property with respect to the acyclic Serre fibrations.
Moreover this model category is cofibrantly generated. The cofibrations are generated by the inclusions ∂∆ n → ∆ n for n > 0, whereas the acyclic cofibrations are generated by the inclusions of the horns Λ n k → ∆ n for n > 0 and n ≥ k ≥ 0. We call this model category the Serre model category.
Corollary 3.2. The category Coll(S) inherits a cofibrantly generated model category structure from the Serre model category in which a map is a cofibration, a fibration or a weak equivalence if each of its components is. [2, section 2.5] Let C 1 be a cofibrantly generated model category and let I (resp. J) be the set of generating cofibrations (resp. acyclic cofibrations). Let L : C 1 C 2 : R be a pair of adjoint functors. Assume that C 2 has small colimits and finite limits. Define a map f in C 2 to be a weak equivalence (resp. a fibration) if R( f ) is a weak equivalence (resp. fibration). If the following three conditions are satisfied: i) the functor R preserves filtered colimits, ii) C 2 has a functorial fibrant replacement, iii) for each fibrant objects X ∈ C 2 we have a functorial path object Path(X) with X → Path(X) X × X (a weak equivalence followed by a fibration) a factorization of the diagonal map, then C 2 is equipped with a cofibrantly generated model category (LI, LJ) with LI = {L(u) | u ∈ I} and LJ = {L(v) | v ∈ J}. Furthermore (L, R) is a Quillen pair. The O-infinitesimal bimodule structure and the functoriality of Path(−) are induced by that of M. The factorization of the diagonal map is given pointwise with i a cofibration in Coll(S), f : A → N an S-sequence map called the attaching map andf the O-bimodule map induced by f (see Proposition 2.6). In both cases the map N → M so defined is a cofibration.
Definition 3.7. Let A, B and C be three topological spaces and let f : A → B be a continuous map. We denote by Top_f(B ; C) the space of continuous maps g : B → C whose composite with f is prescribed.
Lemma 3.8 [19] Let M and N be two O-infinitesimal bimodules. If M is obtained from N by attaching cells as in (6), then one has a homeomorphism identifying Ibimod_O(M ; Y) with a space of maps out of the attached cells,
with f the attaching map and g : N → Y an O-infinitesimal bimodule map. Similarly, let M and N be two O-bimodules. If M is obtained from N by attaching cells as in (7), then one has an analogous homeomorphism for Bimod_O(M ; Y), with f the attaching map and g : N → Y an O-bimodule map. Definition 3.9. i) As in [19] (see also [8, Lemma 4.24]), if A and B are O-infinitesimal bimodules (resp. O-bimodules), and A^c is a cofibrant replacement of A, then Ibimod_O(A^c ; B) (resp. Bimod_O(A^c ; B)) is independent, up to weak equivalences, of the choice of a cofibrant replacement of A, since every O-infinitesimal bimodule (resp. O-bimodule) B is fibrant. This space is called the space of derived O-infinitesimal bimodule (resp. O-bimodule) maps from A to B and is denoted by Ibimod^h_O(A ; B) (resp. Bimod^h_O(A ; B)). ii) Similarly, Berger and Moerdijk define a model category structure on the category of S-coloured operads in [2], and Operad^h_S(A ; B) denotes the space of derived S-operad maps from A to B.
iii) If C is the category Bimod_{Act>0} (resp. Operad_{{o ; c}}), then for any cofibrant model A of Act_{>0}, the family A_c gives rise to a cofibrant replacement of As_{>0} in the category Bimod_{As>0} (resp. Operad). As a consequence, the homotopy fiber of the projection onto the closed part is independent (up to weak equivalences) of the choice of a cofibrant model. By abuse of notation, we denote by Ω(Bimod^h_{As>0}(As_{>0} ; M_c) ; Bimod^h_{Act>0}(Act_{>0} ; M)) and Ω(Operad^h(As_{>0} ; O) ; Operad^h_{{o ; c}}(Act_{>0} ; X)) respectively the homotopy fibers of the projections p^h_1 and p^h_2. They are called relative loop spaces. Hence, in order to describe the spaces of derived maps and the relative loop spaces, we need to understand specific cofibrant replacements in the different categories involved. This is the aim of the two following subsections.
Cofibrant replacement of Act in Ibimod Act
Proof. Since Act_{>0} is generated as a coloured operad by *_{2;c} and *_{2;o} with the relations (1) of Definition 1.4, the previous structure makes the replacement into an Act_{>0}-infinitesimal bimodule. For N ∈ ℕ, let the N-th stage be the sub-Act_{>0}-infinitesimal bimodule generated by the components (n ; c) for 0 ≤ n ≤ N and (n ; o) for 1 ≤ n ≤ N. By convention, the stage N = −1 is the infinitesimal bimodule Ib_{Act>0}(∅) and ∂∆^0 = ∅. Each stage is obtained from the previous one by attaching cells; the attaching map is the restriction to the boundary of the evident map. Remark 3.11. According to Definition 3.9, the sequence given by ∆(n) = ∆^n inherits an As_{>0}-infinitesimal bimodule structure, and it is a cofibrant replacement of As in the model category Ibimod_{As>0} (see also [19, Proposition 3.2]). Theorem 3.12. Let M be an Act_{>0}-infinitesimal bimodule. One has: Ibimod^h_{Act>0}(Act ; M) ≃ Ibimod^h_{As>0}(As ; M_c) ≃ sTot(M_c). Proof. From Proposition 3.10 and the previous remark, a cofibrant replacement of Act in the model category Ibimod_{Act>0} is given by the construction above, and a cofibrant replacement of the associative operad As in the model category Ibimod_{As>0} is given by ∆. Since M_c is an infinitesimal bimodule over As_{>0} (see Proposition 1.8), Definition 3.9 induces the following:
Ibimod^h_{As>0}(As ; M_c) ≃ Ibimod_{As>0}(∆ ; M_c) and Ibimod^h_{Act>0}(Act ; M) ≃ Ibimod_{Act>0}( ; M). Let i be the inclusion which sends a point f := {f_{n;c} : ∆^n → M(n ; c)}_{n∈ℕ} to the map g it determines.
The space Ibimod_{As>0}(∆ ; M_c) is a deformation retract of Ibimod_{Act>0}( ; M), with a homotopy H sending a point (f × u) to the map H(f ; u). The map H is continuous and H(f ; 1) = f. Furthermore, H(f ; 0) is in the image of the inclusion map i, and for all f ∈ Ibimod_{As>0}(∆ ; M_c) and all u ∈ [0, 1], H(i(f) ; u) = i(f).
Cofibrant replacement of Act >0 in Bimod Act >0
Proposition 3.14. A cofibrant replacement of the Act_{>0}-bimodule Act_{>0} is given by the analogous construction. Proof. Since Act_{>0} is generated as a coloured operad by *_{2;c} and *_{2;o} with the relations (1) of Definition 1.4, the previous structure induces an Act_{>0}-bimodule structure. For N > 0, let the N-th stage be the sub-Act_{>0}-bimodule generated by the components (n ; k) for k ∈ {o ; c} and n ∈ {1, …, N}. By convention, the stage N = 0 is the Act_{>0}-bimodule B_{Act>0}(∅). The N-th stage is obtained from the (N−1)-st one by attaching cells. For N ≥ n, the n-th component of the N-th stage agrees with that of the full replacement, for k ∈ {o ; c}; consequently, the colimit of the stages recovers the replacement, which is therefore cofibrant. The weak equivalence with Act_{>0} is due to the convexity of the components in each degree.
Remark 3.15. According to Definition 3.9, the sequence of closed components (n ; c) inherits an As_{>0}-bimodule structure, and it is a cofibrant replacement of As_{>0} in the model category Bimod_{As>0} (see [19, Proposition 4.1]).
Relative delooping of sTot(M o )
Let M be an Act_{>0}-bimodule endowed with a map η : Act → M. Since the semi-cosimplicial space M_o is not a monoid in (Top^{∆inj}, □) (see Proposition 1.11), M_o is not a bimodule over As_{>0}, and we cannot expect that its semi-totalization has the homotopy type of a loop space. However, we will use the left module structure on M_o to prove that the pair (sTot(M_c) ; sTot(M_o)) has the homotopy type of an SC_1-space. The first step consists in showing that sTot(M_o) is weakly equivalent to the homotopy fiber of the map (8) of Definition 3.9. The next definition gives a model of this homotopy fiber using the cofibrant replacement of Act_{>0} (see Proposition 3.14). The families of maps g_{n;c} and g_{n;o} are required to satisfy:
• g_{n;c}(x •_i *_{2;c} ; t) = g_{n−1;c}(x ; t) •_i *_{2;c} for x ∈ (n−1 ; c) and 1 ≤ i ≤ n−1,
• g_{n;c}(*_{2;c}(x ; y) ; t) = *_{2;c}(g_{l;c}(x ; t) ; g_{n−l;c}(y ; t)) for x ∈ (l ; c) and y ∈ (n−l ; c),
• g_{n;o}(x •_i *_{2;c} ; 1) = g_{n−1;o}(x ; 1) •_i *_{2;c} for x ∈ (n−1 ; o) and 1 ≤ i ≤ n−2,
• g_{n;o}(x •_{n−1} *_{2;o} ; 1) = g_{n−1;o}(x ; 1) •_{n−1} *_{2;o} for x ∈ (n−1 ; o),
• g_{n;o}(*_{2;o}(x ; y) ; 1) = *_{2;o}(g_{l;c}(x ; 1) ; g_{n−l;o}(y ; 1)) for x ∈ (l ; c) and y ∈ (n−l ; o),
with the boundary conditions g_{n;c}(x ; 0) = η(*_{n;c}) for x ∈ (n ; c). This model for the space of relative loops is denoted by Ω(Bimod_{As>0}( _c ; M_c) ; Bimod_{Act>0}( ; M)). Proof. As seen in the first section, sTot(M_o) ≃ Ibimod^h_{As>0}(As ; M_o), using the structure (2). The first step of the proof consists in building a cofibrant replacement of As in the category of infinitesimal bimodules over As_{>0}, denoted with a tilde, such that there exists a map ξ : Bimod_{Act>0}( ; M_*) → Ibimod_{As>0}(˜ ; M_o). Let us recall that a point g ∈ Bimod_{Act>0}( ; M_*) is described by maps g_{n;c} : (n ; c) → M_*(n ; c), x ↦ η(*_{n;c}), for n > 0, and g_{n;o} : (n ; o) → M_*(n ; o), for n > 0, satisfying the relations above, for x ∈ (l ; c) and y ∈ (n−l ; o).
In the first case, the class of such a point lies in the (n−1)-st stage ˜_{n−1} by axioms (i) and (ii). In the second case we have an identification; consequently, ˜_n is obtained from ˜_{n−1} by the pushout diagram (10), where A is the sequence given by A(n) = [0, 1]^n and the empty set otherwise. The attaching map is the restriction of the quotient map q : [0, 1]^n → [0, 1]^n/∼ to the boundary. Moreover, if n ≥ i then ˜_n(i) = ˜(i), and the map ∂A → A is a cofibration. So lim_n ˜_n = ˜ and ˜ is cofibrant. This construction implies that ˜(m) is a CW-complex. Let us recall that if A(n) = [0, 1]^n and the empty set otherwise, then the points in Ib_{As>0}(A)(m) are the pairs (t ; x) with x ∈ A(n) and t a {c}-ptree satisfying Relation (11). We denote by tr^n_m the number of {c}-ptrees satisfying Relation (11). The space ˜_0(m) is the disjoint union of tr^0_m points, that is, a CW-complex.
Finally, for m > n, the space ˜_n(m) is obtained from the CW-complex ˜_{n−1}(m) by attaching tr^n_m cells of dimension n according to the infinitesimal bimodule structure over As_{>0}, and thus is a CW-complex.
In order to prove that ξ is a weak equivalence, we will introduce two towers of fibrations. For k ≥ 0, define A_k and B_k to be the subspaces of the corresponding products of mapping spaces Top(˜(i) ; M^i_o), with A_k satisfying the Act_{>0}-bimodule relations and B_k the As_{>0}-infinitesimal bimodule relations. In other words, A_k and B_k are respectively the spaces Bimod_{Act>0}( _{k+1} ; M_*) and Ibimod_{As>0}(˜_k ; M_o), where _{k+1} is the sub-Act_{>0}-bimodule introduced in the proof of Proposition 3.14. The projection forgetting the top component induces a map B_{k+1} → B_k. From Lemma 3.3, the corresponding restriction map is a fibration. The space B_{k+1} is obtained from B_k by a pullback diagram; since fibrations are preserved by pullbacks, B_{k+1} → B_k is a fibration. Similarly, an analogous pullback square makes the map A_{k+1} → A_k induced by the projection into a fibration. So we consider the two towers of fibrations, whose limits recover the two mapping spaces. By restriction, the map ξ induces a map between the two towers, with ξ = lim_k ξ_k = holim_k ξ_k. Consequently, ξ is a weak equivalence if each ξ_k is a weak equivalence. We will prove this result by induction on k: • ξ_0 and ξ_1 coincide with the identity; they are weak equivalences.
• Assume that ξ_{k−1} is a weak equivalence. We consider the following diagram, where g is a point in A_{k−1}, F_A is the fiber over g and F_B the fiber over ξ_{k−1}(g). Since the two left horizontal arrows are fibrations, the map ξ_k is a weak equivalence if the induced map ξ_g is a weak equivalence. The fiber F_A is identified with a mapping space out of the attached cells. Similarly, ˜_k is obtained from ˜_{k−1} by the pushout diagram (10), so the fiber F_B is homeomorphic to an analogous mapping space, and we have a commutative square relating them. Consequently ξ_k is a weak equivalence.
Proposition 4.5. The space Ω(Bimod^h_{As>0}(As_{>0} ; M_c) ; Bimod^h_{Act>0}(Act_{>0} ; M)) is weakly equivalent to the space Bimod^h_{Act>0}(Act_{>0} ; M_*). Proof. In this proof, the cofibrant replacement of Proposition 3.14 will serve as a cofibrant model of the Act_{>0}-bimodule Act_{>0}. We can consider Bimod_{Act>0}( ; M_*) as a subspace of Ω(Bimod_{As>0}( _c ; M_c) ; Bimod_{Act>0}( ; M)) through an inclusion i. In order to show that i is a weak equivalence, we introduce two towers of fibrations. One of them is the tower A_k of Proposition 4.4. The second one, C_k, consists of the truncated families satisfying the relations of Definition 4.1. The map C_{k+1} → C_k induced by the projection is a fibration, due to Lemma 3.3 and a pullback diagram. The restriction of the inclusion i induces a map between the two towers. We will prove that i is a weak equivalence by induction on k: • If k = 0, a point in C_0 is a pair (g_{1;c} ; g_{1;o}), and the points in the image of i_0 are the pairs satisfying the corresponding compatibility. Since g_{1;c}(* ; 0) = η(*_{1;c}) for any pair in C_0, the inclusion i_0 induces a deformation retract. • From now on, we assume that i_{k−1} is a weak equivalence for k ≥ 1. We consider the diagram where g is a point in A_{k−1}, F_A is the fiber over g and F_C the fiber over i_{k−1}(g). Since the two left horizontal arrows are fibrations, the map i_k is a weak equivalence if the induced map i_g is a weak equivalence.
A point in F_C is defined by a pair (g_{k+1;c}, g_{k+1;o}) satisfying the relations of Definition 4.1. Since g_{k+1;c} is in the fiber over i_{k−1}(g), the map sends all the faces of (k+1; c) × [0,1] to η(*_{k+1;c}) except the face (k+1; c) × {1}. Furthermore, there is no interaction between g_{k+1;c} and g_{k+1;o}. On the other hand, the points in the image of i_g coincide with the pairs (g_{k+1;c}, g_{k+1;o}) such that: In order to prove that i_g induces a deformation retract, we introduce the homotopy (also described in [ ]). In other words, the points in the image of i_g coincide with the pairs such that: Finally, the deformation retract is given by: The space Ω(Bimod_{As_{>0}}(−_c; M_c); Bimod_{Act_{>0}}(−; M)) is weakly equivalent to Bimod_{Act_{>0}}(−; M_*). If we assume that B(0) ≃ ∗, we know from [19, Theorem 6.2] and the As_{>0}-bimodule map β ∘ α : As → B that sTot(B) is weakly equivalent to the loop space Ω Bimod^h_{As_{>0}}(As_{>0}; B). Since B is not an operad, we cannot expect that its semi-totalization has the homotopy type of a double loop space. However, we will prove that Bimod^h_{As_{>0}}(As_{>0}; B) has the homotopy type of a relative loop space by building an {o; c}-operad X from the pair (O, B):

X(n; c) = O(n) for n ≥ 0;   X(n; o) = B(n − 1) for n > 0,   (12)

and the empty set otherwise. The operadic structure is defined accordingly, and the {o; c}-operad X is endowed with a map of operads η : Act → X.
Double relative delooping: a particular case
The operadic axioms are satisfied except for the unit axiom, which holds under the following assumption. Theorem 5.1. Under Assumption (13), the relative loop space Ω(Operad^h(As_{>0}; O); Operad^h_{{o;c}}(Act_{>0}; X)) is weakly equivalent to Bimod^h_{As_{>0}}(As_{>0}; B). Proof. It is a consequence of Proposition 5.5 and Proposition 5.6. Definition 5.2. In order to describe the homotopy fiber of the map (9) of Definition 3.9, we need a cofibrant replacement of Act_{>0} as a coloured operad. Since Act_{>0} is cofibrant as an {o; c}-sequence, we know from [3] that the Boardman-Vogt resolution of Act_{>0}, denoted by BV(Act_{>0}) or just WA in our case, is the object we are looking for. We recall the construction: • Let tree^o_n be the subset of {o; c}-trees consisting of trees (t, f) with n leaves, where f is an {o; c}-labelling of t, the trunk is labelled by o, and the displayed conditions are satisfied. • The operad WA is the {o; c}-sequence given, on tree^o_n, by a quotient of products of the spaces Act_{>0}(f(e_1(v)), . . . , f(e_{|v|}(v)); f(e_0(v))) over the vertices v, together with lengths in [0,1] attached to the inner edges. • It is well known that the operad {WA(n; c)}_{n>0} is a cofibrant replacement of As_{>0} as an operad. It is usually called the Stasheff operad.
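In display form, the equivalence of Theorem 5.1 reads as follows; this is only a typeset restatement of the statement above, under Assumption (13) and with X the {o; c}-operad built from the pair (O, B) in (12):

\[
\Omega\bigl(\operatorname{Operad}^{h}(As_{>0};\,O)\;,\ \operatorname{Operad}^{h}_{\{o,c\}}(Act_{>0};\,X)\bigr)\ \simeq\ \operatorname{Bimod}^{h}_{As_{>0}}(As_{>0};\,B).
\]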
Proposition 5.5. Under Assumption (13):
satisfying in particular, for x ∈ WA(l + 1; o), y ∈ WA(n − l; c) and 1 ≤ i ≤ l, the relation: Define ≈ to be the equivalence relation on WA(n; o) generated by: • t_e = l_e for all e ∈ E_int(T) with f(e) = o, and • t_e = l_e if there exists e_1 < e such that t_{e_1} = l_{e_1} = 1 and f(e_1) = c.
Let us prove that ˜ is a cofibrant replacement of As_{>0} as an As_{>0}-bimodule. The bimodule structure is given by formulas in which δ_{n;c} is the n-corolla in {c}-trees and δ_{n;o} is the n-corolla in tree^o_n. This structure satisfies the bimodule axioms over As_{>0} and it makes f̃ into an As_{>0}-bimodule map. Furthermore, ˜ is a cofibrant replacement. Cofibrant: let ˜_n be the As_{>0}-bimodule generated by {˜(i)}_{i=1,...,n} for n > 0. By convention, ˜_0 is the As_{>0}-bimodule B_{As_{>0}}(∅). Let us notice that the map WA(n+1; o) → ˜(n) preserves the boundary, and by definition a point in ∂WA(n+1; o) has one of the forms listed above, hence lies in ˜_{n−1}. Consequently, ˜_n is obtained from ˜_{n−1} by the pushout diagram (15), where A is the sequence given by A(n) = WA(n+1; o) and the empty set otherwise. The attaching map is the restriction of the quotient map q : WA(n+1; o) → ˜(n) to the boundary. Furthermore, if i ≥ n then ˜_i(n) = ˜(n), and the map ∂A → A is a cofibration. So lim_i ˜_i = ˜ and ˜ is cofibrant. As in the proof of Proposition 4.4, these sequences of pushout diagrams imply that the spaces ˜(n) are CW-complexes for each n.
Contractible:
The map q : WA(n+1; o) → ˜(n) is a continuous map between compact CW-complexes.
Since the fiber of q over a point is homeomorphic to a product of polytopes, which is contractible, the map q is a weak equivalence [18, Main Theorem]. Hence ˜(n) is contractible for n > 0.
In order to prove that ξ is a weak equivalence, we introduce two towers of fibrations. Define A_k and B_k to be the subspaces of the product of the spaces Top(˜(i); B(i)), with A_k satisfying the operadic relations and B_k the As_{>0}-bimodule relations, for k > 0. In other words, A_k and B_k are respectively the spaces Operad(WA_{k+1}; X_*) and Bimod_{As_{>0}}(˜_k; B), where WA_{k+1} is the sub-operad generated in arities at most k + 1, so that, by restriction, the map ξ induces a map between the two towers, with ξ = lim_k ξ_k = holim_k ξ_k. Consequently, ξ is a weak equivalence if each ξ_k is a weak equivalence. We will prove this result by induction on k: • ξ_1 coincides with the identity. It is a weak equivalence.
• Assume that ξ_{k−1} is a weak equivalence. We consider the following diagram, where g is a point in A_{k−1}, F_A is the fiber over g, and F_B the fiber over ξ_{k−1}(g). Since the two left horizontal arrows are fibrations, the map ξ_k is a weak equivalence if the induced map ξ_g is a weak equivalence. Similarly, ˜_k is obtained from ˜_{k−1} by the pushout diagram (15). So the fiber F_B is homeomorphic to the space Top^{ξ_{k−1}(g)_k ∘ q}(WA(k+1; o); ∂WA(k+1; o); B(k)), and we have the commutative square: Consequently, ξ_k is a weak equivalence.
Proof. We can consider Operad_{{o;c}}(WA; X_*) as a subspace of Ω(Operad(−; X_c); Operad_{{o;c}}(WA; X)) using the inclusion: In order to show that i is a weak equivalence, we introduce two towers of fibrations. One of them is the tower A_k of Proposition 5.5. The second one, C_k, is defined using the spaces Top(WA(i; c) × [0,1]; X(i; c)). The restriction of the inclusion i to the space A_k induces a map between the two towers. Since the space Ω(Operad(−; X_c); Operad_{{o;c}}(WA; X)) is weakly equivalent to the limit of the C_k, the map i is a weak equivalence if each i_k is a weak equivalence. We will prove this result by induction on k: • If k = 1, a point in C_1 is a pair (g_{2;c}, g_{2;o}), whereas the points in the image of i_1 coincide with the pairs satisfying g_{2;c} : WA(2; c) × [0,1] → X(2; c); (x; t) ↦ η(*_{2;c}).
Since g_{2;c}(x; 0) = η(*_{1;c}) for any pair in C_1, the inclusion i_1 induces the following deformation retract: • From now on we assume that i_{k−1} is a weak equivalence for k ≥ 2. We consider the following diagram, where g is a point in A_{k−1}, F_A is the fiber over g, and F_C the fiber over i_{k−1}(g). Since the two left horizontal arrows are fibrations, the map i_k is a weak equivalence if the induced map i_g is a weak equivalence, the relevant boundary inclusion being an acyclic cofibration. In order to prove that i_g induces a deformation retract, we consider a lift H in the following diagram: In other words, the points in the image of i_g coincide with the pairs such that g_{k+1;c}(x; t) = g_{k+1;c}(H((x; t); 1)) = η(*_{k+1;c}), for x ∈ WA(k+1; c) and t ∈ [0,1]. | 2014-10-13T09:33:16.000Z | 2014-10-13T00:00:00.000 | {
"year": 2014,
"sha1": "758f96cfe3ed321ebdb7d74d17838b6dd95b3322",
"oa_license": null,
"oa_url": "http://msp.org/agt/2016/16-3/agt-v16-n3-p15-s.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "758f96cfe3ed321ebdb7d74d17838b6dd95b3322",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
213865246 | pes2o/s2orc | v3-fos-license | Curcumin alleviates LPS-induced inflammation and oxidative stress in mouse microglial BV2 cells by targeting miR-137-3p/NeuroD1
Curcumin has been reported to exert protective effects on inflammation-related diseases, including spinal cord injury (SCI). Numerous lines of evidence have suggested that miRNAs are among the important targets of curcumin during its anti-inflammatory function. However, little is known about the contribution of miRNAs to the role of curcumin in SCI. Thus, the objective of this study is to determine the role of miRNA (miR)-137-3p during curcumin treatment after SCI. Expression of miR-137-3p and NeuroD1 was detected using RT-qPCR and western blot assays. Inflammation and oxidative stress were measured via the protein expression levels of tumor necrosis factor (TNF)-α, interleukin (IL)-1β, and inducible nitric oxide synthase (iNOS). The target binding between miR-137-3p and NeuroD1 was confirmed via the luciferase reporter assay and RNA immunoprecipitation. LPS induced a higher expression of TNF-α, IL-1β, and iNOS in mouse microglial BV2 cells, which was attenuated by curcumin. miR-137-3p was downregulated and NeuroD1 was upregulated under LPS challenge. Curcumin also alleviated the LPS-induced regulation of miR-137-3p and NeuroD1. The knockdown of miR-137-3p and ectopic expression of NeuroD1 could individually abolish the curcumin-mediated downregulation of TNF-α, IL-1β, and iNOS in LPS-challenged BV2 cells. Besides, NeuroD1 was inversely regulated by miR-137-3p via direct binding. Silencing of NeuroD1 reversed the promoting effect of miR-137-3p downregulation on inflammation and oxidative stress in the presence of LPS and curcumin. In conclusion, downregulation of miR-137-3p abolishes curcumin-mediated protection against LPS-induced inflammation and oxidative stress in mouse microglial BV2 cells, depending on the direct upregulation of NeuroD1.
Introduction
Spinal cord injury (SCI) is a devastating condition that often affects young and healthy individuals worldwide. Most of these injuries are caused by trauma, which accounts for approximately 10,000 new cases annually. 1 More than half of all SCI patients suffer cervical spine injury. 2 The incidence of SCI is relatively low compared with other types of injuries or major debilitating diseases; however, the incidence of SCI has been increasing each year. 3 Moreover, SCI always results in disability, placing a huge burden on human society and exerting a tremendous impact on the mind and body of patients.
SCI consists of primary injury and secondary injury. The primary injury immediately destroys the cell membrane, myelin and axons, and microvessels, thus triggering secondary injuries, the inflammation response, and oxidative stress. 4 Recent research in models of SCI has shown that spinal cord microglia become activated and underlie the pathological pain. 5 Microglial cells derived from tissue monocytes differentiate into macrophages. 6 Activated microglia begin infiltrating the injury site around 24 h after SCI and rapidly release cytokines, chemokines, nitric oxide (NO) and reactive oxygen species (ROS), which stimulate the inflammation cascade. 1 Many studies have focused on the acute response to SCI, such as the biochemical cascades that contribute to secondary damage. However, the prognosis of SCI patients is greatly determined by the inflammation response, which plays an important role in regulating the pathogenesis of SCI. 7,8 Multiple neuroprotective agents have been found for SCI therapy, including curcumin. Curcumin [1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione] is a natural polyphenolic compound extracted from the rhizome of Curcuma longa L. Studies indicate that it has potent anti-cancer, anti-arthritic, and anti-diabetic activities. Recently, curcumin has emerged as a potential therapeutic drug in SCI treatment. 6 In SCI, curcumin exerts a treatment effect to protect neurons and to inhibit oxidative and inflammatory reactions. 9 The relevant mechanisms underlying the protection of SCI by curcumin include the promotion of superoxide dismutase (SOD) and GSH, and the suppression of inflammation-related factors (e.g. NF-κB), pro-inflammatory cytokines (e.g. TNF-α, IL-1β, IL-6 and RANTES), apoptosis-related genes (e.g. caspase-3/7), and oxidation-associated factors (e.g. MDA and CAT), 9 as well as signaling pathways. [10][11][12][13] MicroRNAs (miRNAs) are endogenous small non-coding RNAs consisting of about 22 nucleotides. It is noted that miRNAs can serve as diagnostic, prognostic and therapeutic biomarkers in diseases. 14 Altered miRNA expression is closely correlated with many pathological processes, including cell proliferation, apoptosis, carcinogenesis, neuroinflammation and traumatic SCI. 15,16 The imbalance of miRNAs is related to a variety of central nervous system diseases and, additionally, is also associated with glial differentiation. 17,18 miRNA (miR)-137-3p, a well-documented tumor suppressor, has been reported to function in SCI mouse/rat models 19,20 induced by spinal cord contusion, as well as in SCI cell models induced by high glucose and H2O2. 21,22 Besides, the down-regulation of miR-137-3p is invariably observed in serum from SCI patients, rat SCI models, and H2O2-induced astrocytes in vitro. 19,20 However, the role of miR-137-3p and its detailed working mechanism in SCI have not been fully elucidated to date. Although evidence has suggested miRNAs as one of the important targets of curcumin when exhibiting its anti-cancer properties, 23,24 little is known about the contribution of miRNAs to the role of curcumin in SCI.
Lipopolysaccharide (LPS) is the most abundant component within the cell wall of Gram-negative bacteria and has been extensively used in models studying inflammation. 25 Cell-based SCI models induced by LPS have been widely used for exploring the pathogenesis of SCI and testing new therapeutic medicines for SCI. Therefore, we herein constructed an SCI cell injury model in mouse microglial BV2 cells challenged by LPS. The protective effect of curcumin against LPS-induced inflammation and oxidative stress was also verified. Finally, the role of miR-137-3p in the process of curcumin-mediated protection in LPS-challenged BV2 cells was investigated.
Chemicals
Lipopolysaccharide (LPS; L4391) from Escherichia coli O111:B4 and curcumin (C1386) were purchased from Sigma Aldrich (St. Louis, MO, USA). Stock solutions of 100 µg mL−1 LPS and 100 mM curcumin were prepared in cell culture medium and DMSO, respectively.
LPS stimulation of microglia
For LPS stimulation, BV2 cells were cultured in 6-well plates (Corning, NY, USA) prior to the different treatments. BV2 cells (90% confluence) were incubated in cell growth medium, and LPS was added to a final concentration of 1 µg mL−1 for 48 h. The control group consisted of cells without any treatment.
Treatment of curcumin and experimental groups
BV2 cells were exposed to curcumin at a concentration of 10 µM for 48 h. In the preliminary experiments, BV2 cells were divided into three groups: control (without any treatment), LPS (challenged with 1 µg mL−1 LPS) and LPS + curcumin (simultaneous treatment with 1 µg mL−1 LPS and 10 µM curcumin). For the loss-of-function assays, BV2 cells were transfected with anti-miR-137-3p or anti-miR-NC, followed by treatment with LPS and curcumin. Cells in the gain-of-function analysis were grouped similarly to those in the loss-of-function assays. In the rescue experiments, BV2 cells were pretreated by co-transfection of anti-miR-137-3p and siRNA against NeuroD1 (siNeuroD1) or a non-specific sequence (scramble), and the co-transfected cells were then subjected to LPS and curcumin for 48 h. Notably, all groups involving curcumin treatment contained less than 0.1% DMSO.
Methyl thiazolyl tetrazolium (MTT) assay
The MTT assay was performed to evaluate the viability of BV2 cells after treatment with or without LPS and curcumin for 48 h. Subsequently, the cells were incubated with 5 mg mL−1 MTT (Sigma Aldrich; 20 µL in FBS-free medium) for 4 h, and then 100 µL of dimethyl sulfoxide (Sigma Aldrich) was added with vigorous shaking for 5 min. The optical density (OD) at 490 nm was read on a Benchmark Plus™ microplate spectrometer (Bio-Rad, Hercules, CA, USA). The measurement for each group was repeated 4 times.
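A minimal sketch of how such readings are typically reduced to percent viability; the OD values, blank correction, and replicate counts below are illustrative assumptions, not data from this study:

import numpy as np

# Hypothetical OD490 readings from 4 replicate wells per group
od_control = np.array([0.82, 0.85, 0.80, 0.83])   # untreated cells
od_treated = np.array([0.44, 0.41, 0.43, 0.40])   # e.g. LPS-challenged cells
od_blank = 0.05                                   # assumed medium-only background

# Percent viability relative to the untreated control after blank subtraction
viability = (od_treated - od_blank) / (od_control.mean() - od_blank) * 100
print(f"viability: {viability.mean():.1f} +/- {viability.std(ddof=1):.1f} %")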
Cell transfection
BV2 cells were seeded into a 6-well plate (Corning) and incubated overnight. When the cells reached 80% confluence, transient transfection was carried out with Lipofectamine™ 2000 (Invitrogen) according to the manufacturer's instructions. The pcDNA4.1 vector was purchased from Thermo Fisher Scientific (Waltham, MA, USA), and the recombinant vector pcDNA4.1-NeuroD1 was constructed. Specific siRNAs against mouse NeuroD1 (siNeuroD1), mmu-miR-137-3p mimic, and anti-miR-137-3p, as well as their negative controls (scramble, miR-NC mimic and anti-NC), were obtained from GenePharma (Shanghai, China). Uniformly, 30 nM miRNA mimics, 50 nM siRNAs or 2 µg of eukaryotic vectors was used for transfection. For the rescue assays, miRNA (20 nM) and vectors (1 µg) were co-transfected into BV2 cells. The transfected cells were incubated for 24 h before further study.
Western blot assay
Total protein from cultured BV2 cells was isolated in RIPA lysis buffer (Beyotime) supplemented with phenylmethylsulfonyl fluoride (PMSF; Sangon, Shanghai, China). The protein concentrations were determined using the Bradford protein assay reagent (Sangon). Equal amounts of protein (20 µg) from each sample were loaded for the standard western blot procedure. GAPDH on the same membrane served as an internal standard to normalize the protein levels. The primary antibodies were purchased from Cell Signaling Technology (CST; Danvers, Massachusetts, USA) and were as follows: TNF-α (#3707, 1 : 1000), IL-1β (#31202, 1 : 1000), iNOS (#2982, 1 : 1000), and GAPDH (#97166, 1 : 1000). Anti-NeuroD1 (#213725, 1 : 1000) was obtained from Abcam (Cambridge, UK). Quantification of the western blot bands was performed in ImageJ, and the results are presented as fold change normalized to the control group.
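A sketch of the double normalization implied here, first to GAPDH on the same membrane and then to the control group; the band intensities are hypothetical ImageJ readouts:

# Hypothetical densitometry values (arbitrary units)
bands = {
    "control": {"TNFa": 1200, "GAPDH": 9800},
    "LPS": {"TNFa": 4100, "GAPDH": 9600},
}

ratio = {g: v["TNFa"] / v["GAPDH"] for g, v in bands.items()}       # step 1: divide by GAPDH
fold_change = {g: r / ratio["control"] for g, r in ratio.items()}   # step 2: divide by control
print(fold_change)  # {'control': 1.0, 'LPS': ~3.49}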
Bioinformatics analysis
The identification of putative miRNA targets was performed using the miRNA target analysis tool TargetScan Mouse Release 5.2 (http://www.targetscan.org/). With the search term NeuroD1, several miRNAs were predicted to have the potential to bind to the mouse NeuroD1 ENSMUST00000041099.4 3′UTR.
Luciferase reporter assay and RNA immunoprecipitation (RIP)
Considering the bioinformatics analysis, we hypothesized that NeuroD1 is a potential downstream target of miR-137-3p, and the luciferase reporter assay was adopted to verify this binding. The mouse NeuroD1 3′UTR fragment containing the potential binding sites of mmu-miR-137-3p (NeuroD1-wt), as well as the mutated NeuroD1 3′UTR sequence (NeuroD1-mut), was cloned by PCR into the psi-CHECK vector (Invitrogen). BV2 cells were transfected according to the following groups: NeuroD1-wt + miR-NC mimic (miR-NC), NeuroD1-wt + miR-137-3p mimic (miR-137-3p), NeuroD1-mut + miR-NC, and NeuroD1-mut + miR-137-3p. The psi-CHECK vector itself provides a strong Renilla luciferase signal for normalization. After 24 h of incubation, the cells were collected to measure the Firefly and Renilla luciferase activities using the dual-luciferase reporter assay system (Promega). All data are the average of at least three independent transfections. RIP was performed in BV2 cell extracts after transfection of miR-137-3p/NC. The Magna RIP™ RNA-binding protein immunoprecipitation kit (Millipore, Bradford, MA, USA) was used to detect, by RT-qPCR, the expression of NeuroD1 mRNA in the samples bound to the Ago2 or IgG antibody. All operations followed the standard instructions.
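The readout of the reporter assay reduces to a normalized ratio; a minimal sketch with hypothetical luminescence counts, in which Firefly is divided by the Renilla internal control and then expressed relative to the miR-NC group:

# Hypothetical luminescence counts for the NeuroD1-wt reporter
firefly = {"miR-NC": 52000, "miR-137-3p": 21000}
renilla = {"miR-NC": 9800, "miR-137-3p": 9500}

norm = {g: firefly[g] / renilla[g] for g in firefly}
relative = {g: n / norm["miR-NC"] for g, n in norm.items()}
print(relative)  # a ratio well below 1 for miR-137-3p indicates repression through the 3'UTR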
Statistical analysis
Statistics were analyzed using SPSS 21.0 (SPSS Inc., IBM Corp., Armonk, NY, USA) and presented as the mean ± SD. Student's t-test was utilized for comparisons between two groups, and one-way ANOVA was used for data comparisons among multiple groups. P < 0.05 was considered statistically significant.
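A minimal sketch of the two comparisons described, with synthetic values standing in for group measurements:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.15, 6)   # e.g. fold change in the control group
lps = rng.normal(3.2, 0.40, 6)
lps_cur = rng.normal(1.8, 0.30, 6)

t, p = stats.ttest_ind(control, lps)          # two-group comparison (Student's t-test)
print(f"t-test p = {p:.3g}")

f, p = stats.f_oneway(control, lps, lps_cur)  # multi-group comparison (one-way ANOVA)
print(f"ANOVA p = {p:.3g}  (significant if p < 0.05)")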
Curcumin relieves LPS-induced inflammation and oxidative stress in mouse microglial BV2 cells
To study the effect of curcumin on the SCI-induced inflammation response and oxidative stress, we utilized mouse microglial BV2 cells stimulated by LPS to mimic SCI in vitro. According to the preliminary experiment, treatment with 1 µg mL−1 LPS for 48 h led to approximately 50% inhibition of cell viability, and 5-20 µM curcumin significantly improved the LPS-induced cell injury. Moreover, 10 µM curcumin showed the same strong protection as 20 µM (ESI Fig. 1A and B†). Thus, the concentration of LPS was fixed at 1 µg mL−1, and the concentration of curcumin at 10 µM. As shown in Fig. 1A, LPS stimulation greatly up-regulated the expression of TNF-α, IL-1β, and iNOS at the protein level, which was markedly decreased by the simultaneous treatment with LPS and curcumin. The quantification of the western blots in ImageJ indicated that the alterations in the TNF-α, IL-1β, and iNOS levels were significantly different (Fig. 1B-D). This result supports the protective effect of curcumin against LPS-induced inflammation and oxidative stress in mouse microglial cells in vitro.
Curcumin alleviates the LPS-mediated regulation of miR-137-3p and NeuroD1 in mouse microglial BV2 cells in vitro
Considering the vital functions of miR-137-3p 19,20,22,26 and NeuroD1 (ref. 14, 27 and 28) in the nervous system, we investigated their roles in the SCI cell model. In particular, their expression levels were measured in BV2 cells in response to LPS. miR-137-3p was expressed at lower levels under LPS stimulation and was elevated in the presence of curcumin (Fig. 2A). NeuroD1, in contrast, was upregulated under LPS stress compared to the control cells, and curcumin down-regulated the NeuroD1 protein level in LPS-challenged BV2 cells (Fig. 2B). These results demonstrate the aberrant expression of miR-137-3p and NeuroD1 in LPS-induced BV2 cells, suggesting their potential roles in curcumin-mediated protection in microglia after SCI.
Down-regulation of miR-137-3p abolishes curcumin-mediated protection against LPS-induced inflammation and oxidative stress in mouse microglial BV2 cells

Based on the above results, we hypothesized that curcumin up-regulates miR-137-3p expression when exerting its protective effects under LPS challenge. Consequently, we wondered whether miR-137-3p affects the role of curcumin in LPS-induced cell injury, and a series of loss-of-function assays were carried out. miR-137-3p expression was knocked down in BV2 cells by transient transfection of anti-miR-137-3p, and the transfection efficiency was determined by RT-qPCR (Fig. 3A). When simultaneously treated with LPS and curcumin, the anti-miR-137-3p-transfected cells expressed higher levels of TNF-α, IL-1β, and iNOS than the anti-NC-transfected cells (Fig. 3B-E). These outcomes indicate that miR-137-3p down-regulation abolished the curcumin-mediated negative regulation of TNF-α, IL-1β, and iNOS expression, implying that curcumin protects against LPS-induced inflammation and oxidative stress depending on the up-regulation of miR-137-3p.
Ectopic expression of NeuroD1 suppresses curcumin-mediated protection against LPS-induced inflammation and oxidative stress in mouse microglial BV2 cells
Similarly, we wondered whether NeuroD1 can affect the role of curcumin in LPS-induced cell injury. Gain-of-function analyses were conducted in BV2 cells transfected with pcDNA4.1-NeuroD1 (NeuroD1) or pcDNA4.1-empty (vector). As depicted, NeuroD1 was forcibly expressed during the treatment with LPS and curcumin (Fig. 4A). Ectopic NeuroD1 up-regulated the expression of TNF-α, IL-1β, and iNOS in BV2 cells exposed to LPS and curcumin simultaneously (Fig. 4B-E). These results show that up-regulated NeuroD1 relieved the curcumin-mediated inhibition of LPS-induced inflammation and oxidative stress; in other words, the protective effect of curcumin against LPS-induced inflammation and oxidative stress relies on the down-regulation of NeuroD1.
miR-137-3p regulates NeuroD1 expression by target binding in mouse microglial BV2 cells
Next, the regulatory relationship between miR-137-3p and NeuroD1 was investigated. Algorithm analysis with the publicly available database TargetScan Mouse was used to identify the targets of miR-137-3p. The analysis suggested that mmu-miR-137-3p has 2 potential binding sites in the NeuroD1 3′UTR: a highly conserved binding site at position 324-331 (Fig. 5A) and a poorly conserved binding site at position 1300-1306 of the NeuroD1 3′UTR (not shown). To determine whether miR-137-3p regulates NeuroD1 by binding to the 3′UTR, we performed the luciferase reporter assay, integrating into a luciferase reporter vector the sequences of the NeuroD1 3′UTR containing the binding sites for miR-137-3p, or the sequences in which the binding site was mutated (named NeuroD1-wt and NeuroD1-mut, respectively). BV2 cells were co-transfected with NeuroD1-wt/mut and miR-137-3p/NC. The luciferase activity was remarkably reduced when miR-137-3p and NeuroD1-wt were co-expressed; however, there was no difference among the NeuroD1-mut groups (Fig. 5B). The RNA immunoprecipitation (RIP) assay further verified the target binding of miR-137-3p and NeuroD1 (Fig. 5C). The western blot assay showed that NeuroD1 expression was inhibited by the miR-137-3p mimic and promoted by anti-miR-137-3p in the LPS-induced BV2 cells (Fig. 5D). Thus, these results support that NeuroD1 is a direct target of miR-137-3p.
NeuroD1 silencing reverses the effect of miR-137-3p down-regulation on inflammation and oxidative stress
Rescue experiments were performed to clarify the activity of NeuroD1 in mediating the biological action of miR-137-3p under treatment with LPS and curcumin. BV2 cells were co-transfected with anti-miR-137-3p/NC and siNeuroD1/scramble. As shown in Fig. 6A, the NeuroD1 up-regulation induced by anti-miR-137-3p was impaired by siNeuroD1. Under miR-137-3p knockdown, the synthesis of TNF-α, IL-1β, and iNOS was promoted, and this was blocked by silencing of NeuroD1 (Fig. 6B). The quantification of the western blots was performed using ImageJ (Fig. 6C-E). These results indicate that the down-regulation of NeuroD1 can reverse the promoting effect of miR-137-3p knockdown on LPS-induced inflammation and oxidative stress in mouse microglial cells in vitro.
Discussion
Treatment of SCI is currently a significant challenge in the clinic and in research worldwide. Plant-derived medicines have gained increasing attention worldwide due to their safety, high efficiency and minimal side effects. 9 Curcumin is a yellow pigment that exerts powerful anti-inflammatory and antioxidant potential. 29,30 It has been documented that curcumin protects animals and cell lines against acute toxicity. LPS mediates its toxic effects through the activation of glial cells and the generation of ROS and RNS species via NADPH oxidase activation. 31 In the present study, we used LPS to induce SCI-like cell injury in BV2 cells. Simultaneous treatment with curcumin alleviated the LPS-induced elevation of inflammatory factor synthesis and oxidation-related gene expression (TNF-α, IL-1β, and iNOS). Mechanistically, curcumin treatment rescued the LPS-induced loss of miR-137-3p expression in BV2 cells. Knockdown of miR-137-3p abrogated the inhibitory effect of curcumin on TNF-α, IL-1β, and iNOS expression. Simultaneously, we identified that NeuroD1 was negatively regulated by miR-137-3p through target binding. Ectopic expression of NeuroD1 abolished curcumin-mediated protection in LPS-induced BV2 cells, and silencing the expression of NeuroD1 reversed the pro-inflammatory effect of miR-137-3p downregulation.
In recent years, curcumin has emerged as a potential therapeutic drug in SCI treatment. After SCI, curcumin protects neurons and inhibits the inflammation response and oxidative stress. 32,33 For example, Zaky et al. 25 identified an overall altered expression profile of five types of let-7 miRNAs in LPS-induced and curcumin- and/or valproic acid-treated rats, including during self-recovery after SCI. The anti-inflammatory effects of curcumin were found in the research by Ma et al. 34 to be associated with the down-regulation of miR-155 in LPS-treated macrophages and mice. Another study from Hong et al. 35 discovered that curcumin treatment could down-regulate the expression of NF-κB/miR-155, thus inhibiting the NF-κB signaling pathway and the apoptosis of extravillous trophoblast cells. To the best of our knowledge, these were the only reports describing the role of miRNAs in curcumin function before the present study. Here, we found that curcumin induced a higher expression of miR-137-3p during LPS-induced neuroinflammation in mouse microglial BV2 cells; moreover, the anti-inflammatory effect of curcumin relied on the miR-137-3p/NeuroD1 axis. Even though numerous articles have reported that curcumin promotes the degradation of TNF-α and IL-6 stimulated by LPS, there is scarce literature on the involvement of miRNAs in the protective roles of curcumin in diseases, including SCI and LPS-induced inflammation. Therefore, the mechanism underlying the protective activity of curcumin, especially through the regulation of miRNAs, urgently needs to be elaborated.
The exact significance of the deregulated miRNAs after SCI is still obscure. Nevertheless, miR-137-3p was reported to be down-regulated after SCI, together with other miRNAs such as miR-138 and miR-124, while some other miRNAs were up-regulated. 36 It is well accepted that miRNAs exert their functions by targeting their downstream genes through binding to given regions of the 3′UTRs. miR-137-3p functions after SCI through direct negative regulation of several downstream target genes. For example, Wang et al. 22 first demonstrated that heme oxygenase-1 decreased H2O2-induced spinal cord neuron injury (primary neuronal apoptosis and necrosis) through MLK3/MKK7/JNK3 signaling by downregulating Cdc42, during which miR-137-3p was the essential factor. However, in their research, they did not identify whether miR-137-3p targeted Cdc42 in primary neurons, even though the target relationship between them was widely found in H2O2-induced cardiomyocyte apoptosis 37 and cancer cells. [38][39][40][41] Although miR-137-3p targeting Cdc42 is likely conserved across different cell types, other target genes for this miRNA have also been reported, such as CDK6 and MITF. 39,42 In SCI, miR-137-3p attenuated rat spinal cord tissue inflammation (TNF-α and IL-1) and oxidative stress (SEPN1, GPX1, iNOS, and eNOS) by targeting and modulating NeuroD4. 20 Additionally, H2O2-induced astrocyte inflammation (TNF-α and IL-6) and apoptosis in C8-D1A and C8-B4 cells were inhibited by miR-137-3p/MAPK-activated protein kinase 2 (MK2). 19 Tang et al. 43 reported that degeneration of motoneurons in the ventral horn of the spinal cord is associated with miR-137-3p, and that upregulation of miR-137-3p in the spinal cord reduced nNOS expression and motoneuron death by inhibiting its target calpain-2. In the present study, we identified a new downstream target gene, NeuroD1, in mouse microglial cells. Functionally and mechanistically, miR-137-3p down-regulation promoted the synthesis of TNF-α, IL-1β, and iNOS in BV2 cells subjected to LPS and curcumin. Therefore, we believe that miR-137-3p targeting NeuroD1 mediates the protective effect of curcumin against inflammation and oxidative stress in mouse microglia in vitro.
Neurogenic differentiation factor (NeuroD) is a family of basic helix-loop-helix (bHLH) transcription factors that plays an important role in the development of neurons and in reprogramming other cell types into neurons. 28 After SCI, NeuroD1 was distinctively increased in microglia. 14 As a target of miR-30a-5p, silencing NeuroD1 expression can block the increase in inflammatory cytokine production and oxygen-removal-related gene expression. NeuroD1 has been proposed to be essential for the survival and maturation of adult-born neurons. 44 For instance, Guo et al. demonstrated that reactive glial cells in a stab-injured cortex can be directly reprogrammed into functional neurons in vivo using retroviral expression of NeuroD1. 45 Recently, Chen et al. concluded that a lentivirus carrying the NeuroD1 gene promoted the conversion of glial cells into neurons in a spinal cord injury model. 28 In their study, NeuroD1-up-regulated cells could be reprogrammed into neural stem cells (nestin-positive), and immature (DCX-positive) and functional neurons (NeuN-positive) on days 7, 14 and 21 after SCI. In our study, NeuroD1 expression was proven to be up-regulated in LPS-induced microglial BV2 cells and was sensitive to curcumin treatment. A higher expression of NeuroD1 eliminated the protective effect of curcumin on the synthesis of TNF-α, IL-1β, and iNOS in LPS-induced injury. Thus, all these results demonstrate that NeuroD1 may have crucial effects on spinal cord injury recovery.
Extensive pharmacological effects, low toxicity and good tolerance make curcumin a hot spot for exploration in fundamental research and clinical application. However, some studies indicate that curcumin is associated with limitations such as low oral absorption, bio-distribution, and systemic bioavailability. 46,47 These limitations have prevented curcumin from being approved as a drug in the clinic. It has been shown that curcumin exerts its effects by targeting a wide range of cellular and molecular pathways, 10-13 including the NF-κB, MAPK, Nrf2/ARE, and mTOR signaling pathways. However, in SCI, many more experiments should be performed to dissect the precise molecular mechanism of the protective effect of curcumin, especially through the regulation of miRNAs. Therefore, the effect of curcumin on the expression of various miRNAs is an important area of scientific investigation. Based on our findings in this study, the signaling pathways underlying the curcumin/miR-137-3p/NeuroD1 axis in the inflammation response and oxidative stress after SCI should be revealed, even though miR-137-3p and NeuroD1 take part in the MLK3/MKK7/JNK3 and MAPK/ERK signaling pathways. 14,22 In summary, our research verified the protective activity of curcumin in an LPS-induced SCI cell injury model in mouse microglial BV2 cells. Curcumin treatment decreased LPS-induced inflammatory injury by up-regulating miR-137-3p and down-regulating NeuroD1. Moreover, miR-137-3p can directly and negatively modulate NeuroD1 expression in LPS-induced BV2 cells. This work provides novel evidence for further understanding the anti-inflammatory mechanism of curcumin and affords a theoretical basis for the development of miR-137-3p as a new biomarker for the molecular treatment of SCI.
"year": 2019,
"sha1": "b3e0ab038d1491e1a4ca3f9fa01736c2ab76aff0",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/ra/c9ra07266g",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2c4fa9c56dcad2d1c1d4183aabb675a446e57ba",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
255212509 | pes2o/s2orc | v3-fos-license | Prediction of Subsequent Contralateral Patellar Dislocation after First-Time Dislocation Based on Patellofemoral Morphologies
The subsequent dislocation of a contralateral patellofemoral joint sometimes occurs after a first-time lateral patellar dislocation (LPD). However, the anatomic risk factors for subsequent contralateral LPD remain elusive. This study included 17 patients with contralateral LPD and 34 unilateral patellar dislocators. The anatomic parameters of the contralateral patellofemoral joints were measured using CT images and radiographs that were obtained at the time of the first dislocation. The Wilcoxon rank-sum test was performed, and a binary regression model was established to identify the risk factors. Receiver operating characteristic curves and the area under the curve (AUC) were analyzed. The tibial tubercle-Roman arch (TT-RA) distance was significantly different between patients with and without contralateral LPD (24.1 vs. 19.5 mm, p < 0.001). The hip-knee-ankle (HKA) angle, patellar tilt, congruence angle, and patellar displacement were greater in the study group than in the control group (p < 0.05). The TT-RA distance revealed an OR of 1.35 (95% CI 1.26-1.44, p < 0.001) and an AUC of 0.727 for predicting contralateral LPD. The HKA angle revealed an OR of 1.74 (95% CI 1.51-2.00, p < 0.001) and an AUC of 0.797. Patellar tilt, congruence angle, and patellar displacement had AUC values of 0.703, 0.725, and 0.817 for predicting contralateral LPD, respectively. In conclusion, the contralateral patellofemoral anatomic parameters were significantly different between patients with and without subsequent contralateral LPD. An increased TT-RA distance and excessive valgus deformity were risk factors and could serve as predictors for contralateral LPD. At first-time dislocation, the abnormal position of the patella relative to the trochlea may also be an important cause of subsequent LPD.
Introduction
Lateral patellar dislocation (LPD) is a common disorder that mainly affects the unilateral patellofemoral joints of female adolescents [1,2]. Overall, 5.4-5.8% of patients with unilateral LPD will suffer dislocation of the contralateral patellofemoral joint at some point during their life [3,4]. Although the incidence of contralateral LPD is low, it has a significant influence, in terms of psychological and physical trauma, on patients who suffer subsequent contralateral LPD. Preventing subsequent contralateral LPD is crucial, but the etiology of its initiation remains elusive. In recent years, risk factors for LPD or its recurrence have been well identified, and various anatomic abnormalities with regard to patellofemoral joints have been found to be responsible for LPD [5]. Trochlear dysplasia was reported as the most significant risk factor in the production of patellar instability, as it decreases patellotrochlear congruence, with a morbidity of more than 90% in patients with LPD [6]. In addition, some specific deformities may occur concurrently, such as excessive lateralization of the tibial tubercle, femoral or tibial rotational malformation, coronal malalignment, patella alta, and patellar tilt, which could aggravate patellar instability by causing imbalanced forces around the patellofemoral joints [7][8][9].
To the best of our knowledge, anatomic risk factors for subsequent contralateral LPD remain unclear; moreover, even research into contralateral-patellofemoral-joint-related morphologies is scarce [10,11]. Patients with skeletal immaturity at first-time dislocation have the highest risk of contralateral LPD [4]. The occurrence of ipsilateral LPD was reported to be an important risk factor for contralateral dislocation [12]. Trochlear dysplasia of dislocated knees can sometimes predict contralateral LPD after first-time dislocation [4]. Dejour et al. [13] found that dislocators potentially had trochlear dysplasia in the contralateral unaffected knees. In addition, Simonaitytė et al. [10] reported a 24.1% incidence of trochlear dysplasia in contralateral knees. Our previous study revealed that patella alta and trochlear dysplasia could be traced in contralateral un-dislocated knees [11]. Demehri et al. [14] uncovered anatomic abnormalities in contralateral asymptomatic patellofemoral joints in patients with unilateral LPD.
The hypothesis of this study was that patients who suffered subsequent contralateral LPD after a first-time dislocation were characterized by more severe contralateral-patellofemoral-joint-related skeletal deformities than patients without contralateral LPD. Given the insufficient knowledge in the literature, the purpose of this study was twofold: the first was to verify the difference in the anatomic parameters of the unaffected knees at first-time dislocation between patients with and without contralateral LPD; the second was to identify any anatomic variations that could contribute to or predict subsequent contralateral LPD.
Study Population
Approval from the Institutional Review Board (IRB) of our hospital was obtained (IRB No. 2022-K26), and the requirement for informed consent was waived by the IRB. A total of 185 consecutive patients admitted to our institution with unilateral LPD and asymptomatic contralateral patellofemoral joints from January 2015 to December 2020 were identified. The inclusion criterion for this retrospective case-control study was as follows: patients who suffered contralateral LPD by November 2022. In total, 19 patients were considered eligible for inclusion in this study. The exclusion criteria were as follows: patients with ipsilateral traumatic or habitual patellar dislocation; patients who did not have a simultaneous CT scan of the contralateral hip, knee, and ankle joints at the time of their first dislocation; patients without weight-bearing full-leg anteroposterior radiographs or lateral knee radiographs; patients with a history of contralateral bone fracture or surgery that might influence the measurements; and patients with severe epiphysitis of the femur.
Among the nineteen patients, two subjects who lacked the necessary CT images and radiographs were excluded. As a result, 17 patients with subsequent contralateral LPD were designated as the study group. We conducted 1:2 matching for the patients without contralateral LPD during the same period following the above exclusion criteria. Then, 34 patients matched by age (age at first-time dislocation) and sex, who did not have any contralateral-patellofemoral-joint-related symptoms, were included in the control group. The subjects' inclusion flowchart is shown in Supplementary Figure S1. By the time we submitted this article, the mean follow-up time of the 185 consecutive patients with unilateral LPD was 45 ± 20 months. All of the patients were regularly followed up.
Computed Tomography Technique
CT images were obtained using a scanner (Somatom Sensation, Siemens Healthcare, Forchheim, Germany) in our hospital within seven days after the first dislocation, ranging from the bilateral ilium to the toes. Patients were in the supine position with the bilateral lower limbs in full extension and the foot positioned in 90° flexion. The scanning parameters were as follows: tube voltage, 130 kVp; tube current, 110-140 mAs; scanning layer thickness and layer spacing, 1 mm; matrix, 512 × 512 pixels. The field of view varied with the individual characteristics of the patients, ranging from 220 to 450 mm.
Radiological Assessment
A total of 51 patients (17 in the study group and 34 in the control group) were available for radiological assessment. CT images of the contralateral lower limbs, weight-bearing full-leg anteroposterior radiographs, or lateral knee radiographs at the first instance of dislocation were retrospectively collected. Trochlear dysplasia, tibial tubercle lateralization, segmental femoral anteversion, knee joint rotation, tibial torsion, patellar tilt, patellar height, coronal malalignment, congruence angle, lateral patellar displacement, and posterior condylar angle (PCA) were measured by an experienced orthopedist and a well-trained radiologist in a blinded and randomized fashion using the picture archiving and communication system (PACS). In the event of any major disputes about the measuring results, especially the Dejour classification of trochlear dysplasia, a discussion with another experienced orthopedist was conducted until a consensus was reached. All of the measurements were conducted two weeks later to assess intra-observer reliability.
Trochlear Dysplasia
Dejour classification, a commonly used four-grade system for evaluating trochlear dysplasia, consists of Type A: shallow trochlea; Type B: flat or convex trochlea; Type C: a convex medial trochlear wall; and Type D: cliff pattern [15]. Lateral trochlear inclination (LTI) and trochlear depth were reported as reliable and objective parameters for quantifying trochlear dysplasia [16]. LTI was measured as the angle between a line tangent to the lateral aspect of the trochlea (LTF) and the surgical transepicondylar axis (SEA) (Figure 1A). The trochlear depth was the depth formed between the medial and lateral femoral trochlear facets (Figure 1B).
Tibial Tubercle Lateralization
Tibial tubercle lateralization was calculated by the tibial tubercle-Roman arch (TT-RA) distance that we proposed previously, which was demonstrated to be more reliable than the tibial tubercle-trochlear groove (TT-TG) distance [17]. Briefly, the highest portion of the Roman arch was identified on the axial CT slice showing intact femoral condyles, and the middle portion of the bony tibial tubercle at the insertion of the patellar tendon was regarded as the landmark for measuring the TT-RA distance (Figure 2). Figure 1 caption (fragment): L, M, and G denote the vertical distances to the posterior condylar reference line (PCRL); trochlear depth is calculated according to the formula [(L + M)/2 − G]. Figure 2 caption (fragment): the line perpendicular to the PCRL and parallel to the RAL is drawn through the TT (TTL), and the distance between the RAL and the TTL is defined as the TT-RA distance.
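Geometrically, the measurement reduces to projecting the two landmarks onto the PCRL direction, since the RAL and TTL are parallel lines perpendicular to the PCRL; a sketch with invented axial-slice coordinates (mm), purely for illustration:

import numpy as np

pcrl_a = np.array([0.0, 0.0])    # posterior point of the medial condyle
pcrl_b = np.array([60.0, 2.0])   # posterior point of the lateral condyle -> PCRL
ra = np.array([28.0, 35.0])      # highest portion of the Roman arch
tt = np.array([49.0, 30.0])      # middle portion of the bony tibial tubercle

# The RAL and TTL are parallel, so their separation equals the projection
# of (TT - RA) onto the unit vector along the PCRL.
u = (pcrl_b - pcrl_a) / np.linalg.norm(pcrl_b - pcrl_a)
tt_ra = abs(float(np.dot(tt - ra, u)))
print(f"TT-RA distance: {tt_ra:.1f} mm")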
Femoral Anteversion
Femoral malrotation was assessed using a recently proposed method [18]. Briefly, segmental femoral torsion parameters (total, neck, mid, and distal torsion) were measured using four independent lines: the proximal femoral head-neck axis, the femur-lesser-trochanter line, the tangent of the distal/posterior femur, and the SEA (Figure 3). The total femoral anteversion was defined as the angle between the proximal femoral head-neck axis and the SEA, with a value of more than 20.4° indicating pathology.
Knee Joint Rotation and Tibial Torsion
Knee joint rotation is the angle between the posterior femoral condylar reference line (PCRL) and the dorsal tibial condylar line (Supplementary Figure S2) [19]. Tibial torsion was assessed by measuring the angle between the dorsal tibial condylar line and a line through the medial and lateral malleoli [20]. The PCA was assessed as the angle formed between the SEA and the PCRL, and the length of the SEA was defined as the transepicondylar width (TEW) [18].
Patellar Height and HKA Angle
The Insall-Salvati index was calculated to assess patellar height; it was defined as the ratio between the length of the articular surface of the patella and the length of the patellar tendon on the lateral plain radiograph, with a value of >1.2 indicating patella alta (Supplementary Figure S3) [7]. Coronal malalignment was evaluated by the hip-knee-ankle (HKA) angle, i.e., the angle between the femoral and tibial mechanical axes on weight-bearing full-leg anteroposterior radiographs [21]; a value of >0° was taken to indicate valgus deformity in this study.
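Both parameters reduce to a ratio and a signed angle; a sketch with hypothetical measurements. Note that the sketch uses the conventional Insall-Salvati formulation (patellar tendon length divided by patellar length), which is the formulation the >1.2 patella-alta cutoff refers to:

def insall_salvati(tendon_len_mm: float, patella_len_mm: float) -> float:
    # Conventional formulation: patellar tendon length / patellar length
    return tendon_len_mm / patella_len_mm

ratio = insall_salvati(52.0, 40.0)   # hypothetical lateral-radiograph lengths (mm)
print(f"Insall-Salvati = {ratio:.2f}", "-> patella alta" if ratio > 1.2 else "-> normal height")

hka_deg = 1.9  # hypothetical HKA angle; values > 0 deg are read as valgus in this study
print("alignment:", "valgus" if hka_deg > 0 else "varus/neutral")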
Patella Position Relative to the Trochlea
Patellar tilt was measured as the angle between the PCRL and the patellar width line (Figure 4A) [22]. The congruence angle was the angle between the line bisecting the sulcus angle and the line connecting the lowest portion of the sulcus to the apex of the patellar ridge (Figure 4B) [23]. Patellar displacement, with positive values indicating lateral translation, was defined as the distance between the patellar medial edge and the medial femoral condyle (Figure 4C) [24].
Figure 3 caption (fragment): (A) The proximal femoral head-neck axis is drawn through the center of the femoral head and neck (Line a). (B) The femur-lesser-trochanter line is drawn through the center of the femur and the midpoint of the lesser trochanter (Line b). (C) Line c is tangent to the posterior aspect of the femur on the slice just above the attachment of the gastrocnemius. (D) A line through the sulcus of the medial epicondyle and the prominence of the lateral epicondyle (SEA) is shown. The angles formed between Line a and Line b, Line b and Line c, Line c and the SEA, and Line a and the SEA are regarded as the neck torsion, mid torsion, distal torsion, and total femoral torsion, respectively.
Statistical Analysis
The average value of each parameter measured by both observers was used for the final statistical analysis, which was conducted independently via SPSS software (Version 21.0; IBM Corp., Armonk, NY, USA) by a well-trained orthopedist. The TT-RA distance was normalized by the TEW to reduce individual differences. The inter- and intra-observer correlation coefficients (ICCs) and weighted kappa analysis were computed, with a value of >0.75 indicating excellent agreement. Because of the small sample size of this study, all continuous data are presented as the median and interquartile range (IQR). The Chi-square test and Wilcoxon rank-sum test were performed to identify the differences in anatomic parameters between the two groups. A binary logistic regression model was established to identify the anatomic risk factors for contralateral LPD. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to evaluate the diagnostic ability of each parameter for subsequent contralateral LPD after a first-time dislocation. The Youden index of any parameter with an AUC of more than 0.7 was calculated to identify its sensitivity and specificity.
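A compact sketch of the same pipeline on synthetic data; sklearn's logistic regression stands in for the SPSS model, and the group means and SDs are invented to mimic the TT-RA distances reported below:

import numpy as np
from scipy.stats import ranksums
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
ttra = np.r_[rng.normal(24.1, 3.0, 17), rng.normal(19.5, 3.0, 34)]  # cases, controls
y = np.r_[np.ones(17), np.zeros(34)]

print("Wilcoxon rank-sum p =", ranksums(ttra[y == 1], ttra[y == 0]).pvalue)

model = LogisticRegression().fit(ttra.reshape(-1, 1), y)
print("OR per mm =", float(np.exp(model.coef_[0, 0])))   # exponentiated coefficient

fpr, tpr, thr = roc_curve(y, ttra)
j = np.argmax(tpr - fpr)                                  # Youden index J = Se + Sp - 1
print(f"AUC = {roc_auc_score(y, ttra):.3f}, cutoff = {thr[j]:.1f} mm, "
      f"sensitivity = {tpr[j]:.2f}, specificity = {1 - fpr[j]:.2f}")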
Post hoc analysis was performed using G-Power software (version 3.1.9.4, Heinrich-Heine-Universitat Dusseldorf, Dusseldorf, Germany). For a large effect size of 1.01 according to the TT-RA distance in the two groups, a power of 0.95 was calculated (n1 = 17, n2 = 34; alpha, 0.05).
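The same post hoc computation can be reproduced outside G-Power; a sketch with statsmodels (two-sample t-test power for d = 1.01, n1 = 17, n2 = 34):

from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
two_sided = solver.power(effect_size=1.01, nobs1=17, ratio=34 / 17, alpha=0.05)
one_sided = solver.power(effect_size=1.01, nobs1=17, ratio=34 / 17, alpha=0.05,
                         alternative="larger")
print(two_sided, one_sided)  # ~0.92 two-sided; the one-sided value approaches the 0.95 reported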
Results
In total, 10.3% (19/185) of patients suffered contralateral LPD after first-time dislocation, with a mean follow-up time of 45 months; 17 patients with subsequent contralateral LPD and 34 patients without contralateral LPD or any patellofemoral symptoms were included in this study. The demographic data of the patients in the two groups are shown in Table 1. The interval between the first-time dislocation and the contralateral dislocation varied, ranging from 13 months to 75 months. The follow-up time for the control group was 41 (IQR, 14) months. Overall, 35.3% of patients in the study group had skeletal immaturity at their first-time dislocation. The ICCs and 95% confidence intervals (CIs) of each measurement are shown in Table 2. All the measurements showed good to excellent inter- and intra-observer agreement (ICCs > 0.75). The differences in the anatomic parameters between the two groups are shown in Table 3. Severe trochlear dysplasia (Types B-D) represented a larger proportion of the study group than of the control group (94.1% vs. 88.2%, p < 0.001). LTI and trochlear depth were smaller in the study group (11.9° and 3.8 mm, respectively) than in the control group (13.9° and 4.1 mm, respectively) (p < 0.001). The TT-RA distance and the TT-RA/TEW ratio were greater in patients with contralateral LPD (24.1 mm and 32.7%, respectively) than in patients without contralateral LPD (19.5 mm and 27.6%, respectively). The median value of the HKA angle in the study group was 1.9°, compared to 0.8° in the control group (p < 0.001). Patellar tilt, congruence angle, and patellar displacement were greater in the study group than in the control group (p < 0.001).
ROC curves were analyzed to calculate the diagnostic capacity of these parameters for subsequent contralateral LPD (Table 4 and Figure 6). LTI, trochlear depth, distal femoral torsion, and tibial rotation had AUCs of <0.7 (p > 0.05). The TT-RA/TEW had an AUC of 0.741 for contralateral LPD, with a cutoff value of 29.5% (82.3% sensitivity and 61.8% specificity, p = 0.006); the same was true for the results of the TT-RA distance. Patellar displacement and the HKA angle revealed significant AUCs of 0.817 and 0.797, with cutoff values of 9.2 mm (sensitivity 88.2% and specificity 64.7%) and 1.3° (sensitivity 82.4% and specificity 70.6%), respectively (p < 0.001). Patellar tilt and the congruence angle had AUCs of 0.703 and 0.725 for predicting contralateral LPD, respectively.
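The cutoff-selection step behind those sensitivities and specificities can be sketched as follows: compute the ROC curve, then take the threshold maximizing the Youden index (sensitivity + specificity - 1). The values below are invented, not the study's measurements.

```python
# Cutoff selection via the Youden index on an ROC curve; the outcome
# labels and patellar-displacement values below are invented.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y = np.array([1, 1, 1, 1, 0, 0, 0, 0])       # contralateral LPD yes/no
disp_mm = np.array([10.2, 9.5, 11.0, 9.8,
                    8.1, 7.4, 9.0, 6.9])     # patellar displacement (mm)

auc = roc_auc_score(y, disp_mm)
fpr, tpr, thr = roc_curve(y, disp_mm)
j = tpr - fpr                                # Youden index per threshold
best = int(np.argmax(j))
print(f"AUC = {auc:.3f}; cutoff = {thr[best]:.1f} mm; "
      f"sensitivity = {tpr[best]:.2f}; specificity = {1 - fpr[best]:.2f}")
```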
Discussion
The most important findings of this study are as follows. The contralateral patellofemoral anatomic parameters were significantly different between patients with and without subsequent contralateral LPD; these parameters include trochlear dysplasia, valgus malalignment, and tibial tubercle lateralization. The TT-RA distance and HKA angle are verified as risk factors and can serve as potential predictors for subsequent contralateral LPD. In addition, although contralateral patellofemoral joints are asymptomatic at first-time dislocation, the abnormal positioning of the contralateral patella does exist, and has characteristics such as excessive patellar tilt, the congruence angle, and patellar displacement, which are implicated in subsequent contralateral LPD.
Skeletal abnormalities have been demonstrated to be implicated in LPD. Trochlear dysplasia, excessive lateralization of the tibial tubercle, femoral anteversion, and coronal malalignment have been reported as the main risk factors for LPD in that they reduce patellotrochlear congruence and increase the lateral vector force on the patella [15,25]. Previous literature has reported that 5.4% to 5.8% of patients with unilateral LPD could suffer subsequent contralateral LPD after first-time dislocation [3,4]. In this study, the incidence of contralateral LPD was 10.3%. Severe anatomic abnormalities of the contralateral patellofemoral joints may be one reason for this higher incidence. It is therefore of great significance to explore the role of contralateral anatomic abnormalities in subsequent LPD, for both disease recognition and prevention.
To the best of our knowledge, the literature regarding the morphologies of contralateral patellofemoral joints in patients with unilateral LPD is scarce. Anatomic abnormalities of contralateral asymptomatic patellofemoral joints have been identified in patients with unilateral LPD [14]. The contralateral asymptomatic knees had increased patellar heights and excessive lateralization of the tibial tubercle in patients with unilateral patellar dislocation [11]. Dejour et al. [13] explained that a large proportion of patients with trochlear dysplasia in one knee also had trochlear dysplasia in the contralateral, intact knee. In addition, Simonaitytė et al. [10] reported a 24.1% and 84.5% incidence of trochlear dysplasia and patella alta in contralateral knees, respectively.
Previous literature has focused on risk factors affecting ipsilateral LPD recurrence after first-time dislocation [26], but the risk factors for subsequent contralateral LPD have not been adequately studied, especially anatomic ones. Patients with skeletal immaturity at the first-time dislocation have the highest risk of contralateral LPD [4]. Parikh et al. [27] reported that the presence of trochlear dysplasia in the affected knees had an OR of 8.7 for subsequent contralateral patellar instability. Christensen et al. [4] considered trochlear dysplasia of the ipsilateral dislocated knees a risk factor for contralateral LPD after first-time dislocation. It remains unclear why trochlear dysplasia in ipsilateral knees could predict contralateral LPD; the significant correlations between bilateral skeletal features may be one reason [11]. In this study, trochlear dysplasia of contralateral knees was found to be more severe in patients who suffered subsequent contralateral LPD than in patients without contralateral LPD; it was considered a risk factor but was of limited predictive value for subsequent contralateral LPD (AUC < 0.7).
The TT-RA distance and HKA angle reliably reflect tibial tubercle lateralization and coronal malalignment, respectively, both of which can contribute to patellar instability by increasing patellar lateralization and the pressure on the lateral trochlear facet [28,29]. Christensen et al. [4] reported that the TT-TG distance in the affected knees could not predict subsequent contralateral LPD. In addition, the TT-TG distance in the affected knees was not correlated with contralateral anatomic abnormalities in patients with unilateral patellar dislocation [11]. In this study, an excessive TT-RA distance in the contralateral knees at first-time dislocation was verified as a risk factor for subsequent contralateral LPD, with fair predictive ability. At a TT-RA distance of 20.0 mm, the sensitivity for subsequent contralateral LPD was 82.3%, and for every 1 mm increase in the TT-RA distance, the risk increased 1.4 times. To the best of our knowledge, this study is the first to report the HKA angle of contralateral knees in patients with subsequent LPD. At an HKA angle of 1.3°, sensitivity for predicting subsequent contralateral LPD was high (82.4%); in addition, for every 1 degree increase in the HKA angle, the risk of subsequent contralateral LPD was 1.7-fold higher.
Rotational malformations of the lower extremities, such as excessive femoral anteversion and external tibial rotation, are likely to be identified in patients with LPD [30] and have been reported as risk factors for patellar instability [31]. We have previously demonstrated that the differences in femoral anteversion and tibial rotation between ipsilateral and contralateral knees are not significant in patients with unilateral LPD [11]. In this study, except for tibial rotation, differences in the rotational parameters between patients with and without subsequent contralateral LPD were not significant, indicating that lower limb malrotation may exist independently of contralateral LPD. On the other hand, external tibial rotation could not serve as a risk factor or predictor for subsequent contralateral LPD because of relatively small AUC and OR values; this warrants further investigation in a larger cohort of patients.
Abnormal positioning of the patella relative to the femoral trochlea, such as excessive patellar tilt, congruence angle, and patellar displacement, has been reported to be implicated in LPD by reducing patellotrochlear congruence and aggravating patellar instability [22][23][24]. These parameters were significantly different between patients with and without contralateral LPD and were verified as predictive factors for contralateral LPD, with ORs of 1.6 to 2.7. As it stands, even though contralateral patellofemoral joints were asymptomatic, abnormal positioning of the contralateral patella did exist at the time of first dislocation, which may contribute to contralateral LPD. With regard to patellar height, Parikh et al. [27] considered ipsilateral patella alta a risk factor for contralateral LPD. Based on our previous research, the correlations between the patellar height in ipsilateral affected knees and the anatomic abnormalities in contralateral asymptomatic knees were not significant [11]. Moreover, Simonaitytė et al. [10] demonstrated that the mean value of the Blackburne-Peel index was 1.2 in contralateral intact knees, compared to 1.3 in dislocated knees (p > 0.05). In this study, contralateral patellar height measured using the Insall-Salvati index was not significantly different between patients with and without contralateral LPD (1.32 vs. 1.34), indicating that it is of limited value for predicting subsequent contralateral LPD. These results should be verified in a study with a larger sample size.
Our study had some limitations. First, this study only included anatomic parameters; the roles of clinical factors in contralateral LPD, such as injury mechanism, exercise intensity, and ligamentous laxity, were not discussed. Second, the sample size in the study group was relatively small due to the low incidence of subsequent contralateral LPD, meaning that a study including a larger population is necessary. Third, patients in the control group may suffer contralateral LPD in the future, which could result in a potential bias in the results; further follow-up is necessary. Fourth, patients with skeletal maturity and immaturity were not completely separated, and the roles of age and gender at first-time dislocation in subsequent contralateral LPD were not reported in this study. Fifth, the anatomic parameters of the ipsilateral patellofemoral joints at the first-time dislocation in patients with and without contralateral LPD were not compared. Sixth, whether the abnormal anatomic parameters could influence the time to subsequent contralateral LPD warrants further investigation.
Conclusions
Contralateral patellofemoral anatomic parameters were significantly different between patients with and without subsequent contralateral LPD. Increased TT-RA distance and excessive valgus deformity were risk factors and could serve as predictors for contralateral LPD. At first-time dislocation, the abnormal position of the patella relative to the trochlea may also be an important cause of subsequent LPD.
Eco-climatic challenges and innovations: Navigating the future of rubber plantations
Rubber plantations are essential to the economies and ecosystems of many tropical regions, but they face significant threats from environmental and climatic factors. This review paper examines the impact of soil quality, water availability, pest and disease pressure, and climatic variability on rubber cultivation.
Introduction
Natural rubber (NR) production and processing from the Hevea (rubber or Pará) tree is a significant economic activity in numerous nations, providing livelihoods for over 40 million people globally. Over five thousand items rely on natural rubber as a key component (Pinizzotto et al. 2021) [16]. More opportunities for using NR will arise from the need to replace energy-intensive and non-renewable materials. For NR to reach its full potential and benefit the people who depend on it, it is crucial to ensure it is produced sustainably. In this article, we go over two NR sustainability hotspots and offer some solutions to make rubber production more sustainable so that it can be a component of the circular bio-economy. In the last 30 years, the amount of land used for rubber cultivation has increased by 1.8 times globally. The rubber market in mainland Southeast Asia has grown at a faster rate than any other commodity, and it has grown even more rapidly in nations that are not typically known for growing rubber. Observers have voiced worries about the effects of expanding rubber cultivation on ecosystems and livelihoods. Research commonly compares these impacts to those of the prior land cover, which is often a natural forest (Gitz et al., 2022) [1]; consequently, such research focuses on the effects of deforestation rather than on Hevea plantings themselves. The impacts of converting varied land uses other than natural forest to Hevea production have not been well studied.
Rubber and natural resources
According to Selvalakshmi et al. (2020) [2], the amount of extra land needed to supply the demand for rubber by 2024 might range from 4.3 million to 8.7 million ha. Since these predictions were issued, the world's rubber market has shifted, with NR demand marginally down and predicted to stay that way until 2024. Many people are worried that expanding rubber plantations may lead to the monoculture conversion of very biodiverse landscapes such as woodland and mosaic landscapes. As with any agricultural monoculture that supplants more diversified systems, this worry is not unique to rubber. Shao-jun et al. (2015) [3] found that species composition changes and species richness diminishes when forests are converted to rubber monoculture (Selvalakshmi et al., 2020) [2].
Plantations with more complex habitat structures support more biodiversity than monocultures, and agroforestry systems allow for the survival of certain forest species that would otherwise perish in monocultures. Further research is needed to determine the effects of plantation management practices (such as pest control) on ecosystems, the interactions between species in complex systems, and the comparative effects on biodiversity of different spatial structures (Shao-jun et al., 2015) [3].
Importance of studying environmental and climatic effects
The productivity and sustainability of rubber plantations are intricately linked to environmental and climatic factors. Soil quality, water availability, pest and disease pressure, and weather patterns play critical roles in determining the health and yield of rubber trees. In recent years, climate change has emerged as a significant threat to rubber plantations, with rising temperatures, altered precipitation patterns, and increased frequency of extreme weather events posing challenges to traditional rubber-growing regions (Ahrends et al., 2015) [4].
Economic Perspectives
For many rural residents, rubber plantations provide a means of subsistence and economic advancement. Ecosystems damaged by shifting cultivation can be restored with the help of these plantations. People's living conditions are enhanced through community-based rubber plantations, and because of the financial incentives and benefits, many farmers have shifted their focus from conventional farming to rubber plantations. Ling et al. (2022) [5] found that biogas can be made from natural rubber processing waste and used to dry the rubber and for household cooking. In numerous nations, however, habitat and stream hydrology are severely impacted by water contamination caused by rubber processing enterprises. Fish, prawns, turtles, shellfish, and edible flora along stream banks undergo drastic declines as a result, with detrimental impacts on livelihoods and food safety. Price swings, insecurity in food supplies, and illness all pose risks to the income of smallholder farmers. The income and livelihoods of people in various locations are affected by the environmental concerns arising from intensive rubber cultivation (Ling et al., 2022) [5].
Ecological Significance
Rubber plantations have a significant ecological impact on their surrounding environments. On the positive side, rubber trees sequester carbon, which helps mitigate climate change by decreasing atmospheric CO2 levels. Rubber plantations can also contribute to soil stabilization and watershed protection, preventing erosion and maintaining water quality. However, the expansion of rubber plantations has raised ecological concerns, particularly regarding biodiversity loss and habitat destruction (Chattopadhyay, 2021) [6]. Converting natural forests into rubber monocultures reduces habitat availability for wildlife and disrupts ecological networks. This transformation can lead to the decline of native species, including endangered flora and fauna. Additionally, the use of agrochemicals in rubber plantations can contaminate soil and water resources, posing risks to both ecosystems and human health. The history and global distribution of rubber plantations underscore their economic and ecological significance. While rubber plantations are vital for the livelihoods of millions and support various industries, they also present ecological challenges that need to be addressed through sustainable management practices (Panda et al., 2020) [7]. Understanding the dual impact of rubber plantations is essential for ensuring their long-term viability and minimizing their environmental footprint. As global demand for natural rubber continues to rise, concerted efforts are needed to promote sustainable rubber cultivation and mitigate the adverse effects on biodiversity and ecosystems.
Environmental factors affecting rubber plantations
Effects on water availability
Since rubber plants have a higher water requirement than other plants, they deplete groundwater supplies and cut into the share available to other plants. Many latex and rubber processing facilities pollute nearby soil and groundwater by dumping partially treated or untreated wastewater into the environment. More and more land is being covered by rubber plantations as part of tribal rehabilitation efforts, which is drastically reducing biodiversity and may have serious effects on regional water supplies. The latex material, which includes water, sugar, proteins, resins, and rubber, can lead to water contamination; the main sources are the water used for cleaning and the wastewater produced by processing activities and the rubber sheet manufacturing process. Because they release highly polluting compounds with a high biological oxygen demand and ammonia content, the waste products from rubber processing are considered heavily polluting (Hazir et al., 2020) [8]. The use of acid in the latex thickening, preserving, and blending procedures results in acidic waste liquid being discharged from the rubber manufacturing plant, and the use of sulfuric acid in latex coagulation yields an effluent with a high concentration of sulphate. Monocultures, the common method of growing rubber, deplete soil nutrients and necessitate extensive use of fertilizers and pesticides; the natural ecosystem may suffer as a result, through water and soil contamination. In addition to harming the environment and water resources, processing factories rely on energy- and water-intensive procedures. Where topography is well managed, gully formation under rubber plantations is limited, although the risk of landslides increases. According to Prasada et al. (2021) [9], the interaction between soil nitrogen and organic carbon is lower in rubber plantation areas than in forest land. Because rubber plantations extensively block sunlight, water levels tend to decline in many locations where rubber is dominant. The amount of rainfall intercepted by the rubber tree canopy varies with the seasons, and rubber trees do not have a very high water retention capacity. While water loss from rubber plantations may be small, nutrient loss is quite large. The rubber tree has a drying effect on wet soil and a tendency to slow the flow of water, which affects the regulation of the hydrological cycle (Prasada et al., 2021) [9].
Soil quality and composition
Rubber trees prefer loamy soils with a balanced mixture of sand, silt, and clay. Such soils provide good aeration, drainage, and root penetration. Heavy clay soils, which retain too much water, or sandy soils, which drain too quickly and lack nutrient-holding capacity, are less suitable for rubber cultivation. Nutrient-rich soils support vigorous growth and high latex yields. Regular soil testing and fertilization are necessary to maintain soil fertility. Organic matter, such as compost or green manure, can enhance soil structure and nutrient content, while the use of chemical fertilizers should be carefully managed to avoid soil degradation and environmental pollution (Golbon et al., 2018) [10]. Deep soils, with a minimum depth of 1.5 meters, allow for extensive root development, which is crucial for the stability and health of rubber trees; shallow soils can restrict root growth, leading to poor tree anchorage and reduced access to nutrients and water. Soil erosion is a significant concern in rubber plantations, especially in hilly terrain. Erosion can deplete the soil of essential nutrients and organic matter, reducing its fertility. Soil erosion can be avoided and soil health preserved by putting soil conservation techniques such as contour planting, terracing, and cover cropping into practice.
Effect on soil health
Numerous researchers have reported that rubber plantations negatively affect soil health. According to Mangmeechai (2020) [11], soil erosion is a worldwide issue to which rubber plantations can contribute, although an established rubber stand can also significantly lessen soil erodibility. The shade cast by rubber trees lowers soil temperatures, which slows the oxidation of soil organic matter and aids its build-up. Rainfall, by contrast, speeds up organic matter breakdown, releases nutrients, and disrupts the aggregate structure of the surface soil (Mangmeechai, 2020) [11].
Rubber and Soil: Problems with management
The clearing of secondary forests to create space for rubber plantations is described in another study conducted in Xishuangbanna Prefecture. Trees were cut down from sloped ground and terraces were constructed by hand in this case; bulldozers are commonly used to make terraces in other regions. Both methods involve digging down to the subsoil, which exposes the less-absorbent soils below. Soil compaction is another way mechanical terracing decreases water absorption. Surface erosion may speed up, natural streamflows may be interrupted, stream sediments may be elevated, and the likelihood of landslides may increase (Gitz et al., 2020) [1]. Clearing terraces of vegetation before new trees' roots have developed to the point where they can support soil or shield it from rain increases the likelihood of problems. When the rubber trees were young, farmers in Xishuangbanna would plant them and then intercrop them with upland crops such as rice, corn, groundnuts, and beans for the first four years. Chemical fertilizers were spread twice yearly or less frequently. The understory vegetation was either pulled out by hand or treated with herbicides after the rubber trees had reached maturity. One reliable indicator of soil condition is the amount of carbon in the soil, which is the most important component of soil organic matter. When soil contains a lot of organic matter, it has better physical qualities, such as being able to hold water and being more stable, and plants rely on the nutrients and trace elements provided by organic matter in the soil to flourish (Yu et al., 2019) [13]. Soil carbon levels dropped sharply in the first five years following the research site's conversion from secondary forest to rubber plantation. After this steep decline, the rate of decline leveled off around 20 years after the rubber trees were first planted; at this time, approximately 68% of the carbon initially present in the soil's uppermost layer under secondary forest conditions remained. Taking a regional perspective, Meijide et al. (2018) [14] found that disruptive soil preparation and management, rather than rubber trees themselves, are the main culprits in the soil degradation caused by rubber plantations. Soil organic carbon losses and severe degradation of soil quality are linked to mechanical terracing. People living in rural areas may be at risk of chemical contamination from pesticides and fertilizers that seep into surface and groundwater sources. Furthermore, without organic soil amendments, the consistent use of inorganic fertilizers can lead to soil acidification and, in the long run, a dramatic reduction in soil quality. In the mountainous region of mainland Southeast Asia, mechanical terracing and extensive fertilization are used only on rubber plantations, even though they might theoretically be used for numerous crops (Meijide et al., 2018) [14].
Pests and Diseases
Pests and diseases are significant threats to rubber plantations, potentially causing substantial yield losses and tree mortality. Effective management of these threats is crucial for maintaining healthy rubber plantations and ensuring sustainable production. Several pests can infest rubber trees, including insects, mites, and rodents. Common pests include the rubber tree lace bug (Leptopharsa heveae), the mealybug (Dysmicoccus brevipes), and various species of leaf-eating caterpillars [15]. These pests can cause defoliation, reduced photosynthesis, and weakened trees, ultimately affecting latex production. Rubber plantations are also susceptible to various diseases caused by fungi, bacteria, and viruses. The most devastating include South American Leaf Blight (SALB), caused by the fungus Microcyclus ulei; SALB can cause severe defoliation and tree death, posing a major threat in South America and other rubber-growing regions. Another significant disease is Powdery Mildew, caused by the fungus Oidium heveae, which leads to white, powdery fungal growth on leaves, reducing photosynthesis and tree vigor. Corynespora Leaf Fall (CLF), caused by the fungus Corynespora cassiicola, results in premature leaf drop and can significantly reduce latex yields. Integrated Pest Management (IPM) is a sustainable approach to managing pests and diseases that combines biological, cultural, mechanical, and chemical control methods. IPM strategies for rubber plantations include biological control, which utilizes natural predators, parasites, or pathogens to control pest populations (for example, introducing predatory insects to manage lace bug infestations). Cultural practices involve implementing good agricultural practices such as crop rotation, proper spacing, and sanitation to reduce pest and disease incidence. Chemical control involves using pesticides judiciously and as a last resort, ensuring that applications are targeted and follow recommended guidelines to minimize environmental impact and resistance development. Developing and planting disease-resistant rubber tree varieties is a long-term strategy to combat major diseases. Breeding programs focus on selecting varieties with genetic resistance to specific pathogens, reducing the need for chemical control measures and enhancing plantation sustainability. This multifaceted approach is essential for maintaining the health and productivity of rubber plantations in the face of environmental challenges.
Breeding and selection of high-yielding and disease and pest-resistant clones
Breeding and genomic marker-assisted selection to create clones that are resistant to climate change and produce abundant crops is another strategy. The NR business was established on a relatively limited genetic foundation; the majority of the Hevea brasiliensis trees present in Asian plantations today are descendants of just 22 seedlings gathered by Henry Wickham from the Brazilian Amazon Basin in the 1800s. Commercial rubber production began in Malaysia and spread to other nations that grew the tree's seeds. Subsequently, expeditions were launched to the Amazon in search of fresh germplasm to increase genetic variety and production. Opportunities for adaptation can be found in expanding the genetic base of cultivated rubber: wild germplasm carries genes that could be useful for breeding rubber to resist climate change stress. While studies in China revealed substantial variation among clones in their susceptibility to hurricane damage, recent work in Thailand demonstrated promising genetic variability among the current commercial clones for breeding drought-tolerant clones. Employing SNP (single-nucleotide polymorphism) markers for new genetic selection from various Hevea species, including H. nitida, H. spruceana, and H. brasiliensis, researchers might further improve the possibility of employing rubber germplasm for climate change adaptation. Modern genomic tools allow the breeding process to be accelerated. For testing and global clone exchanges, international collaboration is crucial.
Climatic Factors Affecting Rubber Plantations
Rubber systems and climate change
According to Pinizzotto et al. (2021) [16], there are numerous ways in which climate change interacts with natural rubber systems: they can play a role in the production or absorption of greenhouse gas emissions and are already feeling the effects of climate change (Pinizzotto et al., 2021) [16]. According to Min et al. (2020) [17], rubber is best grown in regions with yearly mean temperatures between 26 and 28 degrees Celsius and rainfall between 1800 and 2500 millimeters (Min et al., 2020) [17]. However, in borderline locations, the weather can be much colder or drier. Droughts and floods brought on by climate change will make conditions less favorable in some long-established regions, while warming will improve conditions in other, cooler, peripheral regions. Another potential avenue for expansion is a move to higher latitudes and altitudes. Additionally, in drier regions, changes may favor rubber cultivation over oil palm. Drought may postpone tree maturity, while more frequent rainfall may decrease tapping days or increase pests and diseases; both kinds of extreme weather are likely to affect rubber production. Another major worry is wind damage from increasingly frequent and powerful typhoons. The consequences of increasing temperatures for the physiology of rubber trees, as well as their effects on yields and on the distribution of pests and diseases, remain unclear, and additional research is required to fill these gaps. Climate-resilient agronomic practices and the development of high-yielding, climate-change-resistant clones can be used in tandem to adapt rubber cultivation to climate change. Developing nations' national adaptation plans present an opportunity to express such actions.
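As a toy illustration of the climatic envelope just described, the sketch below screens candidate sites against the 26-28 °C mean-temperature and 1,800-2,500 mm rainfall ranges cited above. The function name and site data are invented; real suitability mapping would of course use many more variables.

```python
# Illustrative climate-suitability screen using the envelope cited above
# (26-28 C mean annual temperature, 1,800-2,500 mm annual rainfall).
# Site names and values are hypothetical.
def climatically_suitable(mean_temp_c: float, rainfall_mm: float) -> bool:
    return 26.0 <= mean_temp_c <= 28.0 and 1800.0 <= rainfall_mm <= 2500.0

sites = {
    "site A": (27.1, 2100.0),   # within both ranges
    "site B": (24.5, 1600.0),   # marginal: cooler and drier
}
for name, (t, r) in sites.items():
    print(name, "suitable" if climatically_suitable(t, r) else "marginal")
```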
Impacts of climate change on natural rubber systems
While most rubber plantations are situated in regions with an average annual temperature of 26-28 °C and more than 1,500 mm of rainfall, conditions in peripheral areas can be cooler or drier, or both. Global temperatures are anticipated to rise by 2 °C to 3 °C by 2050, as per the predictions of the IPCC. Because the climatic margins of this crop are mostly dictated by rainfall and temperature, climate change will have varying impacts on the regions that presently grow rubber: drought will make some traditionally suitable places less so, while warming will make other, marginally suitable places more so (Toriyama et al., 2022) [18]. Rubber trees, however, are grown in a limited number of regions around the world, so finding suitable locations for expanding rubber tree cultivation is required. Policy planners can use information on soil, physiography, and socio-economic characteristics, together with climate suitability, to decide whether to expand or contract rubber cultivation (Ray et al., 2014) [30]. Several studies predict changes in Hevea land suitability in China, India, Malaysia, and the broader Mekong subregion. Northern Thailand, Laos, the Yunnan and Hainan provinces of China, southern Brazil, Gabon, and south-eastern Cameroon are marginal producing locations where rubber is currently less easily grown due to colder and more humid weather; this might change in the future. Another potential avenue for expansion is a move to higher latitudes and altitudes. Because oil palm farms are only found in the wet tropics, rubber may soon replace oil palm in drier regions. Rapid and severe trunk snapping and branch breaking can do permanent harm to a plantation. Pests and diseases brought on by increased humidity are another consequence of climate change, and there have been noticeable shifts in their frequency and intensity. Recent wetter and longer rainy seasons played a role in the outbreak of Pestalotiopsis, a fungal leaf fall disease, on Hevea in South Sumatra, according to one study; fungicides are most effective when applied during the early stages of Pestalotiopsis infection. Conversely, extended and unusually dry seasons considerably decreased the disease's occurrence, although growth stunting and decreased latex production were additional outcomes of the protracted dry season (Pradeep et al., 2022) [19].
Regional impacts of rubber plantations
Impact on major rubber-producing regions
Southeast Asia
Southeast Asia is the leading region for rubber production, with Thailand, Indonesia, and Malaysia accounting for the majority of global output. The region's tropical climate, characterized by consistent rainfall and warm temperatures, provides ideal conditions for rubber tree cultivation. However, climate change poses significant threats. In Thailand, irregular rainfall patterns and prolonged dry seasons have led to water stress, reducing latex yields. Conversely, excessive rainfall has resulted in flooding, waterlogging, and increased incidence of root diseases. In Indonesia, shifting monsoon patterns have affected the rubber-tapping season, while rising temperatures have stressed rubber trees, making them more susceptible to pests and diseases. Malaysia faces similar challenges, with the added pressure of land competition from palm oil plantations (Hazir et al., 2018) [20]. The country has experienced increased pest outbreaks, such as infestations by the rubber tree lace bug, exacerbated by changing weather conditions.
Africa
In Africa, countries like Côte d'Ivoire, Nigeria, and Liberia are significant rubber producers. These countries have vast areas suitable for rubber cultivation, but they face distinct environmental challenges. In Côte d'Ivoire, irregular rainfall and prolonged dry periods have impacted rubber yields, prompting farmers to adopt irrigation systems and water conservation practices. Nigeria's rubber industry suffers from outdated agricultural practices and insufficient pest and disease management infrastructure, leading to lower productivity. Liberia, with its favourable climate and ample land, has the potential for expanding rubber production; however, the sector is recovering from years of civil conflict, which disrupted agricultural activities and infrastructure. Climate change is now adding to the complexities, with unpredictable weather patterns affecting planting and tapping schedules.
South America
South America, particularly Brazil, is the native home of the rubber tree, but the region's rubber industry has faced significant challenges. The most prominent threat is South American Leaf Blight (SALB), caused by the fungus Microcyclus ulei. This disease has devastated rubber plantations, leading to a decline in production. The climate in the Amazon basin, with high humidity and frequent rainfall, creates ideal conditions for the spread of SALB, complicating disease management efforts (Ma et al., 2019) [21]. To mitigate these challenges, Brazil has focused on developing disease-resistant rubber tree varieties and implementing strict quarantine measures to prevent the spread of SALB. Other South American countries like Guatemala and Ecuador have also faced challenges related to climate variability, such as irregular rainfall and temperature fluctuations, impacting rubber productivity.
Effect of rubber plantations on climate change
Thailand - Impact of Irregular Rainfall
In Thailand, a major rubber-producing country, irregular rainfall patterns over the past decade have significantly impacted rubber plantations. The dry season has extended beyond the usual months, causing water stress and reducing latex flow. Farmers in the Surat Thani province reported a 20-30% decrease in latex yield due to inadequate rainfall during critical growth periods. To combat this, many have adopted micro-irrigation systems to ensure a consistent water supply. Additionally, mulching practices have been implemented to conserve soil moisture and reduce evaporation.
Indonesia - Rising Temperatures and Pest Infestations
Indonesia has experienced rising temperatures over the past few years, which have stressed rubber trees and made them more vulnerable to pests such as the mealybug (Dysmicoccus brevipes). Farmers in the West Kalimantan region reported a significant increase in pest infestations correlating with higher average temperatures. The increased pest pressure led to defoliation and reduced photosynthetic activity, ultimately lowering latex yields. Integrated Pest Management (IPM) strategies, including biological control agents and cultural practices like regular field sanitation, have been adopted to manage pest populations and improve tree health.
Côte d'Ivoire - Prolonged Dry Periods
In Côte d'Ivoire, prolonged dry periods have posed a significant challenge to rubber cultivation. Unpredictable rainfall has led to inconsistent soil moisture levels, affecting tree growth and latex production. In response, the Rubber Research Institute of Côte d'Ivoire initiated a program to develop drought-resistant rubber tree varieties. These new varieties have shown promising results in field trials, demonstrating better growth and latex yield under water-stressed conditions. Additionally, farmers have been trained in water management techniques, such as rainwater harvesting and efficient irrigation practices, to mitigate the impact of dry spells.
Brazil - South American Leaf Blight (SALB)
Brazil's rubber industry has been severely affected by South American Leaf Blight (SALB). The high humidity and frequent rainfall in the Amazon basin create favourable conditions for the spread of this fungal disease. In response, Brazil has focused on breeding and cultivating disease-resistant rubber tree varieties. The establishment of quarantine zones and rigorous monitoring has also been crucial in preventing the spread of SALB to unaffected areas. Collaborative research efforts with international partners have led to the development of fungicides and biocontrol agents, offering new tools for managing the disease.
The impact of environmental and climatic factors on rubber plantations varies across regions, with each facing unique challenges. Southeast Asia contends with irregular rainfall and rising temperatures, Africa grapples with outdated practices and climate variability, and South America battles devastating diseases like SALB. These case studies illustrate the diverse climatic impacts on rubber production and highlight the importance of adopting adaptive strategies, such as developing resistant varieties, implementing efficient water management practices, and utilizing integrated pest management. Addressing these challenges is essential for sustaining rubber production and ensuring the livelihoods of millions of smallholder farmers globally.
Adaptation and Mitigation Strategies
Rubber system adaptation
Research over the last ten to fifteen years has yielded a wealth of information that can be helpful in the adaptation process. Two complementary approaches may be used to adapt rubber agriculture to climate change: first, using agronomic techniques that are robust to climate change; and second, creating high-yielding, climate-change-resistant clones through breeding and genomic marker-assisted selection. Rubber systems can be adapted to climate change in several ways. For the first two years after planting, nursery plants should be shaded; intercropping with bananas, for instance, can achieve this (Panklang et al., 2022) [22]. Mulching has been suggested in drier marginal locations to retain soil moisture, as has watering young plants. Soil water infiltration, reduced runoff and erosion, improved soil quality, and increased nutrient availability can be achieved by preserving surface cover through methods such as allowing some natural weed flora, intercropping with legumes, or leaving part or all of the tree biomass in the inter-rows. A rubber plantation's performance can be greatly improved with careful fertilizer control, especially in the beginning; soil quality improves gradually during the mature stage of rubber, which is distinct from the immature stage. Rain guards and adaptive tapping management can mitigate the effects of increased rainfall on the bark. A tapping rest period and low-intensity tapping could be part of tapping management to reduce the number of tapping days and the costs connected with them, all while keeping the annual yield the same.
Rubber trees contribute to the adaptation of farming systems
Some of the environmental changes brought about by climate change include higher average temperatures and reduced soil moisture. Because of its higher actual evapotranspiration (AET) compared to grassland cover, reforestation can enhance local climate conditions through evaporative cooling, which reduces surface temperature. Research on rubber tree plantations in Thailand indicated an annual AET of around 1,150 mm and an average net radiation utilization rate (RPUR) of 0.73. Based on these results, it may be inferred that tropical rainforests and well-managed rubber tree farms may exhibit similar evaporative cooling and moisture recycling behaviours. As a response to climate change, rubber production has been suggested as a substitute for conventional, short-term rainfed crops in Sri Lanka. Among the possible advantages are the following: retention of up to twice the surface soil moisture; a reduction of midday air temperatures within the rubber plantation of up to 6 °C; and an average decrease of 3.7 °C during the day. Farmers will also appreciate the improved working conditions this brings about. In areas that are most susceptible to the effects of climate change, livelihood resilience is crucial. Compared to non-rubber growers, Sri Lankan rubber farmers have more social capital and better access to other forms of livelihood capital. But smallholders whose only income is from rubber are particularly vulnerable to swings in the commodity's price, particularly in the absence of government assistance or CSR initiatives by industrial allies. Smallholders whose produce is diverse may experience more stability, since income diversification makes for a more resilient and sustainable economy; this advantage of rubber agroforestry systems (RAS) was observed in Indonesian trials.
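To make the evaporative-cooling argument concrete, the short calculation below converts the reported annual AET of roughly 1,150 mm into an average latent heat flux. The conversion itself uses the standard latent heat of vaporization of water (about 2.45 MJ/kg), a textbook constant rather than a figure from the cited study.

```python
# Convert annual actual evapotranspiration (AET) to an average latent
# heat flux. Input: 1,150 mm/yr AET from the Thai study cited above;
# the latent heat of vaporization (~2.45 MJ/kg) is a standard constant.
aet_m_per_yr = 1.150                 # 1,150 mm of water per year
water_density = 1000.0               # kg/m^3
latent_heat = 2.45e6                 # J/kg
seconds_per_year = 365.25 * 24 * 3600

energy_per_m2 = aet_m_per_yr * water_density * latent_heat  # J/m^2/yr
flux_w_per_m2 = energy_per_m2 / seconds_per_year
print(f"average latent heat flux ~ {flux_w_per_m2:.0f} W/m^2")  # ~89 W/m^2
```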
Mobilizing climate action to create an enabling environment
National Adaptation Plans (NAPs) were put in place to do two things: (a) make countries less susceptible to climate change by making them more resilient and adaptive; and (b) make it easier to incorporate climate change adaptation measures into current and future policies, programs, and initiatives, especially development planning at all levels and in all relevant sectors. These plans, and the national strategies that emerge from them, could incorporate rubber more effectively, and some examples have already been provided. In Sri Lanka's NAP, rubber is one of several agricultural export commodities for which adaptation options have been identified, such as improving germplasm, enhancing farm and nursery management practices, developing sectoral capacity, monitoring and surveillance of pests and diseases, and initiating research studies to assess climate impacts. Within the framework of climate change, Cameroon's NAP includes a strategy to increase the country's potential to produce rubber. Other strategies can also be used: for example, Chile's NAP includes a section tailored to plantations, along with other agriculturally-related measures (disease and insect monitoring, for example). A national process for rubber could be inspired by certain nations' multistakeholder dialogues, such as Uganda's and Uruguay's.
Role of rubber in climate change mitigation
Vijayan et al. (2024) [23] have mentioned rubber as a possible candidate for reducing global warming. Their results show that rubber plantations are carbon stocks comparable to cocoa plantations, or even to some agroforestry or forestry systems, depending on the plantation's age (Vijayan et al., 2024) [23]. According to Lai et al. (2023) [24], some argue that longer rotations store more carbon. A more comprehensive greenhouse gas emissions balance should take NR's wider effects into account: while rubber trees, when planted on damaged sites, effectively absorb carbon dioxide, tree replacement projects and swidden farms can produce varied amounts of carbon emissions (Lai et al., 2023) [24]. Carbon stocks from Northern Laos's rubber and swidden farms, for instance, were determined by Pinizzotto et al. (2021) [16]. In terms of carbon stock, they demonstrated that a 30-year-old rubber plantation can outperform a 5-year-fallow swidden system, taking into account emissions from soil preparation before rubber planting. But this advantage will be for nothing if rubber-displaced swidden agriculture eventually supplants natural forests. Therefore, what rubber replaces, and how, are crucial factors in determining its ability to contribute to mitigation. In general: when rubber trees are planted instead of natural forests, a lot of carbon is lost; planting rubber on highly degraded land increases carbon stocks; and when rubber replaces swidden systems, the contribution can be neutral or slightly positive, depending on the length of the fallow period of the replaced system, because forests lose carbon where displaced swidden systems push into them. NR systems, when planted with other trees, can store carbon as effectively as secondary forests. Using rubber plantation wood instead of fossil fuels is another way natural rubber systems could help with mitigation, and additional wood harvest from forests and timber plantations may be avoided if more rubber wood is used in furniture making. As an example, the primary resource for Malaysia's furniture sector is rubber wood, which has supplanted the diminishing supply from natural forests (Zhai et al., 2019) [26].
Mitigation from the cultivation of rubber
Increasing carbon stocks
Several studies have looked into rubber's ability to reduce greenhouse gas emissions by acting as a carbon sink. The highest total vegetative carbon stock, measured in plantations aged 30-40 years, was 105.73 Mg C ha-1, according to a study by Zhai et al. (2019) [26] covering plantations that had been in cultivation for 5-40 years. While carbon stocks in plantations that were 20-30 years old were larger than in semi-arid, sub-humid, humid, and temperate agroforestry systems, those in plantations that were 10-20 years old were similar to those in 10-year-old cocoa-based agroforestry. Plantations older than 30 years have carbon stores comparable to tropical forests in north-eastern India and mango agroforestry systems in Indonesia. Yang et al. (2019) [27] found that older rubber plantations in Xishuangbanna, China, had a maximum carbon stock of 148 Mg C ha-1 at elevations below 800 m, based on plantations that were 6 to 35 years old. Soil and tree carbon reserves are also affected by rotation length (Yang et al., 2019) [27]. According to a study that modelled the effect of rotation length (25, 30, 35, 40, and 45 years) on carbon stocks in Chinese rubber plantations, total carbon stocks increased with rotation length, reaching a maximum of 173.60 Mg C ha-1 for the 45-year rotation, while the lowest was 89.86 Mg C ha-1 for the 25-year rotation.
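A quick back-of-envelope reading of those rotation-length figures is sketched below: a crude linear interpolation between the reported 25- and 45-year endpoints. The cited study's model is of course more detailed than this, so treat the intermediate values as rough checks only.

```python
# Crude linear interpolation of total carbon stock between the reported
# 25-year (89.86 Mg C/ha) and 45-year (173.60 Mg C/ha) rotations; the
# modelled relationship in the cited study is more complex than linear.
stock_25, stock_45 = 89.86, 173.60            # Mg C ha-1
rate = (stock_45 - stock_25) / (45 - 25)      # ~4.19 Mg C ha-1 yr-1
for years in (25, 30, 35, 40, 45):
    est = stock_25 + rate * (years - 25)
    print(f"{years}-year rotation: ~{est:.1f} Mg C/ha")
```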
Limiting negative impacts of land use change
Reducing the demand for new land and prioritizing degraded land for new rubber planting are two complementary measures to mitigate the negative implications of land use change. The availability of high-yielding clones and effective management procedures determines the variation in rubber yields among countries, and narrowing this yield gap is the most efficient way to minimize additional land conversion. To attain greater and more consistent harvests, it is crucial to enhance genetic material. Plant breeders are currently focused on developing more robust clones that can withstand severe diseases, have a shorter immaturity phase, and produce abundant latex and timber.
Contribution to adaptation to climate change
Rubber tree plantations, if properly managed, may mimic the cooling and water recycling processes of tropical rainforests, which is one way in which introducing rubber trees into other agroecosystems helps them adapt. As noted above, rubber production has been suggested in Sri Lanka as a substitute for conventional, short-term rainfed crops in response to climate change (Orobator et al., 2020) [28], with benefits for surface soil moisture, within-plantation temperatures, and working conditions. It is also a way to diversify income.
Seasons
Warmer surface temperatures and limited moisture availability may result in lower relative humidity levels than currently experienced. This could affect humidity-sensitive hydrological and ecological processes such as evapotranspiration, runoff, and plant growth. Almost all stations showed strong indications of decreasing trends in daily sunshine hours during all seasons except the pre-monsoon season. Solar radiation (sunshine duration) has an important influence on surface temperature, evaporation, the hydrologic cycle, and ecosystems, and is therefore the primary source of energy required to sustain life on this planet. It was hence shown that sunshine duration in India has decreased overall, and the decreasing trends were significant (Raj, 2015) [31].
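One way to assess the significance of such a monotonic trend is sketched below with Kendall's tau; the source does not name the test it used, and the time series here is synthetic (a slight downward drift plus noise), so this is purely illustrative.

```python
# Hedged sketch of a monotonic-trend test on annual sunshine hours using
# Kendall's tau; the series is synthetic, not the cited station data.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
years = np.arange(1985, 2015)
sunshine_h = 8.0 - 0.02 * (years - years[0]) + rng.normal(0, 0.1, years.size)

tau, p = kendalltau(years, sunshine_h)
print(f"tau = {tau:.2f}, p = {p:.4f}")  # tau < 0 with small p: decreasing trend
```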
Rubber Processing and By-Products
Rubber processing products such as sheet, crepe, and block rubber, or latex concentrate, produce a large amount of effluent. Hevea's most significant product is latex, and rubber tree breeding has primarily focused on obtaining a high latex yield rather than timber production. However, once Hevea's useful latex-producing life is complete, rubber wood can be used as timber, and this commodity is quickly gaining popularity as an alternative to tropical rainforest timber (Samarappuli, 1996) [32].
Policy and management approaches
National policies that are advantageous
Without an enabling context, technical solutions such as increasing production and encouraging climate-smart farming practices will not be able to accomplish the intended climate goals. Problems with producers' and markets' bargaining positions and pricing structures make it difficult to implement innovations at the beginning of the value chain, and the intricate web of interdependencies necessitates concerted effort from all parties involved. For natural rubber to play a role in a forest-based circular bio-economy that benefits communities, policies are needed. This is particularly true in nations where rubber production is still in its infancy and where there needs to be a focus on sustainable development in light of climate change. Appropriate legislative and regulatory frameworks are necessary to support policies at various levels; some may already exist in the nations that have been cultivating, processing, and selling rubber the longest, or for other commodities with established production systems. Clones that are resistant to pests and diseases and have high yields are essential, but so are early warning systems, financial aid, and technical assistance for farmers to adopt locally appropriate practices. It is also important for large-scale plantation owners to make their plantations more sustainable. From an economic, social, and environmental perspective, the greatest benefits may emerge from cross-sector collaborations. Policies are necessary to achieve the various goals related to sustainable rubber production in the context of climate change, which should, in theory, aid adaptation, mitigation, and other advantages.
International promises to incorporate rubber into instruments
Plans and procedures at the national and international levels should take into account the significant climate action and sustainable development potential of natural rubber production. The Paris Agreement and Nationally Determined Contributions (NDCs), which better recognize the synergies and trade-offs between adaptation, mitigation, and sustainable development, have created more opportunities for land use integration, particularly in rubber production.
Conclusion
Rubber plantations, which are vital to tropical economies and ecosystems, are threatened by environmental and climatic factors. Water availability, soil quality, and pest and disease challenges affect rubber tree health and latex output, and these components must be managed well to sustain rubber production and keep rubber plantations profitable. Climate change and environmental variability affect rubber-producing regions such as South America, Africa, and Southeast Asia. Due to increasingly unpredictable rainfall and greater temperatures, Southeast Asian countries have needed sophisticated irrigation and pest control measures. To survive extended dry seasons, Côte d'Ivoire and Nigeria are improving water management and developing drought-resistant varieties. Brazil and other South American producers are breeding disease-resistant plants and enforcing quarantines to stop South American Leaf Blight. Rubber plantation management must use sustainable methods to mitigate climate change and other environmental hazards, including breeding hardier rubber trees, improving irrigation, controlling pests, and protecting soil. Research and collaboration are necessary to develop innovative, cost-effective, and ecologically friendly solutions. Rubber plantations must address environmental and climatic issues to survive; the rubber industry's economic and ecological viability hinges on climate change adaptation, and the industry must adopt sustainable and adaptable methods to achieve this.
Racial/ethnic disparities in annual mammogram compliance among households in Little Haiti, Miami-Dade County, Florida
Abstract Introduction Breast cancer is the most commonly diagnosed cancer and the 2nd leading cause of cancer-related deaths among women in the U.S. Although routine screening via mammogram has been shown to increase survival through early detection and treatment of breast cancer, only 3 out of 5 women age ≥40 are compliant with annual mammogram within the U.S. and the state of Florida. A breadth of literature exists on racial/ethnic disparities in compliance with mammogram; however, few such studies include data on individual Black subgroups, such as Haitians. This study assessed the association between race/ethnicity and annual mammogram compliance among randomly selected households residing in the largely Haitian community of Little Haiti, Miami-Dade County (MDC), Florida. Methods This study used cross-sectional, health data from a random-sample, population-based survey conducted within households residing in Little Haiti between November 2011 and December 2012 (n = 951). Mammogram compliance was defined as completion of mammogram by all female household members within the 12 months prior to the survey. The association between mammogram compliance and race/ethnicity was assessed using binary logistic regression models. Potential confounders were identified as factors that were conservatively associated with both compliance and race/ethnicity (P ≤ 0.20). Analyses were restricted to households containing at least 1 female member age ≥40 (n = 697). Results Overall compliance with annual mammogram was 62%. Race/ethnicity was significantly associated with mammogram compliance (P = 0.030). Compliance was highest among non-Hispanic Black (NHB) households (75%), followed by Hispanic (62%), Haitian (59%), and non-Hispanic White (NHW) households (51%). After controlling for educational level, marital status, employment status, the presence of young children within the household, health insurance status, and regular doctor visits, a borderline significant disparity in mammogram compliance was observed between Haitian and NHB households (adjusted odds ratio = 1.63, P = 0.11). No other racial/ethnic disparities were observed. Discussion Compliance with annual mammogram was low among the surveyed households in Little Haiti. Haitian households underutilized screening by means of annual mammogram compared with NHB households, although this disparity was not significant. Compliance rates could be enhanced by conducting individualized, mammogram screening-based studies to identify the reasons behind low rate of compliance among households in this underserved, minority population.
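The confounder-screening rule in the Methods (retain factors associated with both race/ethnicity and compliance at P ≤ 0.20) can be sketched as follows. The data frame, column names, and helper function are hypothetical, not from the study.

```python
# Sketch of the P <= 0.20 confounder screen described in Methods; the
# data frame, column names, and candidate list are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def screen_confounders(df: pd.DataFrame, exposure: str, outcome: str,
                       candidates: list[str], alpha: float = 0.20) -> list[str]:
    """Keep candidates associated with BOTH exposure and outcome at P <= alpha."""
    kept = []
    for c in candidates:
        p_exposure = chi2_contingency(pd.crosstab(df[c], df[exposure]))[1]
        p_outcome = chi2_contingency(pd.crosstab(df[c], df[outcome]))[1]
        if p_exposure <= alpha and p_outcome <= alpha:
            kept.append(c)
    return kept

# Hypothetical usage:
# confounders = screen_confounders(households, "race_ethnicity",
#                                  "mammogram_compliant",
#                                  ["education", "marital_status", "insured"])
```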
Introduction
Breast cancer is the most commonly diagnosed cancer and the 2nd leading cause of cancer-related deaths among women. [1] An estimated 234,190 new breast cancer cases and 40,730 breast cancer deaths are expected in the United States (US) in 2015, with approximately 7% (15,470) of the incident cases and 7% (2,830) of the deaths occurring in the state of Florida. [2] Breast cancer-related costs reached $16.5 billion in the US in 2010, and are projected to increase to $19 billion if recent trends in incidence and survival continue. [3] Prior to 2015, the American Cancer Society (ACS) recommended an annual mammogram starting at age 40 for women at average risk of breast cancer for early detection of the cancer. [4] In October 2015, the ACS updated their guidelines to an annual mammogram for women ages 45 to 54 years and a biennial mammogram for women ages 55 and older, with women ages 40 to 44 having the option to begin annual screening. [5] Although screening cannot prevent the development of breast cancer, routine use of mammogram has been shown to increase survival via early detection and treatment of the cancer. [6][7][8][9][10][11] Despite advanced and available screening methods, only 3 out of 5 women age 40 or older are compliant with annual mammogram at the national and state (Florida) level. [12] In order to further reduce breast cancer-related mortality and healthcare costs, it is critical to identify populations within the state that underutilize screening and to develop culturally appropriate interventions aimed at increasing compliance with mammogram and clinical breast examination within these groups.
Although incidence of breast cancer is lower among Black women compared with White women, higher mortality rates are observed among Black women partly due to being diagnosed at later stages and to having lower stage-specific survival. [2,12] Black women also generally experience a higher burden of poverty and lack of insurance compared to White women; [13] however, elevated mortality among Black women still persists after controlling for socioeconomic factors and factors that affect diagnosis (presence of comorbidities, follow-up after screening, quality of treatment, aggressiveness of tumor, etc.). [14][15][16][17] Despite the increased risk of mortality among Black women, screening rates for this group are no higher than those of White women at the national level (51% vs 52%, respectively). [12,[18][19][20] Research has shown that Black women have less knowledge of breast cancer, greater fear of mammogram and of being diagnosed with cancer, and increased cancer fatalism than White women. [21][22][23] In addition, Black women who do not screen using mammogram report low self-efficacy, [21] more perceived barriers to mammogram, [21,[23][24][25] fewer perceived benefits of mammogram, [24] and lower perceived susceptibility to cancer [24] compared with Black women who do complete mammogram.
Rates of breast cancer screening among Black subgroups, such as Haitians, remain unclear because national studies do not distinguish Haitians from other Black populations. Two population-based studies that investigated breast cancer screening among Haitian women suggest that screening rates among Haitian women are lower than those of White and Black women. [26,27] A recent qualitative study conducted among 15 Haitian women living in Miami-Dade County (MDC), Florida found that Haitian women face multiple challenges when it comes to breast cancer screening, including misperceptions about screening guidelines, disease etiology, and risks. [28] Haitians, like other immigrants, face a number of significant barriers to screening, including sociocultural factors (conflicting etiologic beliefs; viewing illnesses as symptomatic and observable), structural barriers (lack of financial resources/insurance, language difficulties, and lack of information), and psychosocial barriers (fear of cancer diagnosis and medical treatment). [29] Immigrants of lower socioeconomic status are also more likely to live in medically underserved areas, have multiple jobs, and have lower levels of education, factors that hinder compliance with screening. [13,30] On the other hand, increasing length of residence in the US is positively associated with receipt of preventive screening, with screening rates among immigrants approaching those of non-foreign-born women over time. [31] Previous history of breast cancer is also a strong predictor of compliance with screening guidelines. [26] The aim of this study is to assess the association between race/ethnicity and mammogram compliance among 697 randomly selected households residing in Little Haiti, MDC, Florida.
Data collection and participant recruitment
This study is a secondary analysis of data from the random-sample, population-based Little Haiti benchmark survey conducted between November 2011 and December 2012. Details of the study are described elsewhere. [32] Briefly, the aim of the survey was to collect household health and wellness indicators for families residing in the Little Haiti community of MDC, Florida. To approximate the geographic area of Little Haiti, 20 US census tracts with a Haitian population of 30% to 49% were selected. Addresses in these census tracts were obtained from MDC and were selected for participation in the household survey using simple random sampling. Sampled households were visited by trained staff. The face-to-face survey consisted of 156 questions, taking approximately 40 to 50 minutes to complete. It was administered via at-home interviews with a consenting adult (18 years or older) in English, Spanish, French, or Creole, depending on the respondent's preference. The adult respondent completed the interview on behalf of all members of the household. Of the 1798 households randomly selected for the survey, 951 (52.9%) responded, 634 (35.3%) refused participation, and 213 (11.8%) were unreachable after a minimum of 7 attempts to interview a household member. Response rates did not differ significantly by census tract. Although the data from this survey are not publicly available, Florida International University partners with members of the community to analyze the data as needed.
Ethical review
The present study received expedited ethical approval from the Florida International University Health Sciences Institutional Review Board.
Outcome and study variables
The outcome of the study is mammogram compliance. To best approximate the American Cancer Society's guidelines for breast cancer screening in effect at the time of the Little Haiti survey, compliance was defined as completion of mammogram within 12 months prior to the survey by all female household members age ≥40 years. [4] The use of mammogram within the households was ascertained using the following survey question: "About how long ago, if ever, did anyone in the household have any of the following? [In the case that more than 1 person fits into one of the categories below, report the longest since anyone in the household had had any of the following] . . . A mammogram (females 40 and over only)." Based on the literature and the variables available in the survey, 13 sociodemographic and health-related variables with potential to influence compliance with mammogram were selected: race/ethnicity; primary language; educational level, marital status, and employment status of the head of the household; poverty; presence of children under age 6 within the household; health insurance; source of health insurance; language barrier with provider; provider visits; regular provider; and household history of cancer. All variables were self-reported by the respondents. Race/ethnicity was categorized as Haitian versus the following non-Haitian groups: non-Hispanic White (NHW), non-Hispanic Black (NHB), Hispanic, and other. Marital status was categorized as single or other versus married/living with someone, with the former comprising the responses "single," "separated," "divorced," and "widowed." Poverty was calculated based on annual household income, household size, and number of children under age 18 residing in the household, and using thresholds established by the US Department of Health and Human Services. [33] The presence of children under age 6 within the household was included in the study to examine the effect of having at least 1 non-school-aged child on compliance with mammogram. [34] Lack of health insurance was defined as having at least 1 household member who lacked health insurance at any point within the 12 months prior to the survey. Sources of health insurance examined in this study included work-sponsored insurance, Medicare, and Medicaid. Language barrier with provider was defined as having at least 1 household member that experienced communication issues with his/her provider due to speaking different languages within the 12 months prior to the survey. Provider visit was defined as having at least 1 household member who visited a provider within the 12 months prior to the survey. Household history of cancer was defined as having at least 1 household member who was diagnosed by a physician with any type of cancer within the 5 years prior to the survey.
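Because the survey records only the longest time since any eligible member's last mammogram, the household-level outcome can be derived from that single value. A minimal sketch of this derivation is given below in Python; the function and field names are hypothetical and not part of the original survey instrument.

# A minimal sketch of deriving the household-level outcome; the variable name
# "months_since_last_mammogram" is hypothetical, not the survey's actual field.

def household_compliant(months_since_last_mammogram):
    """Return True if the household is compliant with annual mammogram.

    The survey records the LONGEST time since any eligible female member's
    last mammogram, so a single value <= 12 months implies every female
    member age >= 40 was screened within the prior year.
    """
    if months_since_last_mammogram is None:  # missing response
        return None
    return months_since_last_mammogram <= 12

# Example: a household whose least-recently-screened eligible member had a
# mammogram 9 months ago is compliant; one at 18 months is not.
assert household_compliant(9) is True
assert household_compliant(18) is False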
Statistical analysis
Of the 951 households that completed the Little Haiti survey, 697 (73.3%) households contained at least 1 female member age 40 years or older (Fig. 1). These households comprised the study sample. Secondary data analysis was conducted to assess the association between race/ethnicity and mammogram compliance.
Owing to the nature of the survey, the unit of analysis was the household. Pearson Chi-square tests were used to identify factors associated with race/ethnicity and with mammogram compliance. Binary logistic regression was performed to obtain unadjusted and adjusted odds ratios with 95% confidence intervals. To examine possible clustering at the census tract level, we also ran the binary logistic regression models utilizing Stata's survey design commands and compared variance estimates to the standard model. Since the variance estimates were nearly identical, we concluded that a clustering effect was not likely to be present, and therefore utilized the standard logistic regression models. Factors conservatively associated with both race/ethnicity and mammogram compliance (Chi-square P-value ≤ 0.20) and those of clinical importance were selected a priori as independent variables for the binary logistic regression models. Variables were excluded from the model if the percentage of missing values was large (i.e., 10% or greater), if low variability was observed within the response categories overall or when stratified by the outcome (i.e., if approximately 90% or more of the values were contained within a single response category), if they were highly correlated with other independent variables, or if multicollinearity was present. Correlation between variables was assessed with Pearson correlation coefficients; multicollinearity was assessed using variance inflation factors. [35] All analyses were conducted using Stata 14 (StataCorp, College Station, TX) and using a significance level of α = 0.05.
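The analyses above were run in Stata 14. Purely as an illustrative analogue, the screening-then-modelling workflow could be sketched in Python as follows, assuming a pandas DataFrame df with hypothetical column names ("compliant" coded 0/1, "race" with "Haitian" as a level, and so on); this is not the authors' actual code.

# Illustrative Python analogue of the workflow described above; all column
# names are hypothetical assumptions.
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

CANDIDATES = ["education", "marital", "employment", "young_child",
              "insured", "provider_visit"]

def chi2_p(df, a, b):
    """Pearson chi-square P-value for two categorical columns."""
    chi2, p, dof, expected = chi2_contingency(pd.crosstab(df[a], df[b]))
    return p

def screen_confounders(df, alpha=0.20):
    """Keep covariates conservatively associated with BOTH the exposure
    (race/ethnicity) and the outcome (compliance), i.e. P <= 0.20."""
    return [v for v in CANDIDATES
            if chi2_p(df, v, "race") <= alpha
            and chi2_p(df, v, "compliant") <= alpha]

def fit_model(df, covariates):
    """Binary logistic regression with Haitian as the reference group,
    plus a variance-inflation-factor check on the design matrix."""
    formula = "compliant ~ C(race, Treatment('Haitian'))"
    if covariates:
        formula += " + " + " + ".join("C({})".format(v) for v in covariates)
    model = smf.logit(formula, data=df).fit()
    exog = pd.DataFrame(model.model.exog, columns=model.model.exog_names)
    vifs = {name: variance_inflation_factor(exog.values, i)
            for i, name in enumerate(exog.columns)}
    return model, vifs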
Characteristics of the sample
More than half of the households reported being of Haitian descent (53%); the remaining households self-reported as Hispanic (21.9%), NHB (18.0%), and NHW (7.3%). The majority of Haitian households spoke primarily Creole; the majority of Hispanic households spoke primarily Spanish; and the majority of NHW and NHB households spoke English (Table 1). A greater proportion of Hispanic and Haitian households had a head of the household with less than a high school degree and who was married or living with someone compared with NHW and NHB households. Nearly half of the NHB, Hispanic, and Haitian households had a head of the household who was employed full time, whereas only one-third of NHW households had a head who was employed full time. The proportion of NHW households with a retired head of the household was twice that of the other racial/ethnic groups. Poverty was twice as prevalent among Haitian households compared with NHW, NHB, and Hispanic households. Few households included at least 1 child under the age of 6; however, the proportion of NHB and Haitian households that contained a young child was 7 times that of NHW households. Three out of 5 Hispanic and Haitian households reported having at least 1 uninsured member within the prior 12 months; this is nearly 50% greater than that of NHB households and twice that of NHW households (Table 1). More than twice as many NHW households had at least 1 member insured by Medicare compared with NHB, Hispanic, and Haitian households (P < 0.001). Nearly 50% more Haitian households had at least 1 member covered by Medicaid compared with NHB and Hispanic households; Medicaid coverage was twice as prevalent among Haitian households compared with NHW households (P = 0.037). Most households reported having at least 1 member that visited a provider within the 12 months prior to the survey and at least 1 member that had a regular provider. Few households reported having experienced a language barrier with their provider (11%); however, such language barriers were twice as common among Haitian households compared with Hispanic households. History of any cancer within NHW households was twice that of NHB households, and nearly 4 times that of Hispanic and Haitian households.
Compliance with annual mammogram
Overall compliance with annual mammogram was 62% (Table 2). Mammogram compliance was significantly associated with race/ethnicity (P = 0.030) (Table 3). Compliance was lower among Haitian households compared with NHB and Hispanic households (22% and 5% lower, respectively). Conversely, compliance was 14% higher among Haitian households compared with NHW households. As expected, compliance increased with increasing educational level of the head of the household (P = 0.084). Compliance with mammogram differed by employment status (P = 0.001); the rate of compliance was greater among households whose head was retired or employed full time compared with those whose head was unemployed or employed part time. Compliance was also greater among households that spoke English compared with other languages (P = 0.032); greater among households that were above poverty thresholds compared with those below poverty thresholds (P < 0.001); greater among households with at least 1 child under age 6 compared to those with no young children (P = 0.074); greater among households in which all members were continuously insured within the 12 months prior to the survey compared with those that had at least 1 uninsured member (P < 0.001); greater among households with at least 1 member insured through work compared with those insured by other sources (P < 0.001); greater among households with at least 1 member insured through Medicare compared with those insured by other sources (P = 0.01); greater among households with at least 1 member insured through Medicaid compared with those insured by other sources (P < 0.001); greater among households in which at least 1 member visited a provider within the 12 months prior to the survey compared with those in which no member had visited a doctor (P < 0.001); and greater among households that had at least 1 member that had a regular provider (P < 0.001).
Nonresponse
One-fifth of the households had missing data for mammogram compliance (N = 146); however, these households were comparable to those with valid responses on all socioeconomic factors, except education level. A greater proportion of households with missing data for mammogram had a head with vocational/technical school or some college (36% vs 65%), while a greater proportion of households with valid responses had a head with a bachelor's degree or above (68% vs 18%) (P = 0.048).
Variables selected for inclusion in the binary logistic regression model
The following variables met the criteria for inclusion in the binary logistic regression model: race/ethnicity, educational level, marital status, employment status, having at least 1 child < 6 years within the household, health insurance status, and regular provider visit. Multicollinearity was present between race/ethnicity and primary language (variance inflation factor = 1.64 and 1.55, respectively), and the two were also highly correlated (r = 0.757); the latter was excluded from the model. Poverty was excluded due to a high percentage of missing values (32%). Source of health insurance was excluded because households were allowed to select more than one source. Regular provider was excluded because it was not associated with race/ethnicity (P = 0.63). Language barriers with provider and household history of cancer were excluded because they were not associated with mammogram compliance (P = 0.59 and 0.43, respectively).
Odds of mammogram compliance
No multicollinearity was observed in the adjusted model. After adjusting for potential confounders, the odds of complying with annual mammogram were 35% lower among NHW households compared with Haitian households, and 63% and 29% greater among NHB and Hispanic households, respectively, compared with Haitian households (Table 4). Although these disparities were not statistically significant, the disparity between Haitian and NHB households was borderline significant (P = 0.11). Of the covariates included in the model, significant disparities were observed by employment status, insurance status, and physician visit. The odds of complying with mammogram were 47% lower among households with an unemployed head compared with those with a head employed full time (P = 0.014); 61% lower among households with at least 1 member who was uninsured compared with those in which all members were insured over the 12 months prior to the survey (P < 0.001); and 78% lower among households in which no member had visited a provider compared with those in which at least 1 member visited a provider within the 12 months prior to the survey (P < 0.001).
Discussion
Overall compliance with mammogram was low among the surveyed households in Little Haiti. Compliance significantly differed by race/ethnicity. Compliance was lower among Haitian households compared with NHB and Hispanic households. Contrary to our expectations, however, compliance was greater among Haitian compared with NHW households. After adjusting for educational level, marital status, employment status, young children within the household, health insurance status, and provider visits, no significant disparities in mammogram compliance were observed by race/ethnicity. Although only borderline significant, the odds of complying with annual mammogram were 63% higher among NHB households compared with Haitian households. Overall compliance was nearly 22% lower than the 2020 Healthy People target for breast cancer screening. [36] However, compliance with annual mammogram in our study was comparable to that at the national and state levels (62% vs 59% vs 59%, respectively). [12,37] Compliance among our NHW households was comparable to that of the NHW population at the national level (51% vs 52%, respectively), whereas compliance among our NHB and Hispanic households was notably higher than that of the Black and Hispanic populations at the national level (75% vs 51% and 62% vs 46%, respectively). [12,[18][19][20] Surprisingly, compliance in our study was lowest among NHW households. Based on existing literature, we expected compliance among the NHW households to be comparable to that of the NHB households, and higher than that of the Hispanic households. [12,19,20] This unexpected result may be due to the small NHW population in MDC, and subsequently the relatively small number of NHW households included in the study. These households may differ from the overall NHW population of the state and nation. Although they were at least comparable to, if not better off than, the other groups in terms of socioeconomic status (SES), the NHW households in our study may differ from the other households on unmeasured factors that influence mammogram compliance. In addition, Black women have been found to over-report use of mammogram more often than NHW women. [38] Thus, the observed disparity in mammogram compliance between NHW and NHB households and between NHW and Haitian households may be exaggerated. The 2nd lowest compliance rate was observed among Haitian households; compliance among Haitian households was 16% lower than that of NHB households and 5% lower than that of Hispanic households. Haitian and Hispanic households had lower levels of SES than the NHB households in our study. Compared with Haitian and Hispanic households, NHB households were generally more educated, and fewer were below U.S. poverty thresholds, fewer had an uninsured member, and fewer had a member that experienced language barriers with a provider. After controlling for these and other available socioeconomic and health-related factors, all observed disparities in mammogram compliance by race/ethnicity disappeared.
Compliance with mammogram among the Haitian households in our study was comparable to the rate observed in a recent study of Haitian women, [39] but higher than the rates reported in two older studies. [26,27] In a 2011 survey of 96 Haitian women from Little Haiti, Seay et al found that 58% of the Haitian women in their study complied with biennial mammogram, a rate comparable not only to that of the Haitian households in our study, but also to that of the 138 Hispanic women from Hialeah, Florida included in the same Seay et al study (57%). [39] A similar rate, albeit slightly higher, was observed among the Hispanic households in our study (62%). Limitations of the Seay et al [39] study included the use of a convenience sample; controlling for only site, health insurance coverage, and usual place of care in their multivariate analysis; and the non-inclusion of NHW and NHB women in the study. A 2007 study conducted by Kobetz et al [26] in Little Haiti found that only 42% of the Haitian women in their study complied with biennial mammogram. [26]
Table 4. Odds of complying with annual mammogram among Little Haiti households with at least 1 female member ≥40 years (n = 697). The table footnotes repeat the variable definitions given in the Methods (educational level, marital status, health insurance, provider visit; marital status and educational level reported for the head of the household), list the variables excluded from the model, and give abbreviations: AOR = adjusted odds ratio, CI = confidence interval, NHB = non-Hispanic Black, NHW = non-Hispanic White, OR = odds ratio.
An older study by Mandelblatt et al [27] reporting results of a telephone survey conducted in New York City in 1992 found a similarly low rate of compliance with biennial mammogram among Haitian women (42%). The main aim of that study, however, was to assess the effect of age and health status on compliance with mammogram, clinical breast examination, and Pap smear. Although the study controlled for race/ethnicity in its multivariate analyses, the adjusted associations between the screening tests and race/ethnicity were neither presented nor discussed. In addition, characteristics of the study women were not presented by race/ethnicity, and therefore we cannot determine if the Haitian women included in the study were comparable to the Haitian households in our study. [27] Our study was the first, to our knowledge, to assess the independent association between race/ethnicity and mammogram compliance in Little Haiti, MDC. The main strength of our study is that it utilized a large random sample and included a large sample of Haitian households. Our main limitations were that the survey was conducted at the household level and was not specifically designed to study breast cancer screening. It is important to note that compliance with mammogram was based on the longest time since the last mammogram for any female age 40 or older within the household.
Thus, compliant households in this study were defined as households in which all females age 40 or older completed a mammogram within the year prior to the survey. Noncompliant households were defined as households in which at least 1 female age 40 or older had not completed an annual mammogram; it is possible that these households may have contained at least 1 female who had completed an annual mammogram. Owing to the nature of the survey, a number of variables that could potentially influence compliance with mammogram were not available for analysis, such as personal and family history of breast cancer, knowledge of breast cancer and its screening methods, physician recommendation for screening, and factors relating to acculturation. In addition, we could not control for age because the unit of analysis was the household and we did not know the specific age(s) of the female(s) within the household that completed mammogram. As a result, we cannot rule out the possibility of residual confounding. Although we could not control for poverty due to a large percentage of missing values, we were able to control for multiple other socioeconomic variables. Lastly, the data were self-reported and not validated through medical records. The survey was completed by any household member age 18 or older, and thus not necessarily by the female member recommended for screening. It is possible that the respondent of the survey was unsure of the frequency of use of mammogram by women within the household. This may be reflected in the high percentage of missing data on mammogram use. Although nonresponse was high, households that provided a valid response for mammogram use were comparable to the study sample. In addition, it is unclear if the women within these households completed mammogram for preventive or diagnostic reasons.
The findings from this study will provide a basis for developing an intervention aimed at increasing breast cancer screening in Little Haiti, MDC, Florida. Similar to other studies, we observed that mammogram compliance was low among our Haitian population. Surprisingly, we found that compliance was lowest among NHW households and that having young children within the household was a predictor of complying with annual mammogram. These unexpected findings call for further exploration.
"year": 2016,
"sha1": "3c35d4d8f6b4e3bc526206f11f4468a0fb206617",
"oa_license": "CCBYND",
"oa_url": "https://doi.org/10.1097/md.0000000000003826",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47e9e45bfed1c31f31a4a949fd91695e7054af6e",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4351604 | pes2o/s2orc | v3-fos-license | Control of Embryonic Gene Expression and Epigenetics
Preimplantation embryo development follows a series of critical events. Remarkable epigenetic modifications and reprogramming of gene expression occur to activate the embryonic genome. In the early stages of preimplantation embryo development, maternal mRNAs direct embryonic development. Throughout early embryonic development, a differential methylation pattern is maintained, although some regions show stage-specific changes. Recent studies have shown that the differential demethylation process results in differential parental gene expression in early developing embryos, which may have an impact on correct development. In recent years, noncoding RNAs, both long noncoding RNAs (lncRNAs) and short noncoding RNAs that regulate mRNAs, have gained significance, and so has their role in preimplantation development.
Introduction
Preimplantation embryo development follows a series of critical events. These events start at gametogenesis, the formation of mature gametes, and last until parturition. Male and female gametes are derived from primordial germ cells (PGCs) by the processes of spermatogenesis and oogenesis, respectively. PGCs have unique properties of gene expression, epigenetics, morphology and behaviour. Once the PGCs undergo mitosis, spermatogenesis and oogenesis progress differently. In spermatogenesis, spermatogonia undergo mitosis starting at puberty until death, and each primary spermatocyte produces four spermatids at the end of meiosis. In oogenesis, PGCs differentiate into oogonia, which enter meiosis and arrest until puberty. Unlike meiosis II in spermatogenesis, the secondary oocyte does not complete meiosis II until fertilisation. With completion of meiosis II, each oogonium produces a single viable oocyte [1].
At fertilisation, the oocyte completes meiosis and the fertilised oocyte is called the zygote. The oocyte and sperm nuclei fuse, resulting in syngamy (Figure 1). The zygote undergoes a series of cleavage divisions, forming the two-cell, four-cell and eight-cell stages, the morula and the blastocyst [2] (Figure 1). During the cleavage-stage divisions, programming of maternal and paternal chromosomes takes place to create the embryonic genome (embryonic genome activation, EGA) and to start preimplantation embryo development. If the EGA fails, development does not continue because the embryo is unable to carry out its cellular functions [3]. This activation is initiated by the degradation of maternal nucleic acids, specific RNAs stored in oocytes, proteins and other macromolecules [4]. Upon EGA, which starts at the two-cell stage in mouse and the four- to eight-cell stage in human [5], remarkable reprogramming of expression occurs in the preimplantation embryo. These reprogramming events are controlled by DNA methylation, histone acetylation, transcription, translation and miRNA regulation [6]. Therefore, the development of preimplantation embryos comprises continuous molecular, cellular and morphological events. These events eventually form a multilineage embryo that has the capability to implant and continue foetal development.
In this chapter, different factors affecting gene expression during preimplantation embryo development will be discussed. Epigenetic factors, focusing on methylation profiles, of gametes and preimplantation embryos will be reviewed. The effects of noncoding RNAs on gene expression will be thoroughly evaluated.
Gene expression and epigenetics
For a normal developing embryo, the expression of both maternal and paternal genes is required. An intense epigenetic change occurs upon fertilisation to establish pluripotency [7]. Although there are a number of post-translational modifications within chromatin, including acetylation, ubiquitination, SUMOylation and phosphorylation, methylation of histone lysine and arginine residues is the main focus in preimplantation embryos.
Figure 1. Schematic diagram outlining the main stages of preimplantation embryo development. Fertilisation followed by syngamy; cleavage divisions result in two-, three-, four-cell (and so on) embryos, which eventually form the morula and the blastocyst.
Methylation and chromatin modification not only play crucial roles in determining the transcriptional state but are also capable of determining transcriptional repression [8][9][10]. The mechanism leading to the changes in methylation is not well established, but it has been suggested that the reprogramming takes place by either passive or active demethylation. Indirect pathways of demethylation are associated with DNA repair [11][12][13][14]. Two main stages, PGCs and preimplantation embryos, are important in the regulation by methylation.
Epigenetic modification of the zygote and the preimplantation embryos
In mammals (human, bovine, rat, pig and mouse), the zygote undergoes genome-wide demethylation [15][16][17] with the exception of imprinted genes [18]. The male pronucleus of the zygote undergoes selective demethylation due to the loss of DNA replication, leading to asymmetrically methylated sister chromatids [15,16,19,20]. These events start following sperm decondensation in humans and in mouse, with some variations [17,21,22]. The female pronucleus of the zygote remains highly methylated at this stage [17,21,22]. Demethylation of the maternal genome starts with the first cleavage divisions [19,23,24]. By the morula stage, mouse preimplantation embryos become undermethylated. Polarisation and compaction of individual blastomeres start at around the eight-cell stage of the developing embryo. Many factors are involved in these processes, including E-cadherin (CDH1), partitioning defective homologue 3 (PARD3), PARD6B and protein kinase C zeta [25][26][27].
The blastocyst stage embryo has a fluid-filled cavity and two cell populations consisting of inner cell mass (ICM) and trophectoderm (TE). All the blastomeres are believed to be totipotent in cleavage embryos until four-to eight-cell stage since these cells form both the ICM and TE lineage [28]. ICM develops into epiblast, whereas TE forms the extraembryonic tissues such as placenta. ICM is composed of pluripotent cells that have the capacity to develop into any cell type of the foetus. Transcriptional and epigenetic events strictly regulate these differentiation events. A number of transcriptional factors play a crucial role in blastocyst formation. These include caudal type homeobox 2 (CDX2) for TE specification, octamer 3/4 (OCT4) and NANOG for the establishment of ICM pluripotency [29][30][31]. CDX2 is extensively expressed in eight-and 16-cell stage and it is expressed only in TE cells of the blastocyst [32]. Although OCT4 and NANOG are also expressed broadly at eight-and 16-cell stage embryos, they are only expressed in ICM in blastocysts [32]. A number of transcription factors are required for blastocyst formation. Embryos lacking CDX2 expression cannot form blastocoel cavity but they have the ability to implant [30]. Lack of OCT4 or NANOG expression causes failure of ICM and the development of these embryos is arrested at the blastocyst stage [31,32]. TEAD4 is another transcription factor that has a role in blastocyst transition in which the lack of TEAD4 nuclear localisation impairs TE-specific transcriptional programme in inner blastomeres [33]. Furthermore, the aberrant expression of TCFAP2C transcription factor also leads to embryonic arrest during morula to blastocyst transition [34] and Klf5 mouse-mutant embryos arrest at the blastocyst stage [35].
The remethylation process starts shortly after implantation [16,22,23,36]. This de novo methylation occurs asymmetrically, such that ICM is hypermethylated possibly due to the Dnmt3b methylase [37], whereas TE remains hypomethylated due to the active demethylation by enzyme catalysis and passive demethylation [11,14,22]. Alteration of the methylation profiles in embryos has been shown to cause alterations of ICM and TE differentiation.
Variations of the H3 arginine 26 residue (H3R26me) were shown to lead to changes of TE and ICM differentiation of a blastomere [38]. X-chromosome inactivation is an epigenetic phenomenon in which the activity of X chromosomes is strictly regulated to equalise X-chromosome expression and gene dosage between males and females and relative to autosomes [39]. For correct development, X-chromosome dosage compensation is crucial. The inactivation of the X chromosome occurs in at least two phases: initiation and maintenance. X-inactivation mouse model systems have shown that the inactivation of the X chromosome takes place during early embryogenesis of the female embryo by transcriptional silencing of genes along the X chromosome [40]. In human preimplantation embryos, it has been shown that the reduced expression of X chromosomes in females ensures the dosage compensation [41]. LncRNA XIST expression activates X-chromosome inactivation by engaging proteins functioning in chromatin remodelling [3,42]. With advanced technologies, including single-cell RNA sequencing, it has emerged that the lncRNAs XACT and XIST are expressed on the active X chromosome in early human preimplantation embryos [43]. Furthermore, the expression of these two RNAs has never been shown to overlap. Introducing XACT into heterologous systems caused the accumulation of Xist RNA in cis; therefore, XACT may be involved in the control of XIST association to the chromosome in cis and may temper its silencing ability. It is also possible that XACT functions in balancing X-chromosome inactivation at the early stages of preimplantation embryo development [43,44]. Recently, dosage compensation was shown to be driven by a CAG promoter of a new Xist allele (Xist(CAG)) [45]. Furthermore, Xist(CAG) upregulation in preimplantation embryos showed variation depending on the parental origin, and the paternal X chromosome was suggested to be preferentially inactivated with paternal Xist(CAG) transmission [45].
Epigenetic modification of the gametes
In germ cells, methylation is maintained in a sex-specific manner. Methylation in PGCs diminishes as they migrate to the gonads. Studies suggest that in females, remethylation occurs after birth when the oocytes are in the process of development. When demethylation is completed, the PGCs either enter mitosis in males or arrest at meiosis in females [46].
Reprogramming of the methylation in the embryo is necessary for parent-specific expression of genes [14]. Gene expression varies during preimplantation embryo development due to these reprogramming events and appropriate gene expression determines the survival of the embryo [6]. Recently, short noncoding RNAs, microRNAs (miRNAs) and long noncoding RNAs (lncRNA) have gained importance in their potential function to affect numerous pathways by targeting multiple genes [47,48].
Gene expression and small noncoding RNAs: microRNAs
MiRNAs are a large family of short noncoding RNAs between 17 and 25 nucleotides (nt) in length [49]. MiRNAs were first identified in Caenorhabditis elegans over two decades ago [50] and since then many have been identified in multiple organisms, such as worms, flies, fish, frogs, mammals and plants, by molecular cloning and bioinformatics [51]. Most miRNA sequences are conserved among a wide range of mammals [52], though there are some that differ from each other only by a single nucleotide [53]. Conserved miRNA sequences in different species can be distinguished by the nomenclature: when only the first three letters (the species prefix) differ, the sequence is the same in different species, for example, hsa-miR-145 in Homo sapiens and mmu-miR-145 in Mus musculus [54].
MiRNAs have been shown to be of great importance in a wide variety of biological processes involving cell cycle regulation, apoptosis, cell differentiation, imprinting, homeostasis and development, including limb development [55], lung epithelial morphogenesis [56], embryonic angiogenesis [57], hair follicle formation and T-cell proliferation [58,59]. They play key roles in regulating transcriptional and post-transcriptional gene silencing in many organisms by targeting mRNAs for translational inhibition, cleavage, degradation or destabilisation [53,[60][61][62][63][64]. Each miRNA has multiple mRNA targets; collectively, miRNAs may regulate up to 30% of protein-coding genes and shape protein production from hundreds to thousands of genes [65][66][67]. MiRNAs recognise their targets through base pairing of the complementary sequence of their seed sequence (nt 2-8 of the miRNA) within the open reading frame (ORF) and 3′ untranslated region (UTR) of the target mRNA [68]. Although the targets of miRNAs are not fully known, bioinformatics studies show a range of possible target genes [69]. The functional activities and the predicted/observed targets of miRNAs can be identified using miRNA databases. These databases can be accessed using the following URLs: http://www.targetscan.org/, http://www.microrna.org/microrna/home.do and http://mirdb.org/miRDB/.
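As an illustration of the seed-pairing rule described above, the sketch below scans a 3′UTR for canonical 7mer-m8 sites, i.e. stretches complementary to miRNA positions 2-8. The miRNA sequence is the one commonly reported for hsa-miR-145-5p; the UTR fragment is made up for the example and is not a validated target.

# Minimal sketch of canonical seed matching: a target site in an mRNA 3'UTR
# is the reverse complement of the miRNA seed (positions 2-8).

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna):
    """Reverse complement of miRNA nucleotides 2-8 (a 7mer-m8 site)."""
    seed = mirna[1:8]  # positions 2-8, as a 0-indexed slice
    return "".join(COMPLEMENT[nt] for nt in reversed(seed))

def find_seed_matches(mirna, utr):
    """Return 0-based positions in the 3'UTR where the seed site occurs."""
    site = seed_site(mirna)
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "GUCCAGUUUUCCCAGGAAUCCCU"      # hsa-miR-145-5p, as commonly reported
utr = "AAGGAAACUGGAAUUCAACUGGAAGCC"    # illustrative, made-up UTR fragment
print(find_seed_matches(mirna, utr))   # prints [5, 16]: two AACUGGA sites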
MiRNA biogenesis
MiRNA biogenesis involves multiple important steps. MiRNAs are first transcribed from genomic DNA into primary miRNA (pri-miRNA), which contains a stem-loop structure, by RNA polymerase II. These pri-miRNAs are then processed by Drosha, which is a 30-160 kDa protein with one dsRNA-binding and two catalytic domains [70]. In the presence of DGCR8, both strands of the hairpin are cut, generating a pre-miRNA product of approximately 70 nt in size [71]. These pre-miRNAs are carried from the nucleus into the cytoplasm by Exportin-5 (Exp5), a nucleocytoplasmic transporter in the karyopherin family that has binding sites for pre-miRNAs in the presence of Ras-related nuclear protein (Ran) and guanosine triphosphate (GTP) [72,73]. These miRNAs are further cleaved by the cytoplasmic RNase endonuclease Dicer, generating a 21-22 nt double-stranded structure. Although one of the strands is usually degraded, both strands of the pre-miRNA may be associated with an Argonaute (Ago)-protein-containing complex and are mediated by RISC/miRNP (RNA-induced silencing complex/mi-ribonucleoprotein) to form single-stranded mature miRNAs. MiRNAs associated with RISC mainly target mRNAs and either inhibit their translation or cause degradation of the mRNA, resulting in reduced protein synthesis [70,74].
Studies showed that processing of miRNAs by Dicer was vital, and any defects, such as deletion of Dicer in developing animals, caused aberrations [75,76]. Lack of Dicer in Drosophila germ line stem cells postponed the G1/S phase transition [77], suggesting that miRNAs may be vital for stem cells to bypass this checkpoint. Reduced and disorganised spindles, incorrect chromosome alignment and defects in gastrulation were observed with Dicer-mutant oocytes in mouse and in C. elegans, respectively [50,78]. Injection of miR-430 in zebrafish and C. elegans partially repaired the gastrulation, retinal development and somitogenesis defects [78]. Dicer deletion in zebrafish and in mouse hippocampal neurons initiated problems in the nervous system and led to an inability to form mature miRNAs, which resulted in variations of brain morphogenesis and differentiation of neurons [79,80]. Although the axis formation and early differentiation of maternal-zygotic Dicer-mutant zebrafish and mouse embryos were normal, these mutants still showed defects in somitogenesis, morphogenesis affecting brain formation, gastrulation, heart development and apoptosis in limb mesoderm, respectively [78,[81][82][83]. Apoptosis was enhanced in the developing limb mesoderm of Dicer-null mouse [84]. Dicer deficiency mainly led to embryo death in mouse around embryonic day 7.5 [50,78,85] and in zebrafish [86], which may indicate the importance of miRNA-mediated gene silencing at the maternal-to-zygotic transition.
Complete loss of Dicer1 in somatic cells of mouse reproductive tract not only showed reduced expression of miRNAs but also caused the female mice to become infertile with compromised oocyte and embryo integrity [50,87]. Dicer-deficient male mice were shown to have poor proliferation of spermatogonia. Loss of Dicer1 in the germ line of male mice (homozygote Dicer1) led to decreased fertility due to abnormal spermatogenesis. The number of germ cells was reduced with abnormal spermatids, abnormal phenotype of spermatocytes with condensed nucleus, abnormal sperm motility and mutant testes with Sertoli tubules [88]. Studies suggest that the transfer of maternal cytoplasmic Dicer disguised the early abnormal phenotypes [78,89].
Knock-out of Ago2 in mouse embryonic fibroblasts and haematopoietic cells caused decreased levels of mature miRNAs [61,90,91]. Ago2-deficient oocytes were observed to develop into mature oocytes with abnormal spindles whose chromosomes were not able to align properly, with miRNA expression levels reduced by more than 80%. Loss of Ago2 function leads to embryo death around embryonic day 9.5 in mouse [92].
Expression of miRNAs in preimplantation embryos
The expression of miRNAs in preimplantation embryos has been mainly studied by knock-out experiments, by cloning experiments and by identifying individual miRNAs by microarray analysis and real-time polymerase chain reaction [93]. The expression studies have been carried out using animal models and tissues; cultured cells, that is, cancer cells and human embryonic stem cells; and mouse/bovine/human gametes and embryos. Human embryonic stem cells, which are derived from the inner cell mass of an embryo at the blastocyst stage and are characterised by their capacity for self-renewal and multipotency, are key in gene expression research, since access to human embryos is difficult and these cells are one of the closest representations of human embryos. Studying miRNA expression in stem cells not only gives insight into potential miRNAs expressed in human embryos but also may show the important role of miRNAs in stem cell functioning [94].
MiRNA expression has been observed as early as oogenesis and spermatogenesis in mouse, bovine and human [95,96]. Differences in the miRNA expression have been observed between immature and mature oocytes that may represent the natural turnover and indicate that each embryonic stage is defined by a specific miRNA. Similar miRNA expression profiles in mature mouse oocytes and early developing embryos indicate that at these stages the zygote has maternally inherited miRNAs [50]. Similar to oocyte, sperm carries a range of miRNAs. Approximately 20% of these miRNAs are located in the nuclear or perinuclear part of the sperm indicating that these miRNAs are transferred to the zygote at the time of fertilisation [97]. It was suggested that the sperm-borne miRNAs may down-regulate the maternal transcripts in mammals. However, when this hypothesis was tested using microarray analysis, it was shown that none of these miRNAs in the sperm have significant importance since all of them were already present in the oocytes (meiosis II) [98].
Multiple miRNAs were involved in the formation of germ cell layers. MiR-290, which was expressed at different levels during preimplantation development of mouse embryos, had a negative effect on germ cell and mesoderm differentiation in mouse ES cells via targeting Nodal inhibitors [99]. In zebrafish, however, the miR-290 cluster played an important role in regulating mesoderm induction [100]. Therefore, it is not clear if miR-290 has an inhibitory effect on mesoderm differentiation. Other miRNAs have been shown to have an effect on mesoderm differentiation in zebrafish, such as miR-15 and miR-16 [100], which were also expressed in mouse preimplantation embryos [50].
Mainly, the same miRNAs are expressed during the cleavage divisions of the embryo in mouse and bovine. However, their expression levels often vary during these stages. In murine embryos, the level of miRNA expression is reduced by as much as 60% between the one- and two-cell stages. At the end of the four-cell stage, mouse embryos have approximately twice as much miRNA as the two-cell stage embryo. This implies that the maternally inherited miRNAs degrade at this stage and the EGA starts between the one-cell and four-cell stages [50]. Even though the synthesis and degradation of miRNAs coexist during preimplantation embryo development in mice, the overall miRNA expression increased towards the blastocyst stage [101].
Gene expression and long noncoding RNAs
In the last few years, in addition to short noncoding RNAs, lncRNAs have gained importance for their roles in affecting gene expression. The mammalian genomes contain long intergenic noncoding RNAs (lincRNAs) that have been suggested to play a role in the regulation of pluripotency during preimplantation embryo development [110]. Human pluripotency transcripts 2, 3 and 5 (HPAT2, HPAT3 and HPAT5) were reported to modulate pluripotency and ICM formation in preimplantation embryos. Furthermore, HPAT5 was shown to interact with the let-7 family of miRNAs [110]. Implantation of embryos involves complex mechanisms, and many different genetic and physiological factors are involved in the process. The developing preimplantation embryo must have a well-coordinated interaction with the maternal uterine endometrium. LncRNAs were shown to be differentially expressed in endometrial tissues obtained from pregnant and non-pregnant pigs, with two lncRNAs, TCONS_01729386 and TCONS_01325501, having potential roles in implantation [111].
Gene expression and assisted reproductive technologies
In the Western world, approximately 1% of children are born with assisted reproductive technology (ART) treatments. These treatments offer infertile couples the best chance to conceive a child. Although these techniques have been considered safe in terms of foetal and post-natal development [112,113], there is an increased risk for morbidities, especially imprinting disorders [114]. Furthermore, global gene expression profiles vary due to in vitro culture of zygotes [115,116] and in vitro fertilisation processes [117]. Following in vitro culture, apoptotic and morphogenetic pathways have been shown to be altered [118].
Intra-cytoplasmic sperm injection (ICSI), one of the widely used ART techniques, offers infertile couples with sperm motility problems a great chance to have a baby. ICSI is a unique process in which the sperm is injected into the ooplasm [119]. However, ICSI bypasses a number of physiological processes that would normally take place. Embryos derived from ICSI were shown to cleave at a slower rate. Furthermore, fewer of these embryos hatch, they contain fewer cells, and their calcium oscillations are shorter with different patterns [120]. Mice generated by ICSI were shown to be obese and to have organ anomalies [121].
Conclusion
Normal development of preimplantation embryos involves complex mechanisms. For a normal developing embryo, the expression of both maternal and paternal genes is required. Several factors are involved in the regulation of parental genes in preimplantation embryos. Epigenetic modifications are among the most important factors involved in the regulation of gene expression in preimplantation embryos. Extensive research has been performed throughout the years to establish the methylation profiles of mammalian gametes and embryos. In more recent years, the importance of noncoding RNAs in the regulation of genes has become clear. A handful of studies have analysed the expression of microRNAs, which have been shown to regulate up to 30% of human protein-coding genes. The expression of miRNAs has been observed in mouse, bovine and human gametes and embryos. | 2018-03-26T22:03:46.171Z | 2017-09-06T00:00:00.000 | {
"year": 2017,
"sha1": "338d26ca533cfb91c2dde092b6b508905af3c733",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/54686",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "99f0e0d34660dd8fbe4958ad21ce6c83686de7ac",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
69782903 | pes2o/s2orc | v3-fos-license | An Automated Deployment Engine Based on a PaaS Cloud Platform of the Micro-service Architecture System
In view of the deployment management problems caused by service atomization, architecture complexity and large-scale clusters in micro-service architecture systems, a PaaS cloud platform automated deployment engine is designed and implemented based on a Docker PaaS platform, so as to provide a simple, flexible, efficient, full-stack, full-process deployment solution for micro-service architecture systems.
Introduction
In order to cope with the changing needs of users and the rapid growth of user scale, information systems need the ability of rapid deployment, reliable operation and flexible expansion. With the development of cloud computing technology, the idea of "decentralization" has been adopted in the micro-service architecture, which advocates dividing a single application into a group of small services. The services coordinate and cooperate with each other to cope with the rapid changes that a traditional monolithic (single-block) architecture cannot adapt to.
Micro-service architecture distributes system functions among discrete services, which are more atomic and autonomous. High-density deployment is adopted to support on-demand expansion. However, while bringing benefits, micro-services are becoming more and more difficult to deploy and manage due to their complex architecture and large-scale clusters. The emergence of Docker-based PaaS platforms makes it possible to build a simple, flexible, efficient, full-stack, full-process deployment solution for micro-service architecture systems that solves the above problems. This paper proposes a PaaS cloud platform deployment engine that addresses the deployment problems of complex micro-service architecture systems in an automated manner. Traditional application systems generally adopt either the client/server (C/S) architecture or the browser/server (B/S) architecture. In either case, in order to ensure the reliability and performance of the application system, the service is generally deployed on multiple servers and these servers are built into service clusters. Access requests from all clients must pass through a load-balancing device to reach the service cluster. The services of an application system generally use databases and files as their data support. Also for reliability and performance considerations, database servers and file servers are built into clusters to provide data support for services, but the data itself is generally not stored directly on the database servers and file servers; it is stored on a dedicated centralized storage device.
At present, cloud platform-based application system deployment is divided into client, business service, data, resource and platform services. The deployment mode is shown in Diagram 1. The cloud platform-based application system is characterized by following a micro-service architecture and splitting logically complex business services into a large number of functionally simple micro-services and data storage objects; the application system is defined by describing the dependencies between these micro-services and data storage objects. All micro-services that make up the application are deployed on the compute nodes in the form of service containers. For reliability and performance considerations, each micro-service needs to be registered with the service gateway, and one or more service containers are mounted under the service gateway. Cloud partitions, cloud caches, and cloud databases supported by the data storage services provide data support for business services. The cloud platform virtualizes resources such as computing, storage, and network into resource pools, and provides support for the deployment of business services and the creation of data storage objects such as cloud partitions, cloud caches, and cloud databases. The cloud platform also provides a series of platform services, such as a deployment center and service discovery, to provide unified deployment and service calls for business systems.
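To make the deployment model above concrete, the following Python sketch shows one possible shape for such an application descriptor; the class layout, image names and replica counts are illustrative assumptions, not the paper's actual engine.

# Illustrative application descriptor: the application is defined purely by
# its micro-services, data storage objects, and the dependencies between them.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    image: str                     # container image implementing the micro-service
    replicas: int = 2              # service containers mounted under the gateway
    depends_on: list = field(default_factory=list)  # services / storage objects

@dataclass
class StorageObject:
    name: str
    kind: str                      # "cloud_database" | "cloud_cache" | "cloud_partition"

app = {
    "storage": [StorageObject("orders-db", "cloud_database"),
                StorageObject("session-cache", "cloud_cache")],
    "services": [
        Service("order-svc", "registry/order:1.0", depends_on=["orders-db"]),
        Service("web-svc", "registry/web:1.0",
                depends_on=["order-svc", "session-cache"]),
    ],
}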
Dependencies of Micro-service Architecture System Deployment
Micro-service architecture system deployment needs to deal with three kinds of dependencies: software dependencies, topology dependencies, and resource dependencies; a combined sketch of how these might be resolved follows this list.
(1) Software dependencies
The software of a PaaS cloud platform node usually consists of a multi-layer software stack, and the installation of the upper-layer software depends on the installation of the underlying software. The entire software stack can be divided into four layers from bottom to top: the operating system, the platform software, the technical framework, and the application. The operating system is usually a Linux-like operating system; the platform software is usually a database, a web server, a cache service, etc.; the technical framework is attached to the platform software, providing a framework or class library that offers language-specific software functions, such as SSH (Struts + Spring + Hibernate); the application is a program or software grouping that implements business logic and provides business-related functionality.
(2) Topology dependencies. Large-scale complex information systems based on micro-service architecture often involve multiple clusters. Each cluster is composed of several nodes or sub-clusters, and the nodes communicate with each other through service interfaces. System deployment needs to establish the topology between system services so as to handle the relationships between nodes, between nodes and clusters, and between clusters. The relationship between two nodes is generally a communication relationship: the source node establishes a communication connection with the target node through a service port. When the platform is deployed, the service endpoint of the target node (IP, port, URL, etc.) must be configured on the source node, and the Layer 3 link between the source node and the target node must be reachable. The relationship between a node and a cluster is usually a dependency, including network dependencies and node dependencies. A network dependency means that the nodes of a cluster belong to the same subnet; a node dependency means that a node of the cluster is managed or controlled by a master node of the cluster. The cluster must be deployed according to these network and master-node dependencies. At the same time, the nodes of the entire cluster can be batch-operated through the dependencies, for example, batch deployment and batch restart.
(3) Resource dependencies
Cloud platforms are typically built on shared cloud resource pools, and application deployments need to estimate the platform's resource requirements. Virtual machine, virtual storage, and virtual network resources with appropriate specifications and quotas are then requested from the cloud resource pools. For the PaaS platform, deployment of a micro-service architecture system also needs to estimate and request platform service resources such as cloud databases, application server clusters, and cloud caches from the cloud resource pools.
3 Requirement Analysis and Architecture Design of Automated Deployment Engine for Micro-service Architecture System
Deployment Requirements of Micro-service Architecture System
The automated deployment of a micro-service architecture system aims to resolve the system's dependencies as automatically as possible during the deployment process. The deployment engine automatically completes the deployment and distribution of the micro-service system, realizing the allocation, installation, and deployment of resources, sub-clusters, nodes, and software, so that the application package submitted by developers is loaded onto the application nodes of the PaaS platform and the micro-service system cluster ultimately runs correctly.
(1) Automatically processing software dependencies of container nodes. The installation, deployment, and startup of the software at each layer of the software stack can be completed automatically according to the hierarchical structure of the stack, and the installation steps of identical or similar layers can be reused.
(2) Automatically processing topology dependencies of the micro-service system. It should be convenient to establish a topology model of the micro-service architecture system that describes the affiliation, communication, and dependency relationships among clusters, sub-clusters, and platform nodes; to recursively complete the deployment of the entire cluster according to the affiliation and dependency relationships; to automatically complete the related node and network parameter configuration according to the communication relationships; and to reuse cluster topologies and deployment processes with the same structure.
(3) Automatically processing cloud resource dependencies. The computing, storage, and network resource dependencies in the cloud platform resource pools can be resolved automatically according to resource requirements, including dependencies on container clusters, cloud partitions, and distributed file systems; the creation of basic resources and the configuration of related parameters are completed according to the dependency relationships. The creation and deployment processes of cloud resources with the same requirements can be reused.
(4) Improving the deployment efficiency of the micro-service system on the PaaS cloud platform. This requires reducing the workload of manual deployment, compressing the node software stack to improve single-node deployment efficiency, and increasing the parallelism of platform node deployment to improve the deployment efficiency of the entire micro-service architecture system cluster.
Automated Deployment Engine Design of the Micro-service Architecture System
The automated deployment engine of the micro-service architecture system is built on an open source Docker container platform and scheduling system, a distributed file system, cloud caches, cloud databases, and other cloud infrastructure. It comprises the PaaS platform management portal, the PaaS automated deployment engine, container clusters, the cloud repository, the deployment center, and other components; the system architecture is shown in Figure 2. The cloud infrastructure services provide computing, storage, and network resource application, configuration, scheduling, and destruction services for application clusters in the form of APIs. The cloud repository stores the container images, such as the operating system, basic software, technical framework, and application software, required by the application cluster. The deployment center stores the deployment parameters, domain names, port numbers, and data source addresses required for deployment. The PaaS automated deployment engine provides the core functions of automated deployment, such as resolving the micro-service architecture system deployment model, executing the deployment process, resource control, and elastic scheduling. Container cluster nodes load and run the application software stack via a pre-installed Docker engine. The PaaS automated deployment engine implements the application cluster deployment process through a process engine and a process orchestration framework, and calls the management interfaces of the cloud infrastructure through a management agent to complete the deployment of the entire container cluster. An application system with micro-service architecture is composed of business micro-services, so deployment templates must be defined both for the application system and for the business micro-services. The business micro-service deployment template is shown in Figure 3. It includes declarations for image labels, cloud partitions, cloud caches, data sources, ports, and resource requirements. The resource requirements item is the only dynamic item in the business service template; it must be specified by the user at deployment time. The other items are static items whose content is determined when the template is authored, although some static items may be arranged and adjusted at deployment time according to the resource requirements. The resource requirements item states the set of deployment device architecture options and the service characteristics the user expects, and its content is used as the allocation basis for the other items (except the port item) during deployment. The image label item declares the architecture types the business service supports and the image identifier corresponding to each architecture type; the deployment architecture type, the amount of computing resources, and the corresponding image ID are determined with reference to the resource requirements during deployment and are used when calling the container management service to create containers. The cloud partition item declares all the cloud partition paths used by the business service; these paths are built on the same cloud partition and are used when calling the distributed file service to create or mount the cloud partition.
If the cloud partition item is empty, the business service does not need a cloud partition. The cloud cache item declares the computing resources needed by the business service's cloud cache, such as the number of CPU cores and memory capacity, and is used when calling the cloud cache management service to create or mount the cloud cache; if it is empty, the business service does not need a cloud cache. The data source item declares all the data source information used by the business service and is used when calling the cloud database management service to create or mount the cloud database; if it is empty, the business service does not need a cloud database. The port item declares all the service ports the business service exposes externally and is used when calling the gateway to register the service cluster and the domain name service to register the domain name.
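To make the template structure concrete, the sketch below renders the items just described as a Python data structure. Every field name and value here is a hypothetical illustration of the paper's description, not the platform's actual template schema.

```python
# Hypothetical rendering of a business micro-service deployment template.
# All names and values are illustrative, not the platform's real schema.
order_service_template = {
    "image_labels": {                      # supported architectures -> image IDs
        "x86_64": "registry/order-service:1.4-amd64",
        "aarch64": "registry/order-service:1.4-arm64",
    },
    "cloud_partitions": ["/data/orders", "/data/invoices"],  # [] = none needed
    "cloud_cache": {"cpu_cores": 2, "memory_gb": 4},         # None = none needed
    "data_sources": [{"jndi": "jdbc/orders", "engine": "mysql"}],
    "ports": [{"name": "http", "port": 8080}],
    "resource_requirements": None,  # the only dynamic item: set by the user at deploy time
}
```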
The application system deployment template based on the micro-service architecture is shown in Figure 4. Application deployment templates include declarations for business services, shared cloud partitions, shared cloud caches, shared data sources, security domains, and resource requirements. As before, the resource requirements item is the only dynamic item and must be specified by the user at deployment time; the other items are static, although their content may be adjusted during deployment according to the resource requirements. The resource requirements item declares the set of deployment device architecture options, the service characteristics, the isolation requirements, and so on that the user expects, and its content is used as the allocation basis for the other items during deployment. The business service item declares the identifier and version number of every business service in the application system; together these uniquely identify a business service deployment template, and at deployment time the resource requirements of each business service are finalized with reference to the application's resource requirements. The shared cloud partition item declares the shared cloud partition groups used by multiple business services: each group represents one shared cloud partition, and the business service identifiers in the group indicate which services use it. If the item is empty, the application system has no shared cloud partition. The shared cloud cache item likewise declares shared cloud cache groups, each representing one shared cloud cache used by the listed business services; if it is empty, the application system has no shared cloud cache. The shared data source item declares shared cloud database groups: each group represents one shared cloud database, and each business service identifier together with a JNDI name represents a JNDI used by that service under the shared cloud database. If the item is empty, the application system has no shared cloud database.
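The application-level template can be sketched the same way. Again, the structure is only a hypothetical reading of the items above, with grouping used to express which services share a partition, cache, or database.

```python
# Hypothetical application system deployment template.
app_template = {
    "business_services": [                 # identifier + version pin down each
        {"id": "order-service", "version": "1.4"},   # business service template
        {"id": "user-service", "version": "2.0"},
    ],
    "shared_cloud_partitions": [           # one group per shared partition
        {"services": ["order-service", "user-service"]},
    ],
    "shared_cloud_caches": [],             # [] = no shared cache
    "shared_data_sources": [               # one group per shared cloud database
        {"members": [{"service": "order-service", "jndi": "jdbc/appdb"},
                     {"service": "user-service", "jndi": "jdbc/appdb"}]},
    ],
    "security_domains": ["internal"],
    "resource_requirements": None,         # dynamic; set by the user at deploy time
}
```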
Process of system deployment
A deployment on the PaaS cloud platform generally requires several major steps: allocating resources, creating data storage objects, deploying service containers, registering service clusters, and registering domain names. The specific deployment process is shown in Figure 5. First, the user sets the deployment parameters, and the cloud platform reads the corresponding deployment template from the cloud repository and performs an environment verification for the deployment based on the deployment parameters, the declarations in the template, and the currently available computing resources and data storage objects. If the verification passes, the deployment continues; otherwise it is aborted. Second, according to the verification result and the deployment parameters, the deployment information in the template is allocated; if an allocated data storage object does not exist it is created, otherwise it is mounted. Third, resources are requested from the resource scheduling service, service containers are created, and the related service deployment information, service clusters, and domain names are registered. Finally, security domains and isolation measures are set to complete security isolation.
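A minimal sketch of this five-step flow follows. The service objects (`repo`, `resources`, `registry`, `security`) and their methods are hypothetical stand-ins for the platform interfaces described above, not a real SDK.

```python
# Hypothetical skeleton of the general PaaS deployment flow described above.
def deploy(template_id, params, repo, resources, registry, security):
    template = repo.read_template(template_id)

    # Step 1: environment verification; abort the deployment on failure.
    if not resources.verify(template, params):
        raise RuntimeError("environment verification failed; deployment aborted")

    # Step 2: allocate deployment info; create missing storage objects, mount existing.
    for obj in template.storage_objects(params):
        (resources.find(obj) or resources.create(obj)).mount()

    # Step 3: apply for resources and create the service containers.
    containers = [resources.create_container(svc) for svc in template.services(params)]

    # Step 4: register deployment info, service clusters, and domain names.
    registry.register(containers, template.ports(), template.domains())

    # Step 5: set security domains and isolation measures.
    security.isolate(template.security_domains())
```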
Following these general steps, the micro-service deployment engine performs the deployment as shown in Figure 6.

Figure 6. Deployment process of micro-services
First, the micro-service deployment engine reads the business service deployment template from the cloud repository and parses it, submitting the parsed deployment data items to the resource orchestration template; at the same time, the business service computing resource requirements submitted by the user or by the application system deployment engine are submitted to the resource allocation template. Second, the resource orchestration template queries the currently available computing resources from the resource scheduling service and generates the computing resource allocation scheme for this deployment according to the adapted architecture types declared in the deployment template, the service characteristics and resource amounts declared in the computing resource requirements, and the currently available resources. The resource allocation template also obtains the existing cloud storage objects from the cloud storage management service and generates the cloud storage orchestration scheme for the deployment according to the cloud storage identifiers declared in the resource requirements. The cloud storage orchestration scheme is submitted to the cloud storage allocation template, which invokes the corresponding cloud storage management service to create or mount cloud storage objects; the computing resource allocation scheme is submitted to the container scheduling template to create the service containers. The deployment registration template then registers the service container deployment information, service cluster information, and domain name information with the related platform services (for example, the deployment center, gateway, and domain name services). Finally, the security isolation template calls the cloud security service to set up isolation measures according to the deployment's security requirements.

Figure 7. Deployment process of application system

The deployment of the application system is shown in Figure 7 and is triggered by the user. The process is similar to micro-service deployment; after the cloud storage resources are allocated to the system, the business service deployment engine is invoked to deploy the business micro-services.
Conclusions
The automated deployment engine for PaaS cloud platform systems, built on Docker container and micro-service architecture technology, achieves a good balance between the deployment efficiency and the deployment flexibility of micro-service architecture systems. By supporting arbitrary node software stacks through Docker containers and complex cluster architectures through customized micro-service deployment models, it supports full-stack, full-process automated deployment of large-scale complex micro-service systems and helps the PaaS cloud platform respond to changing user demands and rapid growth in user scale.
"year": 2018,
"sha1": "df0165fa99c060d4d088df2ec2bd0dc30afb74ca",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/86/matecconf_icct2018_02020.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a9574250b558c6ef8d240c8bd5887a928f0dd09a",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
STUDENT PERFORMANCE IN CURRICULA CENTERED ON SIMULATION-BASED INFERENCE
Using simulation-based inference (SBI), such as randomization tests, as the primary vehicle for introducing students to the logic and scope of statistical inference has been advocated with the potential of improving student understanding of statistical inference and the statistical investigative process as a whole. Moving beyond the individual class activity, entirely revised introductory statistics curricula centering on these ideas have been developed and tested. Preliminary assessment data have been largely positive. In this paper, we discuss three years of cross-institutional tertiary-level data from the United States comparing SBI-focused curricula and non-SBI curricula (86 distinct institutions). We examined several pre/post measures of conceptual understanding in the introductory algebra-based course using multi-level modelling to incorporate student-level, instructor-level, and institutional-level covariates. We found that pre-course student characteristics (e.g., prior knowledge) were the strongest predictors of student learning, but also that textbook choice can still have a meaningful impact on student understanding of key statistical concepts. In particular, textbook choice was the strongest “modifiable” predictor of student outcomes of those examined, with simulation-based inference texts yielding the largest changes in student learning outcomes. Further research is needed to elucidate the particular aspects of SBI curricula that contribute to observed student learning gains.
Introduction
Spurred on by George Cobb's 2005 USCOTS talk and subsequent article (2007), several groups have been developing full high school and college-level introductory statistics curricula that put tactile and technology-based simulations at the heart of helping students learn about inference, often early in the course (e.g., Lock et al. 2013; Tabor and Franklin 2013; Diez, Barr, and Çetinkaya-Rundel 2014; Forbes et al. 2014; Tintle et al. 2015; Zieffler et al. 2015). These approaches focus not on the use of computer models to help students visualize statistical concepts (e.g., see Mills 2002 for a review) or on simulation-based learning (e.g., Novak 2014), but on a change in both content and pedagogy. These changes are driven by the ability to carry out standard inferential analyses (p-values and confidence intervals) through simulation rather than relying only on methods centering on the normal distribution. This also naturally facilitates a more active learning environment for the students. For example, the Tintle et al. curriculum (ISI, Introduction to Statistical Investigations) uses a coin-tossing model in week 1 of the course (with physical coins and then the computer) to introduce the logic of statistical significance before moving on to more traditional analyses. Introducing students to inferential reasoning through simulations and randomization tests is also part of the Common Core State Standards in Mathematics (www.corestandards.org/Math).
Although there has been anecdotal and statistical evidence of the effectiveness of this approach (e.g., Tintle et al. 2011; Tintle et al. 2012; Budgett and Wild 2014; Pfannkuch and Budgett 2014; Reaburn 2014; Stephens, Carver, and McCormack 2014; Zieffler et al. 2014; Maurer and Lock 2015), especially for lower performing students (Tintle et al. 2014), more research is needed. As part of a recent NSF grant, we have been providing workshops and support to teachers who wanted to start implementing such an approach. In this article, we examine data from the 2013/2014 academic year on several pre-/post-measures of student attitudes and conceptual understanding across a broad range of instructors at different institutions. For the broader research study, our goals are to start exploring:
1. Do gains in students' conceptual understanding substantially differ across curricula? In which topics do we see the strongest and weakest performance?
2. Are gains seen by instructors with more experience with a simulation-based curriculum as evident with instructors who are teaching such an approach for the first time?
3. Can we characterize certain instructional experiences, institutional differences, or student backgrounds with higher or lower improvement?
4. How does student understanding of inference develop through repeated exposures during the course?
5. Are students able to transfer their knowledge of statistical inference to novel situations?
This article focuses mostly on goals 2 and 3. In particular, we explore the feasibility of using cluster analysis and hierarchical linear models with cross-institutional assessment data. We recruited the authors and class-testers of the ISI curriculum at several institutions to give pre-/post-tests to their students to assess the robustness of the curriculum. We briefly address goal 1, but due to limitations with the 2013/14 data, we focus mostly on different implementations of a single curriculum. (Elsewhere, we focus on changes in student attitudes, progression of student understanding through embedded exam items, and student performance on a high-level transfer question.) This is a preliminary report from our first year of data collection; it is important to remember the observational and preliminary nature of these data. We conclude with suggestions for future research and next steps in facilitating such research.
The Curriculum
"Simulation-based inference" has been used to describe the use of methods such as bootstrapping and randomization tests in introducing students to the logic of statistical inference. In our own curriculum we begin each chapter, including the first day of the course, with tactile and computer simulations of chance models. Subsequent topics are motivated by a six-step statistical investigation method (research question, data collection, data exploration, draw inferences, formulate conclusions, and looking back and forward) and how each step changes with different data structures (e.g., one sample, two samples, multiple samples). To draw inferences, students simulate a chance model to approximate a p-value or create a confidence interval . These simulations are performed using student-focused javascript applets from the Rossman/ Chance collection through in-class, lab, and out-of-class exercises, varying by instructor. Analysis methods based on the Central Limit Theorem (e.g., z-procedures and t-procedures) are then discussed as long-run approximations to the simulation results (e.g., the applets allow the user to overlay the theoretical distribution for direct comparison). This approach moves beyond introducing students to sampling distributions through simulation, but considers simulation and randomization tests as the primary tool for carrying out inferential analyses throughout the course. Now that textbooks exist that fully implement this approach for undergraduate introductory statistics courses, we need to be examining more data on whether and how students' attitudes and conceptual understanding are impacted by this approach. As part of an NSF grant, we are conducting professional development workshops and recruiting individuals, using and not using such a curriculum, to administer pre and post assessment tools. This development has been aided by advice from an advisory board established as part of the NSF grant (our Randomization Based Curriculum Developers, RBCD group) that has reviewed items and discussed results for validity.
Participants
Instructors were invited during Fall 2013 and Winter 2014 to participate in our assessment plan. This included instructors who helped develop the ISI curriculum, instructors who had been using simulation-based materials for several years, and instructors who were brand new to the ISI curriculum. Many instructors in the latter group participated in short professional development workshops (1 to 4 days) offered by the ISI author team and other developers of simulation-based curricula through the NSF grant and the Consortium for the Advancement of Undergraduate Statistics Education (CAUSE). Instructors were asked to submit a survey detailing how they taught the course (e.g., number of weeks, percentage of time spent on student-led experiences), though these data were incomplete. Most instructors administered both the Survey of Attitudes Towards Statistics (SATS) instrument (Schau 2003) and a concept-based inventory as pre-tests at the beginning of the course and as post-tests toward the end of the course (some as review, some embedded in the final exam). Our 30-question concept inventory (see Section 3.2) was an instrument we developed by using, adapting, and extending items from CAOS and GOALS (Comprehensive Assessment of Outcomes in a first Statistics course; Goals and Outcomes Associated with Learning Statistics; e.g., delMas et al. 2007; Sabbag and Zieffler 2015). In addition to these pre-/post-questions, we developed some multiple choice questions that focused on particular areas, such as student understanding of strength of evidence. The SATS instrument also includes some demographic data on the students (whether or not the course was required, GPA, major, grade level, number of previous high school or college math/stat courses, type of degree sought, and age). Instructors were offered a small stipend for participating in the assessment program.
Concept Inventory
Our concept inventory was a modified version of the CAOS test (similar modifications were also being made resulting in the GOALS assessment). As noted in Appendix A in the online supplemental information, some questions are the same, some questions were slightly modified in context or wording, and some questions were expanded or contracted (a multiple choice vs. valid/invalid options for separate statements). Based on student performance cited by Tintle et al. (2011) as well as Fall 2012 pilot testing, we made these modifications and deleted questions that we did not feel were as discriminating of student performance (e.g., students appear to have a strong understanding of reading a scatterplot when they enter the course; students consistently struggled on an item pre-and post-test).
We also added a few items based on the following considerations:
1. Are students using a simulation-based curriculum more likely to state that a large p-value is evidence in favor of the null hypothesis? (Q17)
2. Can students evaluate the strength of evidence from a study with a small p-value but also a small sample size? (Q16)
3. Can students find convincing evidence of an extreme statistic even with a small sample size? (Q35)
4. Can students compare the strength of evidence between two studies with the same statistic but different sample sizes? (Q36)
5. Do students realize that a sample size does not need to be excessively large in order to be considered representative of the U.S. population? (Q19)
The items and field-testing results from over 500 students in Fall 2012 were shared with the RBCD advisors before final adjustments were made. The classification of the items was very similar to that in Tintle et al. (2011), with one graphing question and one question on identifying appropriate conditional proportions for comparison moved to descriptive statistics, and the simulation and sampling variability questions grouped together. This gave us at least three questions in each area: Descriptive Statistics (9), Data Collection (4), Confidence Intervals (5), Tests of Significance (9), and Sampling Variability/Simulation (3). We discuss details of student performance on these components in Section 4.3.
The Sample
Through our workshops and conference presentations, we recruited 40 instructors to participate in our assessment plan during Fall 2013–Spring 2014, with some instructors using the instruments in both fall and spring. Instructors varied in their implementation of the attitude and concept instruments (in the 2013/2014 implementation these were offered as separate instruments), particularly with respect to the level of incentives provided to students. For example, some instructors offered extra credit or homework or quiz credit for participation, others offered none, and some embedded the post-course concept questions in the final exam. In all but the last case, students were given the option of opting out of completing the questions while still receiving course credit.
We established minimum completion times as an exclusion criterion: if a student spent less than 3 min on the attitudes pre-survey or less than 10 min on the concept inventory (or opted out of either), those observations were removed. If a student's time data were missing, we instead required that the student responded to at least 90% of the questions on the instrument. Then, if the response rate in a section was below 40%, we removed that section from our analysis. Using these criteria, we created two datasets that were used at various points of the analysis (see Table 1). We use the first dataset to focus on student and instructor characteristics entering the course and the second dataset to focus on student gains on the concept inventory.
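As a concrete sketch, the exclusion rules might be applied to the raw response data roughly as follows. The column names (`minutes_spent`, `opted_out`, `pct_answered`, `section`, `enrolled`) are hypothetical, and the 10-minute threshold shown is the concept-inventory one.

```python
import pandas as pd

def apply_exclusions(df: pd.DataFrame, min_minutes: float = 10.0) -> pd.DataFrame:
    """Drop students and then sections per the criteria above (hypothetical columns)."""
    has_time = df["minutes_spent"].notna()
    keep = ~df["opted_out"] & (
        (has_time & (df["minutes_spent"] >= min_minutes))   # too-quick responders removed
        | (~has_time & (df["pct_answered"] >= 0.90))        # fallback when time is missing
    )
    df = df[keep]
    # Remove sections whose participation rate fell below 40%.
    rate = df.groupby("section").size() / df.groupby("section")["enrolled"].first()
    return df[df["section"].isin(rate[rate >= 0.40].index)]
```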
For the Baseline Data, we ended up with 20 distinct instructors in the fall. In the spring, 13 of those instructors participated a second time, plus 17 new instructors. This gave us 37 distinct instructors and 50 "instructor-terms" or "sections." This included four high school teachers and two community college teachers. The rest were four-year college (25) or research university (6) instructors. One of the high school sections was a "dual enrollment" course allowing immediate credit at a neighboring college.
For the Gains Data, we ended up with 15 distinct instructors in the fall. In the spring, 12 of these instructors participated a second time plus 9 new instructors. This gave us 24 distinct instructors and 36 "instructor-terms" or "sections." This included three of the high school teachers, one of the community college sections, 16 four-year college instructors, three university instructors, and the dual-enrollment section.
Though we will refer to the instructor-terms as sections for the remainder of the article, one instructor could have had multiple sections in the same term. We did not collect sufficient information to differentiate among sections within the same term but did differentiate across terms where we thought there could be more variation in implementation and experience.
Instructor and Student Characteristics
From the Gains Data, Figure 1 shows the conceptual gain (post-test minus pre-test proportion correct on the 30 concept inventory questions) for the students in each section. We also considered using a measure such as "single-student normalized gain" (e.g., Hake 1998; Meltzer 2002; Colt et al. 2011), which focuses on the percentage of potential gain achieved, but instead include pre-test scores as a predictor in the multi-level models. The overall average gain is only 0.084, but this is on par with the average gain seen on the similar CAOS test (delMas et al. 2007). The average normalized gain was 0.151, though with some large negative outliers (e.g., a student going from 73% correct on the pre-test to 37% correct on the post-test). The overall average pre-test score was 0.498 and the overall average post-test score was 0.582. We also see a considerable amount of student-to-student variability in the gains on the concept inventory, but also some section-to-section variability. One of our goals is to see whether we can account for some of that section-to-section variability in student conceptual gains.
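For reference, both gain measures are simple to compute. The sketch below (with made-up scores) shows the raw gain and the Hake-style single-student normalized gain, the latter being the fraction of the possible improvement actually achieved.

```python
import numpy as np

pre = np.array([0.40, 0.73])    # made-up pre-test proportions correct
post = np.array([0.60, 0.37])   # made-up post-test proportions correct

gain = post - pre               # raw gain (the 0.084 above is the mean of these)
norm_gain = gain / (1 - pre)    # normalized gain; undefined if pre = 1
# The second student mirrors the outlier above: gain = -0.36, norm_gain ≈ -1.33
```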
One possible explanation of the variability in the gains across sections is the level of experience of the instructor with the curriculum. In classifying the instructors by experience with the curriculum (Gains Data), we coded five as experienced instructors (e.g., author team members, some with sections both fall and spring), seven as having a "middle" level of experience (have previously used similar materials such as Introduction to Statistical Concepts, Applications, and Methods (ISCAM) more than twice), 10 as "new" instructors to the curriculum (have used the materials at most twice), and two instructors who were not using a simulation-based curriculum (e.g., Moore's Basic Practice of Statistics). One of these nonuser instructors used the assessment items in the fall and then became a new user in the spring.
The boxplots in Figure 2 illustrate that after dividing the instructors into the four experience groups (with only two nonusers), there is still considerable variability between sections in the Table 1. Datasets used for analysis based on participation rates.
Dataset 1: Baseline Data
Students who spent long enough on the pre-attitude survey and the concept inventory pretest and whose instructor had at least 40% class participation on the pre-tests 37 instructors (50 instructor terms), 1877 students Dataset 2: Gains Data Students who spent long enough on both the pre-and postconcept inventory and whose instructor had at least 40% class participation on both concept tests 24 instructors (36 instructor terms), 1116 students same category (some sections with as few as five students) and relatively much less distinction between the experience categories.
Although it is very risky to draw conclusions based on only two nonusers, there is evidence that the instructors' level of experience with the curriculum is a significant predictor of how much the students gain (p-value ≈ 0.0017). However, the R² is very small (1.3%), and Tukey multiple comparisons only detect differences between each group and the nonuser group, not among the other groups.
In an effort to further explore similarities and distinctions between sections, we performed two K-means cluster analyses: one on student characteristics and one on instructor characteristics. We wanted to see whether some classroom environments were similar enough to each other to be pooled together and whether these clustering variables would explain much variation in student gains.
Using the Baseline Data, the 13 student-level variables included age, GPA, grade level (0 = high school, 1 = lower division college, 2 = upper division college), number of previous high school math/stat classes, number of previous college math/stat classes, sex (0 = male, 1 = female), pre-concepts performance, and the six scales from the attitudes pre-test. Looking at the student averages across the sections (and seeing where the within- and between-subject sums of squares balance), we find four clusters (Table 2), described in the list below.
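A sketch of this clustering step: standardize the section-level averages and run K-means, scanning k to see where the within-cluster sum of squares levels off. The data here are random stand-ins for the real 50-section by 13-variable matrix.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
section_means = rng.normal(size=(50, 13))   # stand-in for the real section averages

X = StandardScaler().fit_transform(section_means)
inertia = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
           for k in range(1, 9)}            # inspect for an elbow in the curve
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```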
1. Cluster 1 (8 sections, 4 in Gains Data): Sections that generally had more previous high school and college mathematics courses, the highest pre-concept scores, and more positive attitudes coming into the course, including the perceived value of statistics. Higher proportion of women and upperclassmen compared to the other clusters.
2. Cluster 2 (5 sections, 4 in Gains Data): High school and community college sections with fewer previous math and statistics courses but similar pre-concept scores on average. Generally more positive attitudes coming into the course. Higher proportion of women than the other clusters.
3. Cluster 3 (29 sections, 22 in Gains Data): Sections with lower GPAs, lower division college students on average.
4. Cluster 4 (8 sections, 6 in Gains Data): Sections with generally more negative attitudes coming into the course and expecting to put in more effort. More likely to be male.
Figure 3 examines the gains on the concept inventory by the end of the course and compares those scores across these four clusters. There is some evidence of differences in the average conceptual gains between the clusters (p-value = 0.037), but with only cluster 3 significantly different (higher gains) than cluster 4 (Tukey's HSD p-value = 0.049). Less than 1% of the variation in student gains can be attributed to the student clusters. We can also consider evidence of a ceiling effect, as cluster 1 had higher concept scores to begin with (we consistently found a negative association between pre-concept score and gains in concept score, as discussed below). It is interesting to note that including the instructor's level of experience in the model is significant, but the p-value for student cluster does not change much and the interaction between these terms is weakly significant.
A similar cluster analysis was done based on instructor level characteristics. However, more than half of the instructors did not complete the instructor survey. This resulted in a usable dataset of 19 instructors across 30 sections (1212 students).
First, we separated out the high school teachers. Then, the six variables used for classifying college sections based on instructor characteristics were type of department (1 = statistics, 0 = other), tenure status (1 = tenured, 0 = untenured), years of teaching, percentage of class time that was "student-led" rather than "instructor-led" (self-reported on the instructor survey), length of term (1 = half-term, 2 = quarter, 3 = semester), and sex of instructor. Table 3 shows the results of the cluster analysis.
1. Cluster 1 (2 sections): A half-semester course taught by an experienced female instructor in a mathematics department.
2. Cluster 2 (9 sections, 5 in Gains Data): Statistics department faculty on the quarter system.
3. Cluster 3 (16 sections, 14 in Gains Data): Math department faculty teaching semester-long courses. They tend to have more years of teaching and less student-led class time.
4. Cluster 4 (10 sections, 9 in Gains Data): Similar to cluster 3 but slightly less experience and more student-led class time (though still less on average than clusters 1 and 2).
(Note on the attitude scales: lower scores on Effort imply the student does not plan to have to work as hard in the course; lower scores on Difficulty imply the student perceives the course will be difficult.)
The high school and dual-enrollment sections are considered cluster 5 (3 sections in Gains Data). Instructor sex may be a proxy for other variables (e.g., most of the instructors on the quarter system were female), but there has also been some interest in the role of instructor gender on student achievement (e.g., Friend 2006; Thomas 2006; Dee 2007; Antecol, Eren, and Ozbeklik 2012), mostly at lower grade levels. Figure 4 shows the conceptual gains for these five clusters. Overall the post-test performances in the clusters look very similar, and there are still substantial within-cluster differences among sections.
The following output examines the instructor cluster effects after adjusting for the instructors' level of experience with the curriculum. This model and the one-way ANOVA model (not shown) indicate that the clustering by instructor-variables is not useful to the model, with or without the level of experience variable.
In examining our cluster-based groupings, we find that the instructors' level of experience with the curriculum is the most significant, though we still need to be cautious given the very small number of nonuser sections in this dataset. After adjusting for the instructors' level of experience with the curriculum, we do have a marginally significant relationship with the student-level clusters. However, we do not find significant effects of the instructor clusters after adjusting for the instructors' level of experience. We also did not have a sufficient response rate on the instructor survey (for variables like student-led percentage), so subsequent analyses do not consider the instructor clusters, but only some individual instructor-level variables that we could verify independently, namely instructor sex, type of school (high school, community college, 4-year college, research university), length of term, and level of experience with the curriculum.
Hierarchical Models
Next, we explored additional models in an attempt to explain section-to-section variability in student conceptual gains. We used hierarchical modeling to include student- and instructor-level variables in the same model and to account for the correlation between students within the same section (e.g., Gelman and Hill 2006). The unconditional means model (or random-intercept model, which compares the mean gains across the 36 sections in the Gains Data) found an intraclass correlation coefficient of 0.006, implying that section-to-section variability accounts for only 0.6% of the total variability in student conceptual gains. If we remove the nonusers from the dataset, this coefficient drops to 0.002. These results suggest that it will be difficult to find variables at the section level that account for significant variability in student performance, though adjusting for other variables may still reveal some patterns. Several regression models were explored using MCMCglmm in R version 3.1.2 with the Gains Data. (We also explored lmer but had more problems with convergence; we did use lmer for factor p-values in our final models.) As mentioned previously, one of the strongest predictors of students' gain (post minus pre concept score) is the pre-concept score. We find some evidence of a negative quadratic association (see Figure 5, with separate curves for the four student-level clusters).
This negative association suggests that students who know more coming into the course tend to gain less during the course, with higher scores for student cluster 1 (more prepared, more positive pre-attitudes) and lower scores for cluster 4 (lower pre-attitudes) after adjusting for pre-concept performance. Whereas the overall post-test scores are strongly related to pre-test scores, we see a quadratic effect where the gains are larger for students with lower pre-test scores. For more analysis of the performance of lower-performing students, see Tintle et al. (2014).
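The unconditional means model and its intraclass correlation can be reproduced in outline as below. The authors fit their models with MCMCglmm and lmer in R; this is a rough Python analogue using statsmodels, run here on simulated stand-in data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "section": np.repeat(np.arange(36), 31),     # 36 sections of students
    "gain": rng.normal(0.084, 0.15, 36 * 31),    # stand-in conceptual gains
})

m0 = smf.mixedlm("gain ~ 1", data=df, groups=df["section"]).fit()
between = float(m0.cov_re.iloc[0, 0])   # section-level (random intercept) variance
within = m0.scale                       # residual student-level variance
icc = between / (between + within)      # the paper reports 0.006 for the real data
```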
To illustrate a hierarchical model, below is a model for predicting conceptual gains based on the students' pre-attitude of the effort they plan to spend on the course and the instructor sex. The Effort variable is the sum of the student's responses across four questions (e.g., "I will study hard for every statistics test"), with higher effort scores indicating the student plans to work harder in the course. So we can model the gain for student $j$ with instructor $i$ as
$$\text{gain}_{ij} = b_{0i} + b_{1i}\,\text{effort}_{ij} + \epsilon_{ij},$$
where we allow the intercepts and the slopes to vary across instructors. So, for example, we could think of the intercepts as varying based on the instructor's sex:
$$b_{0i} = \beta_{00} + \beta_{01}\,\text{instructor sex}_i + e_{0i},$$
and the slopes (the relationship between effort and gain) as also varying based on the instructor's sex:
$$b_{1i} = \beta_{10} + \beta_{11}\,\text{instructor sex}_i + e_{1i}.$$
Putting these equations together, the hierarchical or multilevel model has the following form:
$$\text{gain}_{ij} = \beta_{00} + \beta_{01}\,\text{instructor sex}_i + \beta_{10}\,\text{effort}_{ij} + \beta_{11}\,\text{instructor sex}_i \times \text{effort}_{ij} + e_{1i}\,\text{effort}_{ij} + e_{0i} + \epsilon_{ij}.$$

Figure 5. Scatterplot of conceptual gains versus pre-concept performance by student clusters.
Figure 6. Scatterplot of conceptual gains versus prior expected effort by instructor sex.

Figure 6 shows the section-to-section variability in the conceptual gains versus planned effort as well as solid "pooled" regression lines from the hierarchical model (weighted by sample size, etc.) for male and female instructors.
From the solid lines, we see that the overall association in our dataset is slightly negative for the male instructors and positive for the female instructors. This could indicate a different impact of planned effort on conceptual gains. For example, with female instructors, students who plan to work hard in the course tend to gain more in the course, but not with male instructors. It could also be an indication of other confounding relationships as well.
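In lmer-style notation this random-intercepts-and-slopes model is `gain ~ effort * sex + (effort | section)`; a rough statsmodels analogue, again on simulated stand-in data with hypothetical column names, is sketched below.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_sec, n_stu = 36, 31
df = pd.DataFrame({
    "section": np.repeat(np.arange(n_sec), n_stu),
    "instr_sex": np.repeat(rng.integers(0, 2, n_sec), n_stu),  # 0 = male, 1 = female
    "effort": rng.normal(22, 4, n_sec * n_stu),                # stand-in SATS Effort sums
    "gain": rng.normal(0.084, 0.15, n_sec * n_stu),            # stand-in gains
})

m1 = smf.mixedlm(
    "gain ~ effort * instr_sex",    # fixed effects beta_00, beta_01, beta_10, beta_11
    data=df, groups=df["section"],  # one group per instructor/section i
    re_formula="~effort",           # random intercept e_0i and random slope e_1i
).fit()
print(m1.summary())
```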
As another example, Figure 7 shows the individual and pooled relationships between gain and prior Affect scores separated by the instructors' level of experience with the curriculum. Affect is a measure of students' "feelings concerning statistics" (Schau 2003). Larger values indicate a more positive opinion of statistics (e.g., "I will like statistics").
We see that students who have a higher appreciation of statistics coming into the course tend to have higher gains with the middle and new instructors, but less so with the experienced instructors and the nonusers of the curriculum. We applied this multi-level modeling approach using four section-level variables: student cluster, instructor sex, type of school, and level of experience with the curriculum. We also included the student pre-concept scores (quadratic), as that appears to be a highly significant variable on its own, along with interaction terms between pre-concepts and instructor sex, pre-concepts and school type, pre-concepts and level of experience, and student cluster and instructor sex. After applying a backwards elimination process (using a 0.15 cut-off), the output in Figure 8 shows the signs of the coefficients and their p-values.
The significant variables in predicting student gains on the concept inventory appear to be:
1. A negative quadratic (concave up) association with pre-concept scores.
2. The negative effect of pre-concepts is smaller (flatter) for male instructors than for female instructors (Figure 9(a)), but larger for students at two-year colleges (Figure 9(b)). In other words, male and female instructors tended to see similar gains for students with low pre-test scores, but female instructors tended to see lower gains for students with high pre-test scores compared to male instructors, whereas community college students tended to see lower gains, especially among students with higher pre-test scores.
3. Higher gains for student cluster 1 (more prepared, more positive pre-attitudes), especially with male instructors; yet in the other clusters, the gains tended to be lower for male instructors (see Figure 9(c) for the unconditional boxplots).
4. The student cluster effect stems primarily from cluster 4 (lower pre-attitudes, mostly male students), with higher gains in cluster 2 (high school and community college students with less background) and cluster 1.
5. Experienced users had higher gains, with average gains decreasing with less instructor experience with the curriculum and the lowest gains on average for the nonusers.
Finally, we looked at the individual student-level variables and the three instructor-level variables. We also tested interactions between the pre-concept scores and individual attitude scales, pre-attitude scales and instructor level of experience, student age and sex, and pre-effort and instructor sex. With backwards elimination, none of these interactions were statistically significant at the 0.05 level in the final model; the closest was difficulty × instructor experience (factor p-value = 0.089). The final model, including this one interaction, is shown in Figure 10. The intraclass correlation coefficient of this model is 0.051, indicating that just 5% of the unexplained variation can be attributed to differences between sections.
The significant variables appear to be:
1. A negative quadratic (concave up) association with pre-concept scores.
2. Higher gains on average for students who had higher cognitive competence pre-test scores (believing they can learn the material).
3. Students who expected the course to be less difficult tended to have higher gains, except with the more experienced instructors, for whom expected difficulty was not really related to gains (Figure 11(a)).
4. A quadratic (concave up) relationship with GPA; students with GPAs above 3.0 are predicted to see higher gains.
5. Two-year college students who score lower on the pre-test have lower gains compared to the other institution types, but the higher performing students on the pre-test tend to have slightly lower gains on the post-test for university and high school students (Figure 11(b)).
6. Lower gains on average for instructors with less (or no) experience with the curriculum.
Item-by-Item Performance
Appendix A shows the pre- and post-test percentages for students within each type of instructor experience. If an item was similar to a CAOS item, that is noted in the table along with the CAOS normative data as reported in Tintle et al. (2011). Table 4 shows the average percentage correct on the questions (number of questions shown in parentheses) in each sub-topic on the pre-test (with standard deviations) and the average gain in the percentage correct. Coming into the course (pre-test during week 1), students performed strongest on the data collection questions. The students in simulation-based curricula showed the largest gains on the tests of significance and confidence interval questions. Several observations are worth noting:
1. (Q7) Students continue to struggle with a question asking them to choose a histogram over case-value plots as most informative for examining the shape, center, and spread of a distribution, frequently picking a symmetric shape.
2. (Q9) When students were asked a pointed question that would identify a low response rate as a concern for generalizability, the students in the simulation-based curricula performed worse on the post-test than on the pre-test.
3. (Q10–13) Small gains are seen across the confidence interval questions, more so for the simulation-based curricula.
4. (Q16) Performance on a question related to issues of power was worse on the post-test than on the pre-test.
5. (Q17) Students do exhibit a tendency to treat an insignificant result as evidence in favor of the null hypothesis.
6. (Q19) Students continue to perform poorly on a question asking them to ballpark the sample size necessary for a specified margin of error.
7. (Q20) Students in the simulation-based curricula far outperformed other students on a post-test question asking whether a larger or smaller p-value is desirable.
8. (Q21–23) But performance is more inconsistent when students are asked to identify incorrect p-value interpretations; just under 50% consider an interpretation of the p-value as the probability of the alternative to be valid.
9. (Q24–26) Students did not perform well on questions asking them to match a histogram with a verbal description of a variable.
10. (Q31) On the post-test, less than half could identify a correct description of a simulation (and invalidate others), though this was still a marked improvement from the pre-test.
11. (Q35) When asked to evaluate a result of 12 successes in 14 attempts, students were evenly split among "random chance," "some evidence," and "strong evidence" against random chance.
The Flipper
One instructor in our dataset used the assessments in Fall 2013 and then again in Winter 2014, after switching to the simulation-based curriculum. Looking at the concept inventory as a whole (Figure 12), this instructor saw significantly higher gains in the second semester. The gains (not shown) appeared to be highest for the data collection and confidence interval questions.
Discussion
This study has provided an example of using hierarchical models to explore the relationship of various variables on student achievement in an introductory statistics course across multiple institutions. The preliminary analysis is consistent with earlier evidence on the potential of centering the course on simulation-based inference, but also raises questions and suggestions for future research, particularly about the relative impact of student-level and instructor-level variables on student performance across different curricula. This study has also revealed several areas for improvement, both in how we are assessing the students and also identifying the content students are finding most difficult.
After adjusting for pre-attitude measures, pre-concept score, and a few other student-and instructor-level variables, we do find evidence that students in a nonsimulation-based curriculum tended to achieve lower gains on our concept inventory than students in such a curriculum. However, we must keep in mind the limited number of nonuser sections in this dataset. We also find some evidence of higher gains for students with more experienced instructors, but those effects are smaller and the "middle" experience instructors are similar to the "new" instructors. This provides some evidence of the robustness of the curriculum to instructors trying it for the first time. Instructors willing to switch to a simulation-based curriculum should immediately see similar gains as more experienced instructors.
The most significant predictors of student gains are the students' pre-test scores, with students scoring lower on the pre-test achieving higher gains on average, and student GPA coming into the course. Of the pre-attitudes, prior beliefs about cognitive competence and difficulty seemed to be the better predictors, with similar coefficients. The higher gains for students with higher GPAs and more positive attitudes entering the course are consistent with student clusters 1 and 2 achieving higher gains on average. Students who enter the course expecting it to be more difficult and expecting to put in a lot of effort tend not to perform as well. Instructors may be well served to discuss expectations and possibilities with students at the beginning of the course. Although these data point to potential impacts of institutional differences, instructor pedagogical choices, and other student-level variables, including prior attitudes, our next steps focus on obtaining additional and more diverse data to further comment on these and other factors (e.g., a priori quantitative major, race, etc.).
Student-level variables appear to have more impact than the individual instructor-level variables that we had available in this initial dataset. However, there is some evidence that the impact of these variables differs between male and female instructors. Some of the interactions we observed may be proxies for other things (e.g., most of the Statistics Department instructors in our dataset were female, on the quarter system, and among the more experienced users of the curriculum). More data are needed to be able to separate such confounding variables. Similarly, we are not able to distinguish between the change in sequencing of ideas, focus on simulation-based content, and the inevitable active learning pedagogy from heavier use of simulations. However, it appears that these potential interactions merit further investigation and that multi-level modeling appears a feasible way to capture such cross-level relationships. It is important to understand the role of instructor-student interactions on the impact of different curricula.
In examining the questions that students showed less improvement on, one theme seems to be the role of sample size. Students are still exhibiting some confusion on when they can have strong evidence with small sample sizes, how sample size is related to generalizability, and how sample size relates to power. The emphasis of this curriculum on ideas of statistical inference is evident in those areas showing more improvement. As seen in Tintle et al. (2011), gains on descriptive statistics questions are more modest, though students are showing stronger background on those questions entering the course. We do conjecture that one reason for lower performance on the question of matching standard deviations to graphs is inconsistency in how the answers are labeled and how the graphs are labeled (e.g., answer option A to choose graph C).
Further research will explore in more detail the development of student understanding throughout the course. Some instructors have noted slow transfer of inferential thinking in the first few topics, raising the questions of how much repeated exposure is necessary for students to develop a deep understanding of statistical significance and which experiences are most critical for student learning. For example, students seem to struggle longer than we might expect to know what "observed result" to use when calculating the p-value (vs. the hypothesized parameter value). Perhaps giving students interesting data and then asking them to carry out simulations does not sufficiently illustrate to students the distinction between "real" versus "hypothetical" data. Having students carry out more of the studies themselves and having a statistic they actually observed themselves may help keep the real data from becoming abstract (e.g., Gould 2010;Kuiper and Sturdivant 2015).
Next Steps
We have collected similar data for the 2014-2015 school year with 76 instructors at over 40 institutions. We have improved our process in the following ways:
1. Combining the concept inventory and attitude questions into one assessment. We are hoping this will help with response rate, though more students may decide not to complete the instrument in one sitting.
2. Replacing the "number of previous math and statistics courses" questions with a question simply asking whether they have taken a previous statistics course.
3. Creating separate forms for different instructors to reduce erroneous identification, though we still need to check for repeat names and mismatches on the pre-/post-test.
4. Rephrasing of questions on the instructor survey and more aggressive follow-up to ensure complete and accurate instructor responses.
5. Most importantly, more efforts to recruit nonusers as well as instructors using other simulation-based curricula.
We have also made a few changes to the concept inventory:
1. The problematic histogram/case-value plot question is no longer the first question on the instrument.
2. We reordered the answer options in Questions 33 and 34 so the graph choice matches the forced choice (e.g., a refers to Graph A, b refers to Graph B, etc.).
We have added additional questions:
1. Rather than allowing "all of the above" for the descriptions of a simulation, this has been broken into several valid/invalid statements.
2. Duality of confidence interval and test of significance.
3. Two questions on factors that impact width of confidence intervals.
4. Comparing strength of evidence across several pairs of dotplots.
5. Drawing cause-and-effect conclusions when random assignment is present in the study design.
6. An invalid p-value interpretation related to the difference in conditional proportions.
In addition to these pre-/post-comparisons, we have also developed a series of "common questions" for instructors to use on midterm exams throughout the term. For example, in year 1, we focused on student understanding of the simulation process (data to be analyzed). In year 2, we will focus on more in-depth assessment of student understanding of confidence intervals. We have also developed a high-level transfer question that can be used on the final exam. This is an adaptation of the 2009 AP Statistics question that expects students to evaluate skewness by considering a sampling distribution of the mean/median statistic. After using several iterations of this question as an open-ended question, we are now pilot testing a multiple-choice version for broader implementation.
Summary
Using multi-level regression models, we conducted an initial exploration of the impact of both student-level and instructor-level variables on the performance of students in 36 different sections of introductory statistics at 23 different institutions. In the models we explored, the student-to-student variability far exceeded the section-to-section variability. How much the students knew coming into the course and how confident they felt about their ability to learn the material appear to be stronger predictors of how much they learn, regardless of instructor characteristics. However, it would be worth exploring additional interactions. Additionally, we were not able to identify predictors that explained a large percentage of this student-to-student variability. We have made slight adjustments to our instrument and increased our efforts to recruit nonusers of simulation-based curricula and instructors using other simulation-based curricula to participate in our assessment, which will include assessments of students' growth in understanding at different points of the term as well as a high-level transfer question. | 2018-12-17T20:44:57.433Z | 2016-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "b61f37372aedbc28c51d92c2fa3cf5d53914d906",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/10691898.2016.1223529",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "a343f4bbff699ec76bbc35c9bd55dc62020aa9e8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3178522 | pes2o/s2orc | v3-fos-license | A set of microRNAs mediate direct conversion of human umbilical cord lining-derived mesenchymal stem cells into hepatocytes
In a previous study, we elucidated the specific microRNA (miRNA) profile of hepatic differentiation. In this study, we aimed to clarify the instructive role of six overexpressed miRNAs (miR-1246, miR-1290, miR-148a, miR-30a, miR-424 and miR-542-5p) during hepatic differentiation of human umbilical cord lining-derived mesenchymal stem cells (hMSCs) and to test whether overexpression of any of these miRNAs is sufficient to induce differentiation of the hMSCs into hepatocyte-like cells. Before hepatic differentiation, hMSCs were infected with a lentivirus containing a miRNA inhibitor sequence. We found that downregulation of any one of the six hepatic differentiation-specific miRNAs can inhibit HGF-induced hepatic differentiation, including albumin expression and LDL uptake. Although overexpression of any one of the six miRNAs alone or liver-enriched miR-122 cannot initiate hepatic differentiation, ectopic overexpression of seven miRNAs (miR-1246, miR-1290, miR-148a, miR-30a, miR-424, miR-542-5p and miR-122) together can stimulate hMSC conversion into functionally mature induced hepatocytes (iHep). Additionally, after transplantation of the iHep cells into mice with CCl4-induced liver injury, we found that iHep not only can improve liver function but also can restore injured livers. The findings from this study indicate that miRNAs have the capability of directly converting hMSCs to a hepatocyte phenotype in vitro.
The goal of regenerative medicine is to replace the cells that are damaged or lost as we age, suffer disease or are exposed to environmental insults. For patients with liver disease, the ideal regeneration process is cell transplantation of hepatocytes generated by transdifferentiation to supplement or replace hepatocyte function. Rapid advances in the field of transdifferentiation have been made, particularly in hepatic transdifferentiation. Mouse tail-tip fibroblasts can be directly induced to functional hepatocyte-like cells by transduction of Gata4, Hnf1alpha and Foxa3 and inactivation of p19(Arf); the induced hepatocytes show typical epithelial morphology, express hepatic genes and acquire hepatocyte functions. Notably, transplanted induced hepatocytes repopulate the livers of fumarylacetoacetate-hydrolase-deficient (Fah(−/−)) mice and rescue almost half of the Fah(−/−) mice from death by restoring liver function. 1 Simultaneously, another research group demonstrated that expression of the transcription factor Hnf4alpha in combination with Foxa1, Foxa2 or Foxa3 can convert mouse embryonic and adult fibroblasts into cells that closely resemble hepatocytes in vitro. The induced hepatocyte-like cells have multiple hepatocyte-specific features and can also reconstitute damaged hepatic tissues after transplantation. 2 A surprising observation is that transcription factors are not the only molecules that can promote cell transdifferentiation; miRNAs can promote cell transdifferentiation as well. [3][4][5] Whether miRNAs can mediate hepatic transdifferentiation is still unknown.
In a previous study, we tested the miRNA expression profile of the HGF-induced hepatic differentiation model using a miRNA microarray at seven time points. A total of 61 miRNAs among 1205 human and 144 human viral miRNAs displayed consistent changes and were altered at least twofold between hUC-MSCs and hepatic differentiated hUC-MSCs. Then, 25 miRNAs were selected based on fold changes and expression level for further qRT-PCR analyses. By comparing this miRNA profile between osteogenic differentiated cells and hepatocyte differentiated cells, we found that miR-1246, miR-1290, miR-148a, miR-30a, miR-424 and miR-542-5p not only were consistently overexpressed during hepatic differentiation of hMSCs by both microarray and qRT-PCR analysis but also were specific to hepatic differentiation of hMSCs. 6 In this study, we aimed to clarify the instructive role of the six overexpressed miRNAs during hepatic differentiation of hMSCs and to test whether overexpression of any of these miRNAs alone is sufficient to induce differentiation of hMSCs into hepatocyte-like cells.
Overexpression of seven miRNAs together can stimulate hMSCs to acquire the characteristics of hepatocytes. Silencing any one of the six hepatic differentiation-related miRNAs can inhibit the HGF-induced ALB expression and LDL uptake during hepatic differentiation of hMSCs. To clarify the effects of overexpression of the six miRNAs on hMSCs, we synthesized the miRNA mimics and used the liver-enriched miRNA miR-122 as a control. To confirm that the synthesized miRNA mimics can effectively increase the relative expression level of miRNA in hMSCs, we tested the miRNA expression of hMSCs at 6 days post transfection. We found that the miRNA mimics miR-122, miR-1246, miR-1290, miR-148a, miR-30a, miR-424 and miR-542-5p increased the relative expression of their respective miRNAs 18,163-, 2-, 8-, 87-, 11-, 117- and 16,542-fold, respectively (Figure 2a). The differences in fold change may be caused by the different baseline expression levels of the miRNAs in hMSCs. This overexpression was maintained for ~12 days. However, overexpression of any of the seven miRNAs alone cannot stimulate hMSCs to express the hepatocyte marker gene ALB (Figure 2b). We combined the seven miRNA mimics and co-transfected them into hMSCs. Co-transfection increased the relative expression of all seven miRNAs simultaneously (Figure 2c). Interestingly, after co-transfection of the seven miRNA mimics for 6 days, the hMSCs overexpressed ALB (Figure 2d). Moreover, the combination of the seven miRNAs can promote the conversion of hMSCs from a fibroblast-like morphology to an epithelial morphology (Figure 2e and Supplementary Figure 1C).
Co-transfection with a combination of seven miRNAs can convert hMSCs into mature functional hepatocytes in vitro. Human liver multipotent stem/progenitor cells can give rise to hepatocytes, cholangiocytes and pancreatic islets. To confirm that the seven-miRNA combination converts hMSCs into hepatocytes but not into hepatic progenitor cells, pancreatic islets or cholangiocytes, we analyzed the expression of marker genes in hMSCs co-transfected with the seven-miRNA combination. qRT-PCR results showed that the seven-miRNA combination can stimulate hMSCs to express hepatocyte-specific genes, including the early genes HNF4A, AFP and albumin (which were increased 8.2-, 2.1- and 10.3-fold, respectively), the intermediate gene transferrin (which was increased 5.2-fold) and the late gene CYP3A4 (which was increased 4.6-fold). However, the seven-miRNA combination cannot stimulate the expression by hMSCs of the pancreatic islet marker gene PDX1, the cholangiocyte marker gene CK7 or the liver progenitor cell marker gene EpCAM (Figure 3a). We also analyzed whether hMSCs transfected with the seven-miRNA combination could induce hepatocellular carcinoma in vivo. We found that no transplant tumors formed after the cells were inoculated into the subcutaneous tissue of nude mice (data not shown). To further confirm that the cells mediated by the seven-miRNA combination can successfully express hepatocyte marker genes, we analyzed the protein levels of HNF4A, ALB and CYP3A4 by western blot. The western blot results were consistent with the qRT-PCR results (Figure 3b and Supplementary Figure 1C).
To clarify whether the induced hepatocytes (iHeps) mediated by the seven-miRNA combination have hepatocyte function in vitro, we examined their glycogen-storing ability, ICG and LDL uptake ability, urea production ability and the percentage of albumin-positive cells. The ability of iHeps to produce urea was evaluated by exposing the cells to 10 mmol/l ammonium chloride for 24 h. The urea production ability of iHeps increased significantly after 6 days of transfection with the seven-miRNA combination (Figure 3c). After induction for 6 days, PAS staining demonstrated that >70% of cells stored glycogen compared with the undifferentiated hUC-MSCs (Supplementary Figure 1D). Immunofluorescence results demonstrated that >90% of cells were albumin-positive (Supplementary Figures 1A and 1D). These results indicated that hepatocyte induction mediated by the seven-miRNA combination can exert hepatocyte function in vitro.
iHep transplantation rescues mice with CCl4-induced liver injury. Hepatocyte transplantation can reverse acute liver injury. Therefore, we examined whether human iHep cells could reconstitute hepatic tissues as hepatocytes in the livers of a CCl4-induced acute liver injury model. Mice treated with CCl4 for 4 weeks exhibited weight loss and loss of appetite. The histological evaluation of the CCl4-injured mice demonstrated that, compared with normal mice (Figure 4Ci), ballooning and necrosis of hepatocytes, infiltration of inflammatory markers and liver fibrosis increased significantly (Figure 4Cii). Moreover, the liver function of CCl4-injured mice decreased, as indicated by an increase in alanine transaminase (ALT) (from 21.3 ± 3.4 UI/l to 243 ± 13 UI/l) and aspartate aminotransferase (AST) (from 108 ± 10 UI/l to 486 ± 20 UI/l) and a decrease in serum albumin (from 23 ± 1.4 g/l to 15.7 ± 0.6 g/l). We treated the liver-injured mice with injections of saline, negative cells, hMSCs or iHep cells. Interestingly, 1 day after iHep cell transplantation, the serum albumin level was significantly increased (from 15.7 ± 0.6 g/l to 24.4 ± 0.45 g/l), and a normal level was maintained during the observation period. Conversely, the serum albumin level in mice treated with hMSCs showed gradual improvement from days 1-14 (from 15.8 ± 0.5 g/l to 19.8 ± 0.28 g/l), and the serum albumin in mice treated with saline or negative cells was maintained at a low level (Figure 4b). Both iHep and hMSC cells repaired the injured liver architecture at 2 weeks post cell transplantation (Figure 4c) and exhibited decreased ALT and AST levels (Figure 4b). To further confirm that the decreased liver function was improved by transplantation of iHep cells, we traced the transplanted hMSC and iHep cells in liver sections from the mice using immunofluorescence. In the hMSC treatment group, the human-derived cells were mainly CD105-positive cells. However, in the iHep treatment group, the human-derived cells were mainly albumin-positive and CYP3A4-positive cells. In the hMSC treatment group, the human CD105-positive MSCs in the liver gradually increased, reached the highest level on day 3 and then gradually decreased. Additionally, human albumin-positive and CYP3A4-positive hepatocytes began to increase at day 3. In the iHep treatment group, the human albumin-positive and CYP3A4-positive hepatocytes in the liver gradually increased and reached the highest level on day 3, and that level was then stably maintained (Figure 5 and Supplementary Figure 2). More interestingly, human CD105-positive MSCs were maintained at a stable low level from days 1-14 (Figure 5). These results indicated that iHep cell transplantation can repair liver injury in mice.
Discussion

MicroRNAs (miRNAs) comprise a group of non-coding small RNAs (17-25 nt) that are involved in post-transcriptional regulation and have been identified in various plants and animals. They have an important role in liver development.
MiRNA-deficient mice exhibited progressive hepatocyte damage with elevated serum ALT and AST levels between 2 and 4 months of age. Furthermore, the liver mass and expression of cellular markers of both proliferation and apoptosis were shown to increase. 7 Moreover, miRNAs control hepatocyte proliferation during liver regeneration 8,9 and have a significant role in modulating proliferation and cell cycle progression genes after partial hepatectomy. 10 In our previous study, we found that miR-1246, miR-1290, miR-148a, miR-30a, miR-424 and miR-542-5p were specifically overexpressed during hepatic differentiation of hMSCs. 6 Thus far, studies on these miRNAs have been limited. MiR-1246 takes part in DNA damage, 11 regulates chloride transport 12 and can be used as a circulating biomarker of malignant mammary epithelial cells. 13 MiR-1290 impairs cytokinesis and affects the reprogramming of colon cancer cells. 14 MiR-30a inhibits the epithelial-to-mesenchymal transition by targeting Snai1, 15 and the miR-30 family is required for vertebrate hepatobiliary development. 16 MiR-148a is a promising candidate for an early, stable and sensitive biomarker of rejection and hepatic injury after liver transplantation. 17 MiR-424 regulates human monocyte/macrophage differentiation. 18 Studies on miR-542-5p have mainly focused on its important role in tumorigenesis 19,20 and its part in the cellular senescence program of human diploid fibroblasts. 21 In this study, we found that these six miRNAs not only have key roles in the HGF-induced hepatic differentiation of hMSCs but also influence hepatic gene expression in the HepG2 cell line. Moreover, we found that although upregulation of any one of the six miRNAs alone or miR-122 in hMSCs cannot initiate hepatic differentiation, ectopic overexpression of the seven miRNAs together can convert hMSCs into mature functional hepatocytes. MiR-122 is highly expressed in the liver, where it constitutes 70% of the total miRNA pool. MiR-122 participates in cholesterol metabolism and hepatocellular carcinoma formation and has an important role in promoting hepatitis C virus replication. 22 Overexpression of miR-122 enhances in vitro hepatic differentiation of fetal liver-derived stem/progenitor cells, 23 but whether miR-122 can initiate hepatic differentiation of hMSCs is still unknown. Here, we showed that ectopic overexpression of miR-122 alone cannot initiate hepatic differentiation of hMSCs, but when combined with miR-1246, miR-1290, miR-148a, miR-30a, miR-424 and miR-542-5p, miR-122 can convert hMSCs into functional hepatocytes.
hMSCs are plastic. They not only possess osteogenic, chondrogenic and adipogenic differentiation potential but can also break through the limitation of their germ layer of origin and differentiate into hepatocyte-like cells under hepatogenic conditions. Until now, the mechanism of hepatic differentiation has remained unknown. In the past few years, much work has been done to understand the hepatogenic condition. The conventional and most frequently used hepatogenic medium is basic medium supplemented with growth factors including epidermal growth factor, bFGF, HGF, nicotinamide, oncostatin M, dexamethasone and ITS premix. 24 In this study, we directly converted hMSCs into hepatocytes using a seven-miRNA combination. Compared with conventional methods, the seven-miRNA combination promoted hepatic differentiation of hMSCs more quickly and more efficiently. For example, after 6 days of induction, growth factors upregulated albumin expression approximately fourfold, whereas the seven-miRNA combination upregulated albumin expression 10-fold. Furthermore, the uptake of Ac-Dil-LDL and the storage of glycogen by hMSCs were induced by the seven-miRNA combination after 6 days, whereas these hepatic functions were not observed in the conventional induction group until 14 days after induction. Additionally, the seven-miRNA combination can change the cell morphology from fibroblast-like to epithelial; this phenomenon was not observed in the conventional induction group. Therefore, compared with the conventional induction method, the seven-miRNA combination is a simpler, faster, more efficient method of hepatic induction. Recently, transcription factor-based cellular reprogramming has opened the door to the conversion of somatic cells 1,2 or stem cells 25,26 to hepatocytes; however, several limitations currently prohibit the use of this method in clinical settings, including the viral DNA delivery system and the exogenous overexpression of transcription factors. New strategies are therefore needed to ensure the safe and efficient production of hepatocytes. Accumulating evidence now implicates miRNAs as probable candidates for cellular reprogramming. These transient, non-coding small RNAs are fully or partially complementary to one or more mRNA molecules and induce the silencing of targeted genes without integration into the host genome.
Both MSC and hepatocyte transplantation can improve liver function in patients or in animals with liver disease. Studies demonstrated that MSCs repair livers injured by disease mainly by differentiating into hepatocytes and modulating the immune system. Hepatocyte transplantation improves injured liver function by functional hepatocyte replacement. In this study, we found that, compared with MSC transplantation, seven-miRNA-mediated iHep transplantation improved liver function and increased serum albumin levels more efficiently (1 day and 7 days, respectively). More interestingly, transplanted iHeps not only supply functional hepatocytes but also contain a small number of CD105+ cells that are maintained at a stable level, indicating that seven-miRNA-mediated iHeps possess stemness and could supply mature hepatocytes persistently.
Materials and Methods
Cell culture and hepatic differentiation of mesenchymal stem cells. The isolation of human umbilical cord lining-derived mesenchymal stem cells was performed as previously described. 6 Hepatic differentiation of hMSCs, which were infected with lentivirus for 6 days, was performed with a Hepatogenic Differentiation Kit (Cyagen Bioscience Inc., Guangzhou, China) as previously described. 6

Lentivirus vector construction and infection. The inhibitor sequences against miR-1246, miR-1290, miR-148a, miR-30a, miR-424 and miR-542-5p were packaged in the eGFP-GV273 vector using a lentiviral system by Genechem Co., Ltd. (Shanghai, China). hMSCs were plated at 6 × 10^3 cells/well in 24-well plates. After 24 h, the hMSCs were infected with 10 ml of the 1 × lentivirus preparation.

RNA isolation, cDNA synthesis and quantitative reverse transcription PCR (qRT-PCR). RNA isolation, cDNA synthesis and qRT-PCR were performed as previously described. 6 Briefly, the total RNA was isolated with Trizol at the indicated time points (Invitrogen Inc.). A total of 300 ng of total RNA was used for cDNA synthesis with the PrimeScript RT Reagent Kit Perfect Real Time (TaKaRa Biotechnology Co. Ltd., Dalian, China). PCR amplification was performed with the SYBR Premix Ex Taq II (TaKaRa Biotechnology). For each sample, GAPDH expression was analyzed to normalize the target gene expression. For miRNA analysis, cDNA was synthesized with the One Step PrimeScript miRNA cDNA Synthesis Kit Perfect Real Time (TaKaRa Biotechnology). Human U6B was used to normalize target miRNA expression. The primers for qRT-PCR are shown in Table 1. In all of the miRNA analyses, the Uni-miR qPCR primer was used as the reverse primer (TaKaRa Biotechnology). Relative changes in gene and miRNA expression were determined with the 2^−ΔΔCt method.
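As a concrete illustration of the 2^−ΔΔCt calculation named above, here is a minimal Python sketch. The Ct values are made-up numbers for demonstration only; per the text, miRNAs are normalized to U6B and mRNAs to GAPDH.

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference), computed within each sample;
    ddCt = dCt(sample) - dCt(control).
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: a miRNA normalized to U6B, mimic-transfected
# hMSCs vs. untransfected control hMSCs.
fold = relative_expression(ct_target_sample=22.1, ct_ref_sample=18.0,
                           ct_target_control=28.6, ct_ref_control=18.1)
print(f"fold change: {fold:.0f}")  # ~84-fold for these illustrative values
```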
Periodic acid-Schiff staining (PAS staining) and urea assay. Glycogen storage by the induced hepatocytes was analyzed with a PAS staining kit (Baso Diagnostics Inc., Zhuhai, China). The cells were fixed with PBS containing 4% paraformaldehyde, incubated for 10 min in 1% periodic acid, washed with distilled water and incubated with Schiff's reagent for 15 min. After a 10-minute wash in tap water, the cells were visualized by light microscopy and images were acquired. For the urea assay, induced hepatocytes were cultured for 24 h in expansion medium in the presence or absence of 10 mmol/l NH4Cl. Then, the supernatants were collected and the urea concentrations in the supernatants were measured by a BUN assay (Nanjing Jiancheng Bioengineering Institute, Jiangsu, China) according to the manufacturer's instructions. Expansion medium served as a negative control. Finally, the plates were read at a wavelength of 640 nm in an automatic microplate reader (Bio-Rad 680, Bio-Rad Laboratories, Hercules, CA, USA).
ICG and LDL uptake. For ICG uptake analysis, cardiogreen (Sigma-Aldrich Inc., St. Louis, MO, USA) was dissolved in sterile ddH2O to produce a fresh 50 mg/ml stock solution that was then further diluted in DMEM to a final concentration of 1 mg/ml. After a 30-minute incubation and a 10-minute water wash, the cells were visualized by light microscopy. The LDL uptake ability of the cells was assessed with Ac-Dil-LDL.

Western blot. The cell lysates from the induced hepatocytes were extracted with RIPA lysis buffer (Beyotime Inc., Shanghai, China). The samples were resolved in a 10% SDS-PAGE gel and transferred to a PVDF membrane (Millipore Corporation, Billerica, MA, USA) using the semi-dry transfer method. After blocking in 10% non-fat dried milk in TBST for 2 h, the blots were incubated with primary antibody at 4°C overnight. After washing with TBST, the blots were incubated with a horseradish peroxidase-conjugated secondary antibody (Santa Cruz Biotechnology, Santa Cruz, CA, USA, diluted 1:2000) at room temperature for 1 h. The blots were visualized by Femto (Pierce, Rockford, IL, USA) following the manufacturer's instructions using the following primary antibodies: human anti-ALB (Santa Cruz Biotechnology), human anti-HNF4A (Santa Cruz Biotechnology) and human anti-CYP3A4 (Santa Cruz Biotechnology).
Cell transplantation and labeling. Eight-week-old female BALB/c nude mice from the animal center of The Fourth Military Medical University received intraperitoneal injections of 10% CCl4 diluted in corn oil solution (2 ml/kg) three times a week for 4 weeks. The mice with CCl4-induced liver injury were divided randomly into four groups: Group A: treated with saline (n = 15); Group B: treated with human negative cells, which were human blood-derived CD34-negative non-adherent cells (n = 15); Group C: injected with human MSCs (n = 15); Group D: injected with human induced hepatocytes derived from MSCs (n = 15). Each mouse received 1 × 10^6 cells via the tail vein. The mouse sera and liver tissues were harvested at 1, 2, 3, 7 and 14 days. Mouse serum parameters, including albumin (ALB), alanine aminotransferase (ALT) and aspartate aminotransferase (AST), were analyzed with an automatic chemistry analyzer AU560 (Olympus, Tokyo, Japan) in a clinical laboratory. Fresh liver segments were prepared for H&E staining and immunofluorescence.
Immunofluorescence. Fresh liver segments were fixed with 4% paraformaldehyde. After blocking with phosphate-buffered saline containing 1% BSA and 0.2% Triton X-100, the sections were incubated with anti-human-albumin antibody (Santa Cruz Biotechnology) or anti-human CD105-PE (eBioscience Inc., San Diego, CA, USA) at 4°C overnight and then incubated with a secondary antibody labeled with Alexa Fluor 488 (Invitrogen Inc.) at room temperature for 1 h.
Statistical analysis. The data are expressed as the mean ± standard deviation. To identify significant differences, a one-way analysis of variance was performed, and the least significant difference t-test was used to analyze the differences between the groups. A P-value <0.05 was considered significant.
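To reproduce this kind of analysis outside a statistics package, a minimal Python sketch with SciPy is shown below. The group values are hypothetical measurements, and the LSD step is realized here as unadjusted pairwise t-tests following a significant omnibus ANOVA, which is the usual practical reading of Fisher's LSD (the textbook version pools the ANOVA error variance).

```python
from itertools import combinations
from scipy import stats

# Hypothetical replicate measurements (e.g., relative ALB expression)
groups = {
    "control": [1.0, 1.1, 0.9],
    "single_miRNA": [1.2, 1.0, 1.3],
    "seven_miRNA": [9.8, 10.5, 10.6],
}

# Omnibus one-way ANOVA across all groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# LSD-style follow-up: unadjusted pairwise t-tests, only interpreted
# when the omnibus test above is significant.
if p_value < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        _, p = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: p = {p:.4f}")
```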
Conflict of Interest
The authors declare no conflict of interest. | 2016-05-12T22:15:10.714Z | 2013-11-01T00:00:00.000 | {
"year": 2013,
"sha1": "bbbdea1b2502d48f0ac02a909b19bbca07eacee3",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/cddis2013429.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bbbdea1b2502d48f0ac02a909b19bbca07eacee3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
221820301 | pes2o/s2orc | v3-fos-license | Bufei Decoction Alleviated Bleomycin-Induced Idiopathic Pulmonary Fibrosis in Mice by Anti-Inflammation
Objective This study aimed to investigate the mechanistic action and therapeutic effects of Bufei decoction on idiopathic pulmonary fibrosis (IPF) after inhalation of bleomycin. Methods A pulmonary fibrosis model in mice was prepared by atomization inhalation of bleomycin. Then, the mice were randomly divided into four groups (control group, model group, positive group, and treatment group) and administered the drugs for 4 weeks. H&E and Masson's staining of lung tissues were used to observe the morphological changes and deposition of fibers, and the degree of fibrosis was evaluated by hydroxyproline content. The expression and activation of NF-κB were determined by western blotting and immunohistochemistry. The infiltration of macrophages was detected by immunostaining of CD45 and F4/80 in lung tissues. Results In mouse IPF, Bufei decoction alleviated the pathological changes and the deposition of fibrosis by decreasing the hydroxyproline content of lung tissues. The antifibrotic action might rely on the effects of preventing the infiltration of inflammatory cells and inhibiting the expression and activation of NF-κB in lung tissue. Conclusion Bufei decoction improved the process of pulmonary fibrosis by regulating the activation and expression of the NF-κB signal transduction pathway, which provides a therapeutic option for IPF patients.
Introduction
Idiopathic pulmonary fibrosis (IPF) is a chronic, progressive, and irreversible interstitial lung disease that progresses to respiratory failure in most cases, with a median survival of 3 to 5 years after diagnosis [1]. The pathophysiology of IPF is complicated, and its occurrence and development are still elusive [2]. The typical histopathological features of IPF are patchy injury and proliferation of alveolar epithelial cells, basement membrane exfoliation or injury, alveolar consolidation, and fibroblastic lesions, as well as an abnormal proliferation of mesenchymal cells [3,4]. Diagnostic criteria for IPF have been proposed based on fibrosis of varying degrees [5,6]. Currently, only two anti-fibrosis agents, Pirfenidone and Nintedanib, have been approved by the FDA [7,8]. However, their clinical efficacy is not ideal and the adverse reactions are obvious. Therefore, clinical specialists frequently combine anti-fibrosis agents with a variety of treatments in practice, including glucocorticoids, anti-inflammatory drugs, antioxidants, immune modulators, and traditional Chinese medicines.
Although the pathogenesis has yet to be fully elucidated, an array of triggers have been found to contribute to IPF, including chemicals, radiation, fibrogenic environmental toxins, or other unknown factors [9]. The inflammatory reactions following those triggers might be the key mechanism for the tissue damage and the accumulation of extracellular matrix (ECM) proteins, especially proinflammatory cytokines derived from macrophages [10]. At present, a basic consensus about IPF is that it is a major inflammatory disease due to the growth of inflammatory cells in the lungs [11]. Macrophages, as major innate immune cells, reside in the lung within alveolar spaces and interstitial tissue. Generally, macrophages have been categorized by function as the classical proinflammatory M1 subtype and the alternative anti-inflammatory M2 phenotype, and both M1 and M2 subtypes are closely associated with the different stages of the disease [12]. M1 macrophages are associated with Th1 immune responses and activated by IFN-γ and toll-like receptor (TLR) ligands to maximize cytotoxic activity via IL-1β and TNF-α. The M2 phenotype is associated with tissue repair, angiogenesis, and tissue remodeling, resulting in ECM deposition [13], which means that the polarization of the M2 macrophage subtype is related to increased severity of pulmonary fibrosis. Moreover, both proinflammatory macrophages and profibrotic macrophages have been demonstrated in humans as well as in mouse models [12]. Generally, NF-κB is primarily considered an important regulator of the proinflammatory processes in M1 macrophages through the production of cytokines such as TNF-α and IL-6.
There is strong evidence that NF-κB activation in subpopulations of macrophages may also represent an anti-inflammatory M2-like phenotype [14]. All these studies suggest that the inhibition of macrophage activation and NF-κB pathway signaling can be a therapeutic target for IPF.
Traditional Chinese formulae are guided by the theory of traditional Chinese medicine and are usually made up of several herbal medicines. The formula of Bufei decoction, containing Astragalus membranaceus, Polygonum cuspidatum, Salvia miltiorrhiza, Ligusticum chuanxiong, and Ophiopogon japonicus, has been used for decades in the second affiliated hospital of the Heilongjiang University of Chinese Medicine. Although the possible mechanism of Bufei decoction has not yet been thoroughly investigated, some herbs in this formula have been proven to exert therapeutic effects on IPF. Polygonum cuspidatum downregulated the level of the cytokine TNF-α and inhibited the progress of pulmonary fibrosis in rats. Salvia miltiorrhiza inhibited or delayed the occurrence and development of bleomycin-induced pulmonary fibrosis by increasing the activity of superoxide dismutase and reducing the content of malondialdehyde and hydroxyproline [15]. We herein examined the effects of Bufei decoction on the NF-κB signal transduction pathway in a bleomycin-induced pulmonary fibrosis mouse model [16,17]. To explore the mechanism of this prescription in the treatment of pulmonary fibrosis and provide a reliable experimental basis for clinical practice, we demonstrated that Bufei decoction could effectively inhibit the infiltration of macrophages and the activity of NF-κB in alveolar macrophages (AM) and reduce the content of hydroxyproline in lung tissue to attenuate the degree of pulmonary fibrosis. Reagents: BCA protein assay kit and nucleoprotein extraction reagent (TDY Biotechnology Co., Ltd., China); DAB reagent kit and Rabbit two-step test kit (Zhongshan Jinqiao Biotechnology Co., Ltd., China).
Preparation of Bufei Decoction.
The Chinese medicine formula granules comprised 2 g Salvia miltiorrhiza, 1.5 g Astragalus membranaceus, 1 g Polygonum cuspidatum, 2 g Ligusticum chuanxiong, and 3 g Ophiopogon japonicus. Before administration, the granules were dissolved in warm water at a concentration of 0.247 g/ml for immediate use. The daily dose was 4.94 g/kg, and the administration volume was 20 ml/kg.
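These three numbers are internally consistent, as the quick check below shows (a minimal sketch; the variable names are ours, not the paper's):

```python
concentration_g_per_ml = 0.247  # decoction concentration (g/ml)
volume_ml_per_kg = 20           # administration volume (ml/kg)

daily_dose_g_per_kg = concentration_g_per_ml * volume_ml_per_kg
print(daily_dose_g_per_kg)      # 4.94 g/kg, matching the stated daily dose
```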
Animal.
A total of 48 male ICR mice, aged 8-12 weeks and weighing 18-22 g, were acclimatized for three days and randomly divided into four groups (n = 12 per group): control group (control), bleomycin group (model), bleomycin + prednisone acetate group (positive), and bleomycin + Bufei decoction group (treatment). All animals were purchased from the Experimental Animal Center of Liaoning Province.
Establishment of BLM-Induced Pulmonary Fibrosis and Drug Treatment.
The pulmonary fibrosis model in mice was prepared by atomization inhalation of bleomycin. While awake, the mice were put into a transparent plexiglass box measuring 30 cm × 30 cm × 20 cm connected to an atomizer, and atomized 5 g/L (50%) bleomycin diluent was sprayed into the box through the atomizer tube. Three to four mice were put in the box at a time and exposed to bleomycin inhalation for a total of 3 hours and 15 minutes, separated by 7 rest periods of 5 minutes each. In the control group, mice received saline as a replacement for bleomycin inhalation [4,18]. Starting on the second day after modeling, mice in the control group and model group were orally treated with saline, while mice in the positive group and treatment group were continuously administered prednisone acetate (at a dose of 0.0064 mg/g) or Bufei decoction (at a dose of 1.235 mg/g), respectively, for 4 weeks.
Determination of Hydroxyproline in Lung Tissue.
The hydroxyproline content of lung tissue was analyzed following the instructions of the hydroxyproline assay kit. The pulmonary tissues of mice were ground and homogenized with 1 ml of 6 mol/L potassium chloride solution and hydrolyzed at 95°C for 5 hours, and the pH value was adjusted to 6.0-6.8. According to the instructions, the corresponding reagents were added to the reaction system, mixed thoroughly, and then incubated for 15 minutes at 60°C. After cooling, the supernatants were collected after centrifuging at 3500 rpm for 10 minutes. The absorbance of the supernatant from each sample was measured at 550 nm by a spectrophotometer and used to calculate the hydroxyproline content for each group.
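The text does not spell out how absorbance is converted to hydroxyproline content; a typical kit protocol interpolates against a standard curve. The sketch below illustrates that step with made-up standards and a linear fit, which is an assumption on our part — the actual kit instructions may differ.

```python
import numpy as np

# Hypothetical standard curve: known hydroxyproline concentrations
# (ug/ml) and their measured absorbance at 550 nm.
std_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
std_abs = np.array([0.02, 0.11, 0.20, 0.39, 0.77])

# Linear fit: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def hyp_concentration(sample_abs):
    """Interpolate a sample's hydroxyproline concentration from its A550."""
    return (sample_abs - intercept) / slope

print(f"{hyp_concentration(0.30):.2f} ug/ml")  # for a sample reading A550 = 0.30
```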
Histopathological Analysis. Mice were sacrificed and the lungs were harvested on Day 14 and Day 28 after the initial treatment.
The lung tissue specimens were fixed in 10% formaldehyde, embedded in paraffin, and cut into 3-5 μm sections. The lung tissue sections were stained with Hematoxylin & Eosin (H&E) and Masson's trichrome to assess lung injury and morphological changes.
Immunohistochemical Staining of Lung Tissue Sections.
The tissue sections were dewaxed with xylene and rehydrated through a gradient of ethanol to water. For antigen retrieval, sections were immersed in 0.01 M citrate buffer (pH 6) and heated in a microwave oven. After cooling at room temperature, sections were transferred into 3% H2O2 for 15 min to block endogenous peroxidase activity. After PBS washes, nonspecific antibody binding to the tissue sections was blocked with 10% normal goat nonimmune serum at 37°C for 30 min. After washing off the goat serum, the sections were incubated with primary antibodies (NF-κB and Collagen I) overnight at 4°C. After PBS washes again, sections were rinsed with PBS and incubated with goat anti-mouse/rabbit secondary antibodies for 15 min at room temperature. After rinsing with PBS, the sections were visualized with diaminobenzidine (DAB) and counterstained with hematoxylin. After sealing with neutral gum, all the sections were photographed under a light microscope.
Immunofluorescence Staining of Lung Tissue Sections for CD45 and F4/80.
The tissue sections were dewaxed with xylene and rehydrated through a gradient of ethanol to water. Then, the sections were subjected to fetal bovine serum blocking solution at room temperature for 1-2 h, followed by overnight primary antibody incubation at 4°C. Next, the sections were incubated with the fluorescent secondary antibody for 1 h at room temperature. After counterstaining with DAPI for the nucleus, the sections were sealed with neutral gum, and fluorescent images were taken with a fluorescence microscope.
Preparation of Bronchoalveolar Lavage Fluid (BALF).
The right lung was perfused with 4°C sterile saline through a 21G needle via the trachea and massaged gently to collect the BALF at Day 28. The collected BALF was centrifuged at 500 × g for 15 min at 4°C. After centrifugation to remove the supernatant, the precipitated cells were washed twice with sterile PBS and cultured in RPMI 1640 culture medium containing 10% fetal bovine serum and 0.1% penicillin-streptomycin solution at 37°C.
Western Blot for NF-κB and p-NF-κB.
The nucleoprotein extracts were prepared from the cultured BALF cells. Briefly, samples were placed in RIPA lysis and extraction buffer containing 0.1% phenylmethanesulfonyl fluoride (PMSF). Protein extracts were then centrifuged and immediately frozen for subsequent western blotting assays. The protein concentration was measured with the BCA kit. Equal amounts of each protein sample were separated on 4%-20% SDS-PAGE gels at 120 V for 1.5 h and transferred onto polyvinylidene fluoride (PVDF) membranes at 80-100 V for 1 h. Blots were blocked in a 5% nonfat milk/TBS solution and incubated with the primary antibodies at 4°C overnight. After washing with TBS containing 0.1% Tween 20, fluorescent antibodies were used as secondary antibodies.
Statistical Analysis.
The results are presented as mean ± SD. All analyses were performed using SPSS 19.0 statistical software. Data were evaluated by one-way ANOVA. #P < 0.05, ##P < 0.01, *P < 0.05, and **P < 0.01 were considered significant.
Exploration of Pathological Changes in the Lung Tissue.
To evaluate the effect of Bufei decoction in IPF, H&E staining was performed to observe the pathological changes among groups. As shown in Figure 1, BLM-induced IPF in the model group at Day 14 and Day 28 was manifested by pulmonary congestion, emphysema of varying degrees, and massive infiltration of inflammatory cells. The degree of pulmonary congestion and emphysema in the treatment groups was lower, along with a lower level of inflammatory cell infiltration, at Day 14 after BLM-induced IPF. After 28 days, the pulmonary inflammation gradually receded, and the inflammatory cell infiltration in the treatment groups was significantly lower than that in the model group; notably, there was no difference between Bufei decoction and prednisone acetate. In the model group, damaged alveolar structure, liquefaction, necrosis, and local pulmonary fibrosis appeared at Day 14. As IPF progressed, the main lesion in the model group at Day 28 was alveolar dilatation. In the treatment groups of Bufei decoction and prednisone acetate, the alveolar structure was slightly preserved at Day 14. The pulmonary interstitial hyperplasia was inhibited, but alveolar dilatation could still be observed at Day 28. These results indicated that Bufei decoction could inhibit the inflammatory response and improve the alveolar structure in BLM-induced IPF.
Bufei Decoction Inhibited the Pulmonary Fibrosis Induced with BLM.
Pulmonary fibrosis is the key manifestation of BLM-induced IPF in mice. To reveal the therapeutic effects of Bufei decoction, we evaluated the extent of pulmonary fibrosis among groups with Masson staining. As shown in Figure 2, collagen fiber deposition was observed in the lung tissues of the model group at Day 14, mainly concentrated in the terminal bronchial wall and alveolar septum. As IPF progressed, a large number of collagen fibers stained blue were still deposited in the pulmonary interstitium. In the treatment groups of Bufei decoction and prednisone acetate, a mild extent of pulmonary fibrosis was observed at Day 14 and Day 28. To quantify the extent of pulmonary fibrosis, the hydroxyproline content in lung tissue was measured in each group, as shown in Figure 3. Compared with the control group, inhalation of bleomycin significantly increased the hydroxyproline content in the lung tissue of the model group at Day 14 and Day 28 (P < 0.01). The hydroxyproline content in the lung tissue of the positive group and the treatment group was significantly decreased at Day 14 and Day 28 (P < 0.01).
Bufei Decoction Modulated NF-κB Intranuclear Translocation and Collagen Deposition in BLM-Induced IPF.
To further reveal the underlying mechanism of Bufei decoction in BLM-induced IPF, we examined the inflammation-specific NF-κB p65 and the fibrotic contributor type 1 collagen in the lung tissues via immunohistochemical staining [19]. Compared with the control group, the expression of NF-κB p65 significantly increased and was localized in the cytoplasm and nuclei at Day 14 and Day 28 after BLM-induced IPF, especially in the IPF model group. Furthermore, as shown in Figure 4, the expression intensity in the nuclei of the IPF model group was higher than in the nuclei of the Bufei decoction treatment group, which means more NF-κB p65 complexes translocated into the nucleus and exerted their role as a nuclear transcription factor to activate inflammation. Additionally, the expression of type 1 collagen protein was identified as a direct marker of lung fibrosis. Compared with the control group, the expression intensity of type 1 collagen dramatically increased at Day 14 and Day 28 after BLM-induced IPF, as shown in Figure 5. Bufei decoction treatment could inhibit the expression of type 1 collagen and decreased the collagen deposition in lung tissues at Day 14 and Day 28 after BLM-induced IPF.
Bufei Decoction Inhibited the Inflammatory Cell Infiltration.
The IF staining for the inflammatory cell surface marker CD45 labelled the infiltrating leukocytes in the lung tissues, and the cell surface marker F4/80 was specific for the infiltrating macrophages. As shown in Figure 6, the results revealed that the infiltration of leukocytes and macrophages was significantly increased in the model group. The treatment of Bufei decoction and prednisone acetate reduced the accumulation of inflammatory cells in the lung tissues, which suggested anti-inflammatory effects in the bleomycin-induced lung injuries.
Bufei Decoction Inhibited Expression of NF-κB and p-NF-κB Protein.
Western blotting was performed for NF-κB and p-NF-κB protein in the BALF of mice at Day 28. As shown in Figure 7, the relative expression of each lane was normalized to the control group in the first three lanes. The results showed that the expression of NF-κB and p-NF-κB protein in the model group was higher than that in the control group, and the positive and treatment groups showed decreased expression of NF-κB and p-NF-κB protein at Day 28 after BLM-induced IPF.
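The normalization step described above corresponds to standard densitometry: each group's band intensity is expressed relative to the mean of the control lanes. A minimal sketch with made-up intensity values follows; the numbers are illustrative, not read from Figure 7.

```python
import numpy as np

# Hypothetical densitometry readings (arbitrary units),
# three lanes per group as in the blot layout described above.
intensities = {
    "control": [1.00, 1.05, 0.95],
    "model": [2.40, 2.55, 2.35],
    "positive": [1.35, 1.25, 1.40],
    "treatment": [1.40, 1.30, 1.45],
}

control_mean = np.mean(intensities["control"])
for group, values in intensities.items():
    fold = np.mean(values) / control_mean  # fold change vs. control
    print(f"{group}: {fold:.2f}-fold of control")
```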
Discussion
IPF has been defined as a type of chronic fibrotic interstitial pneumonia, which is characterized by the progressive and irreversible destruction of pulmonary structures caused by the formation and deposition of pulmonary interstitial fibrosis, and ultimately leads to organ dysfunction, gas exchange failure, and respiratory failure [20,21]. Accumulating evidence shows that IPF is associated with a distinct type of macrophage activation and a comprehensive panel of cytokines produced via the NF-κB signaling pathway. In our clinical practice, Bufei decoction can moderate dyspnea symptoms and slow the progression of acute exacerbations of IPF. Although the clinical application of Bufei decoction has achieved satisfactory results, its pharmacological mechanism is still unclear. In the present study, we established a bleomycin-induced pulmonary fibrosis mouse model to demonstrate that Bufei decoction could inhibit the infiltration of macrophages and the activation of NF-κB in lung tissues. Several studies have shown that the Chinese herbal medicines in Bufei decoction can effectively reduce the degree of pulmonary fibrosis by decreasing the expression of many inflammatory factors in the lung, inhibiting the oxidative stress response, and inhibiting the accumulation of extracellular matrix [22][23][24][25]. In this study, IPF induced with bleomycin was characterized by increased collagen production and deposition to varying degrees, which thickens the alveolar wall and significantly lowers ventilation function, thus leading to the occurrence of pulmonary fibrosis. Hydroxyproline (Hyp) is one of the main components of collagen, and the hydroxyproline content represents the degree of lung tissue fibrosis. Over 28 days of oral application, Bufei decoction showed great potential in inhibiting the production of Hyp induced by bleomycin. From the results of H&E staining and Masson staining, Bufei decoction could alleviate the degree of alveolar structural damage and fibroblast proliferation and indeed reduced the content of Hyp, suggesting that it significantly reduced the degree of bleomycin-induced pulmonary fibrosis.
Although the pathogenesis of IPF is not fully understood, there is no doubt that inflammatory injury plays an important role in it. The infiltration and activation of macrophages in the lung tissues play a pivotal role in IPF. To study whether Bufei decoction could alleviate the inflammatory injury by inhibiting the infiltration of inflammatory cells, leukocytes and macrophages were detected by immunofluorescent (IF) staining of CD45 and F4/80 in the paraffin sections of lung tissue. We revealed that Bufei decoction decreased the number of leukocytes and macrophages, suggesting that Bufei decoction inhibited the inflammatory response in the lung tissues. NF-κB, as one of the main nuclear transcription factors regulating inflammation and immune responses, plays an important role in signal transduction in pulmonary fibrosis and other fibroproliferative diseases through the activation of macrophages [16,26]. In the process of pulmonary fibrosis, the expression of various cytokines increases in alveolar macrophages through activation of the NF-κB pathway, resulting in excessive fibroblast proliferation, fibrosis deposition, and pulmonary fibrosis [27][28][29].
Therefore, the activity of NF-κB in cells directly affects the process of pulmonary fibrosis. Results from the immunohistochemistry showed that Bufei decoction reduced the staining density of NF-κB and the number of stained nuclei in the lung tissue sections. The western blotting assay also revealed that Bufei decoction decreased the expression of NF-κB and inhibited its phosphorylation. Given that macrophages were abundant in the lung tissue of IPF, the expression and phosphorylation of NF-κB in lung tissues might reflect the activation state of macrophages, which played an important role in the occurrence and development of IPF. All these results indicated that Bufei decoction had great potential in inhibiting the inflammatory response in the lung.
To the best of our knowledge, there are as yet no promising treatments for IPF, owing to its complicated pathogenesis. The core principle of formulae is to find an interconnected, complementary, and interdependent relationship among the component herbs and combine them to yield more beneficial results in treating disease than using them individually. In this study, we demonstrated that Bufei decoction improved the process of pulmonary fibrosis by regulating the activation and expression of the NF-κB signaling pathway, inhibiting pulmonary inflammation, and alleviating the pathological changes of pulmonary tissue. All these results provided experimental evidence that Bufei decoction exerts its function by inhibiting inflammatory responses and offer a new therapeutic option for clinicians in the prevention of IPF.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
Shanjun Yang and Wenwen Cui contributed equally to this work. | 2020-09-10T10:19:44.339Z | 2020-09-08T00:00:00.000 | {
"year": 2020,
"sha1": "2c88c7b4cdb6c9ff9037f85eacbb1887ecb3f2a4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/7483278",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b012b432ac77949d1006f17eaf3252b5ee4779fb",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245108074 | pes2o/s2orc | v3-fos-license | Longitudinal Analysis of Memory T-Cell Responses in Survivors of Middle East Respiratory Syndrome
Abstract Background Middle East respiratory syndrome (MERS) is a highly lethal respiratory disease caused by a zoonotic betacoronavirus. The development of effective vaccines and control measures requires a thorough understanding of the immune response to this viral infection. Methods We investigated cellular immune responses up to 5 years after infection in a cohort of 59 MERS survivors by performing enzyme-linked immunospot assay and intracellular cytokine staining after stimulation of peripheral blood mononuclear cells with synthetic viral peptides. Results Memory T-cell responses were detected in 82%, 75%, 69%, 64%, and 64% of MERS survivors from 1–5 years post-infection, respectively. Although the frequency of virus-specific interferon gamma (IFN-γ)–secreting T cells tended to be higher in moderately/severely ill patients than in mildly ill patients during the early period of follow-up, there was no significant difference among the different clinical severity groups across all time points. While both CD4+ and CD8+ T cells were involved in memory T-cell responses, CD4+ T cells persisted slightly longer than CD8+ T cells. Both memory CD4+ and CD8+ T cells recognized the E/M/N proteins better than the S protein and maintained their polyfunctionality throughout the period examined. Memory T-cell responses correlated positively with antibody responses during the initial 3–4 years but not with maximum viral loads at any time point. Conclusions These findings advance our understanding of the dynamics of virus-specific memory T-cell immunity after MERS-coronavirus infection, which is relevant to the development of effective T cell–based vaccines.
The Middle East respiratory syndrome coronavirus (MERS-CoV) is one of the newly discovered coronaviruses that can cause fatal pneumonia in humans. While the biological properties of this virus have been relatively well elucidated [1], therapeutic agents and preventive vaccines are not yet available.
A few studies on the immune responses to MERS-CoV infection in human patients have demonstrated that MERS-CoV-specific antibodies, including neutralizing antibodies (nAbs), were generated in most infected persons in proportion to disease severity [2,3]. While this humoral immune response plays an important role in preventing the spread of viral infection, it cannot prevent progression to death in fatal infections [2,3]. In contrast, little information is available on human T-cell responses in MERS-CoV infection. We previously examined the cellular immune response in patients with MERS at the early stage of infection and showed that CD4+ T-cell responses were detected at the convalescent phase of infection in a clinical severity-dependent manner, while CD8+ T-cell responses were observed during the acute stage of infection when CD4+ T-cell responses were not yet detected in some severely ill patients [4]. The latter finding might imply a pathogenic role of CD8+ T cells in the acute phase of infection. However, it has not been resolved whether the presence of memory T-cell responses could prevent infection and/or alleviate clinical symptoms. Furthermore, few studies have reported longitudinal analyses of virus-specific memory T cells after natural MERS-CoV infection in humans, which is pivotal for the development of effective control measures.
In the present study, we followed a cohort of patients who recovered from the epidemic infection in 2015 in South Korea for up to 5 years to investigate the magnitude, persistence, and functional features of MERS-CoV-reactive memory T-cell responses after recovery from infection. We also analyzed its relationship with virus-specific antibody responses, viral loads, and clinical severity.
Patients
A cohort of 59 patients who recovered from MERS-CoV infection during the 2015 outbreak in Korea participated in this study. These patients were recruited from 5 hospitals in Korea, and this study was approved by the ethical committee of the corresponding hospitals (National Medical Center; H-1510-059-007 and H-1712-085-005, Seoul National University Hospital; 1509-103-705 and 1511-117-723, Seoul Medical Center; 2015-12-102, Dankook University Hospital; 2016-02-014, and Chungnam National University Hospital; 2017-12-004). Their demographic characteristics are presented in Table 1. Samples from uninfected healthy donors were collected either before December 2014 (n = 12) or in April 2018-May 2018 (n = 30). All participants provided written informed consent.
All other experimental methods are available in the Supplementary Materials.
Cohort Characteristics
We enrolled and longitudinally analyzed 59 patients infected with MERS-CoV during the 2015 epidemic outbreak in South Korea. Participants were divided into 3 groups depending on the severity of the illness, as described in our previous study [4]. In brief, the severe group included patients who required mechanical ventilation (n = 17). The moderate group included patients with pneumonia but without respiratory failure (n = 28). The mild group consisted of patients without distinctive pulmonary lesions (n = 14). There was no difference in age or the presence of underlying diseases among the 3 groups, but male sex was dominant in the severe group in our cohort (Table 1). Some patients received antiviral treatment during admission (Table 1), but none received steroid therapy. The maximum viral loads [log10(copies/mL)] during the acute illness were 6.03 (5.19-7.61) (median and range), 7.39 (4.65-9.15), and 8.01 (4.54-9.61) for the mild, moderate, and severe groups, respectively.
Kinetics of Memory T-Cell Responses
To evaluate the dynamics of MERS-CoV-specific T lymphocytes in recovered MERS patients, we performed an enzyme-linked immunospot (ELISPOT) assay using peripheral blood mononuclear cells (PBMCs) obtained at different time points after infection. When PBMCs were stimulated with synthetic viral peptides encompassing the 4 structural proteins, interferon gamma (IFN-γ)-producing T cells could be distinctively visualized in the first year after infection, especially in the moderate and severe groups, and these decreased gradually over time in most participants (Figure 1A; Supplementary Figures 1 and 2). When all groups were combined, the median frequencies of antigen-reactive T cells per 2 × 10^5 PBMCs at the first, third, and fifth years were 90 (interquartile range [IQR], 49-167), 64 (30-116), and 46, respectively (Figure 1B). In a comparative analysis among groups classified per clinical severity, antigen-specific T lymphocytes tended to be observed in higher numbers in the moderate and severe groups than in the mild group at the early time points after infection, although the difference was not significant. Thus, the median frequency and IQR of IFN-γ-producing T cells observed in the severe, moderate, and mild groups in the first year after infection were 116 (59-258), 96 (60-188), and 44 (34-91), respectively. However, this apparent difference almost disappeared after 3 years of recovery from infection because the number of antigen-reactive T cells decreased more rapidly in the severe/moderate group than in the mild group over this time interval (Figures 1A and B). In terms of the positivity rate of memory T-cell responses, 82%, 69%, and 64% of all participants maintained detectable levels of memory T cells in their peripheral blood in the first, third, and fifth years after infection, respectively (Figure 1C). Importantly, the positivity rate decreased more rapidly in the mild group than in the moderate/severe group (P < .05), yielding 36% of mild patients who maintained positive memory T-cell responses in the fifth year after infection, while 70%-74% of moderate/severe patients did (Figure 1C). Next, we determined the cytokine profile of MERS-specific T cells using intracellular cytokine staining (ICS) following stimulation with MERS-CoV peptide pools. Overall, even if there was no significant difference, a higher frequency of IFN-γ-secreting cells was detected in the CD8+ T-cell compartment than in the CD4+ T-cell compartment (0.081%, 0.021-0.176 vs 0.061%, 0.028-0.108; median and IQR at the first year) at the beginning of the follow-up period. However, antiviral CD8+ T cells decreased faster than CD4+ T cells, such that the frequency difference between the 2 T-cell subsets gradually decreased and reversed in the last year of observation (Figure 2A; Supplementary Figure 3). In comparison among groups, slightly higher frequencies of IFN-γ-producing CD4+ T cells were observed in the moderate/severe group compared with the mild group, while IFN-γ-producing CD8+ T cells were observed at a lower frequency in the severe group than in the other 2 groups (Figure 2A). Based on individual study patients, both CD4+ and CD8+ T lymphocytes contributed to the IFN-γ-secreting T-cell compartment, either alone or together (Figure 2B). However, there was no significant correlation between the frequencies of virus-reactive CD4+ and CD8+ T cells across the entire study period (Figure 2C).
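The positivity rates and median (IQR) frequencies above follow directly from per-participant spot counts; a minimal sketch of that summarization (the counts, well input of 2 × 10^5 cells, and positivity cutoff below are illustrative assumptions, not study data) is:

import numpy as np

def summarize_elispot(spots_per_well, cells_per_well=2e5, cutoff=20):
    """Normalize to SFC per 2x10^5 PBMCs, then report median (IQR) and positivity."""
    freq = np.asarray(spots_per_well, dtype=float) * (2e5 / cells_per_well)
    q1, q3 = np.percentile(freq, [25, 75])
    positivity = 100 * np.mean(freq >= cutoff)   # % of participants above the cutoff
    return np.median(freq), (q1, q3), positivity

print(summarize_elispot([90, 49, 167, 120, 15]))  # hypothetical year-1 counts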
Although MERS-CoV-reactive CD4+ T-cell responses decreased over time in most patients, they were maintained or even slightly increased in some patients (12 of 56) regardless of the severity of illness (Supplementary Figure 4). A similar pattern of responses was observed in individual patients in the CD8+ T-cell compartment as in CD4+ T cells.
When we analyzed the responsiveness of memory T cells to different viral proteins, it was revealed that irrespective of T-cell subsets, more T cells were responsive to the E/M/N proteins than to the S protein at most time points. This preferential response to the E/M/N proteins was not associated with any specific severity group (Figure 3).
Functional Characteristics of MERS-CoV-Specific Memory T Cells
T cells that secrete multiple cytokines are considered superior in the control of viral infection [5]. Therefore, we addressed the proportion of single- or multiple-cytokine-secreting cells among MERS-CoV-reactive T cells. The functional subsets of virus-reactive CD4+ T cells were more or less evenly distributed among single-, double-, and triple-cytokine secretors, while single-cytokine-secreting cells were slightly dominant among virus-reactive CD8+ T cells (47%-72%); this distribution pattern tended to remain at all time points examined (Figures 4A and B). Approximately 2 of 3 virus-reactive CD8+ T cells produced IFN-γ alone or together with tumor necrosis factor alpha. Overall, there was no difference in the proportion of functional subsets in either antiviral CD4+ or CD8+ T cells among the groups with different clinical severities (Figures 4A and B).
Correlations Between Memory T-Cell Responses and Other Immune/Clinical Modalities
We addressed whether the magnitude of viral load detected at the acute stage of MERS-CoV infection could influence the development of memory T-cell responses. Our analysis demonstrated that there was no association between the maximum viral titer and the frequency of virus-reactive T cells observed using either ELISPOT assay or ICS at any time point after infection (Figure 5A; Supplementary Figures 5A and 6A). In contrast, when memory T-cell frequencies were plotted with the level of serum antibodies, anti-S1 immunoglobulin G (IgG), or nAbs, we found a highly significant relationship between them in the first 3 years after infection. However, the degree of this correlation gradually decreased over time, and anti-S1 IgG and nAb titers did not correlate with memory T-cell frequencies after the fourth and fifth years post-infection, respectively (Figure 5B). Interestingly, although virus-reactive CD4+ T-cell frequency correlated significantly with anti-S1 IgG responses longer than CD8+ T cells (3 years vs 2 years), virus-reactive CD8+ T-cell frequency correlated with nAb titers across all 5 years but CD4+ T cells only for 3 years after infection (Supplementary Figures 5B and 6B).
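As the Figure 5 caption notes, these associations were assessed with linear regression and the Spearman rank test; a minimal sketch of that computation (the paired values below are placeholders, not study data) is:

from scipy.stats import spearmanr, linregress

sfc = [116, 96, 44, 60, 188, 34]          # memory T-cell frequencies (SFC per 2x10^5 PBMCs)
igg = [1.2, 0.9, 0.4, 0.7, 1.6, 0.3]      # anti-S1 IgG levels (e.g., OD readings)

rho, p_rank = spearmanr(sfc, igg)          # monotonic association
fit = linregress(sfc, igg)                 # linear trend line for plotting
print(f"Spearman rho = {rho:.2f} (P = {p_rank:.3f}); slope = {fit.slope:.4f}, r^2 = {fit.rvalue**2:.2f}")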
DISCUSSION
The fate of immune responses following an infection is critical for preparing effective control measures against a newly occurring viral infection. Our study demonstrates that functional memory T-cell responses following MERS infection lasted more than 5 years in 64% of the infected patients, and the maintenance of memory T cells was longer in patients with severe infection than in those with mild infection. Both CD4+ and CD8+ T cells, either alone or together, participated in this memory T-cell response to MERS-CoV, with the tendency that the initial magnitude of the response was slightly higher, but the longevity was shorter in CD8+ T cells than in CD4+ T cells. In addition, more T cells responded to the E/M/N viral proteins than to the S protein in both CD4+ and CD8+ T-cell subsets throughout the 5 years post-infection.
Human T-cell responses to acute viral infection were well explored in a longitudinal analysis of T cells responding to live yellow fever virus and smallpox vaccination [7,8]. According to these studies, antiviral T cells greatly expanded during the second week of infection and then contracted abruptly over the next 2 weeks, followed by a gradual reduction thereafter, with a half-life of approximately 8-15 years. The kinetics of virus-specific T cells elicited by MERS-CoV infection seem to be similar to those observed in these model infections. Our previous study demonstrated that a high frequency of virus-specific CD4+ and CD8+ T cells (0.3% and 1.2% of mean percentage, respectively) were detected in most patients with MERS at the convalescent phase of acute infection (2-5 weeks after symptom onset) [4]. These cells greatly decreased at 9-12 months after infection (0.067% and 0.11% of the mean, respectively), as observed in the current study. Thereafter, a slow and gradual decline in virus-specific T cells was detected over the following 4 years. Interestingly, T-cell responses elicited by MERS vaccination in human clinical trials were also shown to follow kinetics similar to those induced by natural infection, at least during the first year after vaccination [9]. Based on the finding that memory T cells could be detected up to 17 years following infection with severe acute respiratory syndrome coronavirus (SARS-CoV) [10], a similarly pathogenic coronavirus, memory T cells for MERS-CoV also seem to be long-lasting. Unexpectedly, a few participants maintained a relatively large number of memory T cells in their peripheral circulation throughout the entire observation period in our study. Although antigen persistence and memory inflation are known to cause the maintenance of high levels of memory T cells for a prolonged period [11], these mechanisms are unlikely to account for this extraordinary observation because MERS-CoV does not cause persistent or chronic infection in humans. In contrast, cross-reactivity elicited by infection with closely related human coronaviruses, such as OC43 and HKU1 [12], or a stochastic clonal expansion of memory T cells [13] could lead to the maintenance of high, or even rising, levels of memory T-cell responses. Further studies are needed to delineate the exact underlying mechanism of this observation.
[Figure 5 caption] Correlations of the magnitude of memory T-cell responses measured using the enzyme-linked immunospot assay with maximum viral loads during the acute stage of infection (A) and the level of specific antibodies (anti-S1 IgG titer and PRNT50) in serum samples collected each year (B) (data already published in ref [6]) were assessed using linear regression and the Spearman rank test. The numbers of participants used for this analysis are shown in each graph. Abbreviations: IgG, immunoglobulin G; OD, optical density; PBMC, peripheral blood mononuclear cell; PRNT50, 50% plaque reduction neutralization test; SFC, spot-forming cell.
Antibody responses against MERS-CoV persisted in 86% of patients for at least 34 months [14] and in 36% in the fifth year after infection [6]. The magnitude and persistence of humoral immune responses in patients with MERS have been shown to correlate with the severity of viral infection [4,6,15]. Our study revealed that memory CD4+ T-cell responses also tended to have a positive relationship with clinical severity. However, antiviral CD8+ T cells showed the opposite behavior, with a similar or lower frequency and shorter duration in the severe group than in the mild group. A previous study [16] indicated that antiviral T cells, particularly CD8+ T cells, could be detected even in mild MERS patients with undetectable antibody responses. A similar finding was reported in patients with COVID-19 [17]. Similarly, some patients (approximately 22%) showed positive CD8+ T-cell responses in the absence of an antibody response in the first year after infection in our study. Nevertheless, it is of note that T-cell responses are longer-lasting than antibodies in MERS survivors, a persistence contributed mainly by the CD4+ T-cell subset rather than the CD8+ T-cell subset.
The maximum virus titer could be used as an indirect indicator of the amount of antigen load in viral infections and was associated with the severity of disease in our cohort (data not shown). The level of antibody responses 1 year after infection is positively correlated with the maximum viral loads during the acute stage of infection [18]. However, the level of memory T-cell responses did not correlate with the maximum viral loads at any time point, including 1 year after infection in our study, despite its positive correlation with antibody responses. Although we cannot rule out the possibility that the level of T-cell responses at the acute stage of infection could correlate with peak viral loads, our data revealed that exposure to high titers of the virus at the acute stage of infection does not necessarily guarantee the generation of high levels of long-term memory T lymphocytes.
Antiviral treatments with chemotherapeutic agents or immune modulators administered during the acute stage of infection could affect the persistence and magnitude of memory T-cell responses by either decreasing the viral burden or by preventing T-cell apoptosis, respectively [19,20]. Most of our patients with severe/moderate illness received interferon and ribavirin, whereas less than half of the patients with mild illness did. This differential application of antiviral treatment could produce a differential development of memory T-cell responses in patients with different clinical severities. However, our limited analysis did not support this possibility because there was no significant difference in the magnitude of T-cell responses between treated and nontreated patients in either the mild or moderate illness group. Furthermore, the seemingly different persistence of memory T-cell responses observed in the comparative analysis among different severity level groups including all patients was still detected in the same analysis targeting only patients who received the same treatments. The 3 severely ill patients who received convalescent serum therapy did not show any specific difference in memory T-cell responses compared with the remaining patients with severe illness.
Memory T-cell responses following MERS-CoV infection observed in our study are similar to those following SARS-CoV infection in several aspects. First, the frequency of virus-specific T cells, especially CD4+ T cells, was higher in the severe group than in the mild/moderate group [21,22]. In addition, the functionality of viral antigen-specific CD4+ and CD8+ T cells was comparable in recovered patients with various disease severities [21]. Furthermore, memory T cells could be detected in approximately 60% of SARS survivors at 6 years post-infection [22], implying similar longevity of memory T cells in patients with MERS and SARS. However, while SARS-CoV-specific CD4+ T cells were shown to respond predominantly to the S protein [21], MERS-CoV-reactive CD4+ T cells responded slightly higher to the E/M/N proteins than to the S protein. According to recent studies [23,24], the overall immune responses to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, which is causing the current pandemic, do not appear to be much different from those revealed in SARS or MERS virus infection. If so, our current findings suggest that long-lasting memory T-cell responses could be attainable from SARS-CoV-2 infection or vaccination.
The limitations of this study include an uneven sample size across all time points, including the small number of first-year samples, especially in the mild group, due to the poor quality of frozen cells and increasing dropout on the progression of follow-up. Together with the heterogeneity of the measures depending on individual participants, the variation in sample size makes it difficult to reach a clear conclusion. In addition, further research is required to define the exact nature of extraordinarily strong T-cell responses to MERS-CoV peptide pools observed in some MERS survivors, as this could have a great impact on the interpretation of memory T-cell responses. Nonetheless, this study provides valuable information on the longevity and characteristics of memory T-cell responses attained from MERS-CoV infection in humans.
Supplementary Data
Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Notes
Financial support. This work was supported by the Korea Health Technology R&D Project (HI15C3227) through the Korea Health Industry Development Institute and grants from the Korea Center for Disease Control and Prevention (2017NER530700, 2019ER530200, and 2020ER530501), funded by the Ministry of Health and Welfare.
Potential conflicts of interest. All authors: No reported conflicts of interest. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.
"year": 2021,
"sha1": "a91c23a8ea88ec568e2d97a8281367612833823f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a7b6ef364a6136cc26d44e18bb539d29fbb6d9f9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Agile Blocker and Clock Jitter Tolerant Low-Power Frequency Selective Receiver with Energy Harvesting Capability
In this article, a novel tunable, blocker and clock jitter tolerant, low power, quadrature phase shift frequency selective (QPS-FS) receiver with energy harvesting capability is proposed. The receiver’s design embraces and integrates (i) the baseband to radio frequency (RF) impedance translation concept to improve selectivity over that of conventional homodyne receiver topologies and (ii) broadband quadrature phase shift circuitry in the RF path to remove an active multi-phase clock generation circuit in passive mixer (PM) receivers. The use of a single local oscillator clock signal with a passive clock division network improves the receiver’s robustness against clock jitter and reduces the source clock frequency by a factor of N, compared to PM receivers using N switches (N≥4). As a consequence, the frequency coverage of the QPS-FS receiver is improved by a factor of N, given a clock source of maximum frequency; and, the power consumption of the whole receiver system can eventually be reduced. The tunable QPS-FS receiver separates the wanted RF band signal from the unwanted blockers/interferers. The desired RF signal is frequency down-converted to baseband, while the undesired blocker/interferer signals are reflected by the receiver, collected and could be energy recycled using an auxiliary energy harvesting device.
Method
Impedance Translation, Frequency Down-conversion and Passive Mixing. Impedance translation is usually defined as the translation of impedance (or transfer function) of a network present at one frequency to another, with the help of some periodically driven time variant system 27 . One of the simplest circuits that can perform a tunable impedance translation is a voltage mode impedance translation circuit, which is comprised of a parallel combination of two switches and driven periodically by two non-overlapping pulse waveforms of duty cycle D (usually 0.5), with one common RF terminal, and the other switch terminals connected to their respective baseband impedances, capacitors of value C B in parallel with resistor R B , as shown in Fig. 2. When a broadband ideal voltage signal source with output impedance R S (antenna with its input impedance of 50 Ω for receiver applications) is connected to this network, the voltage signal source sees an impedance translation of low-pass baseband impedance (C B ||R B in series with the voltage source impedance and the switch on-resistance) to a new bandpass input impedance of the network at RF [17][18][19] , as shown in Fig. 2a. It is assumed that the switches are identical with finite low on-resistance (R SW ), very high off-resistance, and the clock duty cycle (D) is such that effectively only one of the two switches is on at any given time.
The two-path impedance translation circuit, shown in Fig. 2a, exhibits two different input impedance values for the input RF signals in two different RF bands. The passband input impedance, defined as the input impedance of the impedance translation circuit at f_RF = f_LO, where f_LO is the switching frequency of the clock pulse waveforms, is approximated by Eq. (1) [28]. The stopband input impedance, which is defined as the input impedance of the network outside the passband of the impedance translation circuit, is approximated by Eq. (2) [28].
[Figure 1 caption] Tunable blocker tolerant frequency selective energy harvesting enabled receiver. A simplified block diagram of a tunable frequency selective receiver with energy harvesting capability is illustrated. The center frequency and bandwidth of operation for the receiver are tunable: only the RF band signal present at the frequency of operation in a set bandwidth is frequency down-converted by the receiver, while all the other unwanted blockers and interferer signals are separated from the desired band signal and used in an energy harvesting device (RF-to-DC) for converting ambient RF radiated power to direct current power for storage and usage. The receiver operating frequency, down-conversion bandwidth and other parameters are set by the control unit.
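To make the passband/stopband contrast concrete, the following is a minimal first-order time-domain sketch of the two-switch circuit of Fig. 2a (not code from the paper; the component values are illustrative, and switch non-idealities beyond a fixed on-resistance are ignored). Source frequencies offset from f_LO by less than the translated baseband bandwidth produce a large down-converted swing on the capacitors, while larger offsets are rejected:

import math
import numpy as np

RS, RSW, RB, CB = 50.0, 10.0, 10e3, 10e-9        # ohms / farads, illustrative values
fLO, steps_per_T = 1e9, 50                        # clock frequency and time resolution
dt = 1.0 / (fLO * steps_per_T)

def baseband_swing(f_rf, n_periods=20000):
    vc = [0.0, 0.0]                               # the two baseband capacitor voltages
    trace = []
    for n in range(n_periods * steps_per_T):
        t = n * dt
        k = int((t * fLO) % 1.0 >= 0.5)           # which switch is on (duty cycle D = 0.5)
        i = (math.cos(2 * math.pi * f_rf * t) - vc[k]) / (RS + RSW)
        vc[k] += dt * (i - vc[k] / RB) / CB       # active C_B charges; R_B slowly discharges it
        trace.append(vc[0])
    settled = np.array(trace[len(trace) // 2:])
    return settled.max() - settled.min()          # swing of the down-converted beat

for offset in (0.1e6, 1e6, 10e6):                 # offsets of the source from f_LO
    print(f"offset {offset/1e6:5.1f} MHz: baseband swing {baseband_swing(fLO + offset):.4f} V")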
In order to perform concurrent impedance translation from baseband to the RF frequency (f_RF = f_LO) and I/Q demodulation of a bandlimited RF signal present at f_RF = f_LO to baseband, a network of at least four switches is required, where these four switches are driven periodically at speed f_LO using four non-overlapping time delayed pulse waveforms shifted progressively in time by T_LO/4. This approach to impedance translation and I/Q demodulation is used in conventional passive mixer based receiver systems [17][18][19][20][29][30][31][32][33][34]. The four non-overlapping time delayed pulse waveforms are generated using an active multiphase clock generator circuit that takes a single higher speed clock signal at frequency 4f_LO and converts it into four non-overlapping clock pulse waveforms of reduced speed (f_LO) and duty cycle. The output voltages of the baseband capacitors are phase-shifted and combined to allow demodulation of a desired RF band signal and to generate the demodulated I and Q components of the baseband signal [18,34]. In passive mixers, time jitters associated with the clocks result in pulse overlaps, which contribute to conversion loss and degrade the output signal quality.
Proposed Receiver System. Electronic information radiated from modern wireless transmitters is modeled by Eq. (3), where I(t) and Q(t) are the baseband I and Q components, respectively, and f RF is the carrier frequency, of modulated transmitted RF signal r(t).
A fundamental function of the Rx is to faithfully recover the information contained in the I(t) and Q(t) signals from the received version of signal r(t) at frequency f RF .
We propose a new approach for concurrent I/Q demodulation and impedance translation that provides some advantages over the conventional passive mixers and homodyne receivers. The proposed approach generates four copies of phase-shifted versions of the received signal r(t) at carrier frequency f RF and utilizes a single clock pulse waveform of frequency f RF for sampling all the four versions of the RF signal as shown in Fig. 3a.
If θ_i = (i − 1) · 90° is the phase shift introduced by the i-th path of the received r(t) (assuming no noise/interference and distortion), the resultant i-th path RF signal is given by Eq. (4).
If the received signal is sampled at a sampling speed of f_RF, the sampled version of the RF signal obtained from the i-th path is given by Eq. (5). The bandwidth B of the signals I(t) and Q(t) (and hence of r(t)) is assumed to be very small compared to f_RF (B ≪ f_RF), so that the effective sampling of the signals I(t), −Q(t), −I(t) and Q(t) at a sampling rate of f_RF does not introduce any significant aliasing or noise problems. Utilizing this sampling approach and the two-switch impedance translation approach described above, we propose a quadrature phase shift frequency selective (QPS-FS) receiver that alleviates some of the problems of conventional homodyne and passive mixer receivers. Moreover, the proposed receiver can be fitted with an auxiliary RF-to-DC rectifier to make it suitable for concurrent energy harvesting from ambient RF radiation. In the proposed QPS-FS receiver, the four phase-shifted versions of the RF signal are generated using a phase shift network comprising one 180° hybrid coupler and two 90° quadrature hybrid couplers. The four output ports of this phase shift network are terminated with impedance translation circuits (ITC) using two switches and two capacitors. A block diagram of the proposed QPS-FS receiver is shown in Fig. 3b.
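Written out, the quadrature model and sampling relations referred to as Eqs. (3)-(5) take the standard form below; the notation is inferred from the phase relations just described (θ_i = (i − 1) · 90°) and may differ from the original typesetting:

\begin{align}
r(t)   &= I(t)\cos(2\pi f_{RF}\,t) - Q(t)\sin(2\pi f_{RF}\,t) \tag{3}\\
r_i(t) &= I(t)\cos(2\pi f_{RF}\,t + \theta_i) - Q(t)\sin(2\pi f_{RF}\,t + \theta_i),
          \qquad \theta_i = (i-1)\cdot 90^{\circ} \tag{4}\\
r_i[n] &= r_i\!\left(n/f_{RF}\right) = I(n/f_{RF})\cos\theta_i - Q(n/f_{RF})\sin\theta_i \tag{5}
\end{align}

For i = 1, …, 4, Eq. (5) yields I, −Q, −I and Q, matching the four ITC paths described in the text.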
When the isolation port of a hybrid coupler is properly terminated (matched to its characteristic impedance), the input port impedance of the coupler is matched to the characteristic impedance of the system, given that the two output ports of the coupler are terminated with loads of equal impedance values, not necessarily the characteristic impedance of the system. In this case of identical, but non-characteristic, impedance termination of the output ports of an isolated coupler, the input port remains matched to the characteristic impedance; and, the phase relation between the two output ports is maintained throughout the frequency band of operation, although the output power transfers to the terminating loads may not be optimal. These properties of the couplers are exploited and used in the phase shift and clock division networks of the proposed receiver architecture.
The QPS-FS receiver utilizes the voltage mode impedance translation circuit shown in Fig. 2a. The impedance mismatch between the phase shift network output ports and the ITC inputs can be regarded as an advantage for voltage conversion gain, as only the input voltage is frequency down-converted and stored on the output capacitors. Current and power transfer become irrelevant parameters in this voltage mode mixing and impedance translation.
The output voltages of the capacitors (V_BX/V̄_BX) are the demodulated signals I(t), −Q(t), −I(t) or Q(t), depending on the RF phase shift path and the switching transistor from which the output signal is taken. The eight output signals from the ITCs in Fig. 3b are further filtered and processed analogically (differential amplifiers) or digitally (a high impedance analog-to-digital converter (ADC) followed by a digital signal processor (DSP)), and the frequency down-converted and demodulated baseband signals I(t) (or I[n]) and Q(t) (or Q[n]) are obtained. As shown in Fig. 3b, the antenna is connected directly to the phase shift network, which allows any interferers or blockers falling within the intended frequency band of coverage of the receiver to undergo the same amplitude and phase shifts in the phase shift network as originally planned for the desired RF band signal. When this phase-shifted combination of the desired RF signal and the undesired interferers reaches the junction of the phase shift network output and the ITC input, the desired RF band signal is frequency down-converted into the output capacitor voltages as baseband signals, while the interferers are reflected back and absorbed into the termination resistors T1, T2, and T3, as shown in Fig. 3b. The interferer signals reaching T1, T2 and T3 may be energy recycled by combining all the interferers and blockers and supplying them to a wideband energy harvesting system for DC power generation and storage.
Measurement Setup.
A real test setup was developed to empirically validate the proposed QPS-FS receiver. Figure 4a shows a basic test setup that was used to implement and verify the workings of the QPS-FS receiver concept. Appropriate modifications in this basic test setup were made to measure and characterize different behaviors (e.g., intercept points, blocker behavior) of the proposed receiver system. Measurement specific modifications to this basic test setup are described in the next section. ITCs were designed using enhanced mode pHEMT (pseudomorphic high-electron-mobility transistor) GaAs-FET (gallium arsenide -field-effective transistor) ATF55143 transistors from Avago Technologies, Inc. Multilayer ceramic RF capacitors from American Technical Ceramics (ATC), Inc. were used to hold the frequency down-converted baseband voltage signals.
The outputs from the ideal clock voltage source can be directly connected to the high input impedance gate terminals of the switching transistors. Due to the unavailability of a square wave clock voltage generator, a continuous wave (CW) clock source with 50 Ω output impedance was used in conjunction with a passive clock division circuit, which was a 10 Ω resistive signal divider network in combination with a 180° hybrid coupler, as shown in Fig. 3b.
The ITCs and the passive clock division network were designed on the same printed circuit board (PCB) using a fiberglass reinforced epoxy (FR4) substrate with a thickness of 1.6 mm. Commercially available 180° and 90° hybrid couplers were used to implement the phase shift network and the passive clock divider hybrid, and a differential amplifier evaluation board was used to change the output mode from single-ended to differential.
In the implemented test setup, the clock source was a Keysight Technologies, Inc. E4433B CW RF signal generator, instead of the ideal square pulse wave generator used in the theoretical modeling and analysis. After passing the clock signal through a 180° hybrid to obtain two out-of-phase signals, the out-of-phase signals were passed through two bias tees, as shown in Fig. 4a, so that the resultant sinusoids traveling to the gates of the switching transistors provided switching behaviors that were as close as possible to the ideal switch behaviors driven by square pulses.
For modulated signal based measurements, the baseband modulated signals were generated on a desktop computer in MATLAB ® and downloaded to a Keysight Technologies, Inc. N5182A vector signal generator to generate an RF modulated signal for the proposed receiver test system. The baseband output signals ( + − I / and + − Q / ) from the receiver were captured using a four-channel oscilloscope (MSO9404A from Keysight Technolgies, Inc.) in the high input impedance mode. The captured baseband waveforms were processed and compared with the original transmitted baseband signals, and the receiver performance was evaluated in the DSP block shown in Fig. 4a. Other signal generators (E4433B/E4422B) from Keysight Technologies, Inc. were also used, and their outputs were combined using an off-the-shelf power combiner to generate two-tone CW RF signals and blockers for other tests and measurements. Figure 4b shows the simulated performance of an ideal differential four-phase conventional PM receiver compared with the proposed QPS-FS receiver for 1.0 GHz band of operation. The baseband output signal was 1.0 MHz for the RF signal at 1.001 GHz and the clock switching frequency f ( ) LO was 1.0 GHz in the Advanced Design System (ADS) simulation software from Keysight Technologies, Inc. All the system elements used in the simulations were ideal components, except the clock sources. The only non-ideal elements in the simulation were the associated clock signals, which were characterized by their respective jitter values (T j ) shown in Fig. 2c and their statistical distributions described according to Eq. (6). Each of the four clock signals involved in the conventional four-phase differential PM receiver had fixed 25% duty cycles with independent jitter values (σ j ) according to Eq. (6). The clock signal involved in the QPS-FS receiver in Fig. 3b had a duty cycle of 50%, and its associated jitter value was also described according to Eq. (6). The simulation result shown in Fig. 4b confirms the clock jitter tolerant behavior of the proposed QPS-FS receiver compared to the conventional differential four-path passive mixer receiver. ). The better output SNR for the proposed receiver confirms its jitter tolerant behavior.
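A simplified stand-in for this kind of comparison (not the ADS setup) is sketched below: it shows how Gaussian sampling jitter on a 1.0 GHz clock converts to output noise on the 1 MHz down-converted tone. The tone frequencies match the text; the per-sample independent Gaussian jitter model, the window choice, and the SNR bookkeeping are assumptions of the sketch (the 0 ps case is limited only by numerical precision):

import numpy as np

f_rf, f_lo, n = 1.001e9, 1.0e9, 20000
rng = np.random.default_rng(0)

def baseband_snr(sigma_j):
    t = np.arange(n) / f_lo + rng.normal(0.0, sigma_j, n)   # jittered sampling instants
    x = np.cos(2 * np.pi * f_rf * t)                        # sampled RF tone -> 1 MHz beat
    X = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
    k = round((f_rf - f_lo) * n / f_lo)                     # expected beat bin (here k = 20)
    sig = X[k - 2 : k + 3].sum()                            # tone power incl. window leakage
    return 10 * np.log10(sig / max(X.sum() - sig, 1e-12))

for sj_ps in (0.0, 1.0, 10.0):
    print(f"sigma_j = {sj_ps:4.1f} ps -> output SNR ~ {baseband_snr(sj_ps * 1e-12):5.1f} dB")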
Results
CW measurements on the proposed receiver were performed to ascertain its frequency conversion and selectivity behavior compared to homodyne receivers. Unless specified, all the measurement results provided are for the single-ended output mode of the receiver. In order to obtain optimal conversion from RF to baseband voltages at the output capacitors, the clock amplitude and bias levels were adjusted. Figure 5a shows the measured single-ended voltage conversion gain of the QPS-FS receiver system when the RF signal level was fixed at −33 dBm, the baseband IF frequency (f_IF) was fixed at 0.1 MHz, and the CW RF signal was present at frequency f_RF = f_LO + f_IF. The receiver input voltage value was derived from the input RF power level, assuming the receiver input was perfectly matched to the characteristic impedance of the phase shift network (50 Ω).
The frequency selectivity of the proposed receiver was measured for different RF bands of operation determined by f_LO, using a CW RF signal at frequency f_RF = f_LO + f_IF, where f_LO is fixed for the band of interest and f_IF is swept so that the frequency down-converted baseband signals fall at frequency f_IF. The amplitude level of the baseband signal was recorded and plotted against f_IF. The measured output frequency down-conversion exhibits bandpass filter behavior in the RF band and low-pass filter behavior in the baseband, as shown in Fig. 5b. In order to compare selectivity for different RF bands, the normalized baseband voltage gain (output baseband voltage at f_IF relative to that in the passband of the specific f_LO) was plotted against f_IF, where f_LO is fixed for the band of interest.
From the measured frequency selectivity characteristics of the proposed receiver system, the tunable RF bandpass selective behavior of the proposed QPS-FS receiver is confirmed. The bandwidth of the frequency down-conversion process is independent of the RF carrier frequency, depending only on the RF source impedance and the switch on-resistance values, baseband output capacitor value and the duty cycle of the clock pulse waveforms driving the switches.
Due to switching mixing, the RF signals present at f_RF = n·f_LO ± f_IF (n = 1, 2, 3, …) for any clock frequency (f_LO) of the receiver in the QPS-FS receiver system are also frequency down-converted to f_IF for the single-ended output mode of the receiver, while all other RF signals are almost completely suppressed and absent from the output voltage signals, due to the receiver's frequency selective behavior. The desired RF band of interest is the first harmonic (n = 1), for which the highest voltage conversion gain is obtained. For the desired RF band signal, the RF signals present at higher harmonics (n ≥ 2) act as interferers that cannot be completely suppressed by the receiver in its down-converted version of the baseband output signal. In order for the receiver to remain free of interferers and blockers, the receiver is forced to operate only over one octave of the RF frequency range.
When the output is changed from single-ended to differential, the down-converted second harmonic of the RF signal can be cancelled from the output voltage signal; and, the receiver becomes tunable over a much wider RF frequency band. In an ideal situation, the frequency down-converted signal level from the desired RF band increases by 6 dB and the frequency down-converted signal level from the second harmonic gets completely rejected when the output mode is changed from single-ended to differential. Figure 5c provides the measured harmonic rejection of the receiver for 700 MHz band when the output mode was changed from single-ended to differential. All the output signal levels for different harmonics were normalized with respect to the single-ended output mode baseband value for the desired RF band signal (n = 1). For the 700 MHz band of operation, about 30 dB suppression for the second harmonic (n = 2) of the RF signal from the receiver output was achieved with the differential receiver output mode; and, the desired band (n = 1) voltage conversion gain improved by approximately 6 dB.
Ideally, the QPS-FS receiver reflects back all of the blockers outside the passband of the frequency band of operation, with no blockers appearing in the down-converted baseband voltage signal at the receiver output. However, due to hardware impairments and imperfections, some of the blockers do appear at the output of the receiver in real situations, resulting in a reduction of the voltage conversion gain of the receiver for the desired band of operation. Figure 5d shows the normalized measured receiver voltage conversion gain desensitization due to a CW blocker at a frequency 50 MHz away from the carrier frequency of the band of operation (f_LO). In this measurement, the RF and blocker signals were CW signals at frequencies f_RF = f_LO + f_IF and f_LO + 50 MHz, respectively. The receiver is made tunable to any frequency band of operation by changing its local oscillator (LO) frequency to equal the desired band RF signal carrier frequency. The tunability of the receiver is further confirmed experimentally by plotting the voltage conversion gains from RF to baseband frequencies for different LO frequencies, shown in Fig. 6a. The receiver is tuned to different RF frequency bands (100 MHz, 400 MHz, 700 MHz and 1.0 GHz RF bands) by only setting the LO frequency (f_LO) equal to the desired band RF signal carrier frequency (f_c). In this case, the RF signal present at f_RF = f_c + f_IF is frequency down-converted to f_IF. When the input RF power was increased, the receiver started to behave nonlinearly, due to the nonlinear behavior of the switches. In order to measure the nonlinearity behavior of the receiver, the peak-to-peak output signal voltages for the baseband signals were recorded in dBmV (= 20 log10(V_pp/1 mV)) and plotted against the total input RF power. In-band and out-of-band receiver nonlinearities were characterized for the 700 MHz RF band of the QPS-FS receiver system. Figure 6a plots the measured output baseband signal levels at frequency f_IF = 0.1 MHz when the two-tone in-band RF signals were sent at frequencies of 701.1 MHz and 701 MHz, so that the down-converted baseband signal due to second-order in-band receiver nonlinearity fell at frequency f_IF = 0.1 MHz. Third-order in-band receiver nonlinearity was characterized by two-tone RF signals at frequencies of 700.55 MHz and 701 MHz, so that the down-converted baseband signal due to third-order in-band receiver nonlinearity fell at frequency f_IF = 0.1 MHz. The measured second- and third-order in-band receiver nonlinearity for the 700 MHz band, in terms of input intercept points (IIP2 and IIP3), for the proposed receiver were 11.6 dBm and 3.5 dBm, respectively.
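The quoted intercept points follow from the standard two-tone extrapolation: the fundamental output grows 1 dB per dB of input while the order-n product grows n dB/dB, so the extrapolated intersection lies ΔP/(n − 1) above the input power, where ΔP is the fundamental-to-product gap. A hedged worked example (the input power and output readings below are hypothetical, chosen only so the arithmetic lands on the reported in-band values) is:

def iip(order, p_in_dbm, p_fund_db, p_prod_db):
    # input-referred intercept point from one two-tone reading in the linear region
    return p_in_dbm + (p_fund_db - p_prod_db) / (order - 1)

print(f"IIP3 ~ {iip(3, -20.0, 40.0, -7.0):.1f} dBm")   # -> 3.5 dBm
print(f"IIP2 ~ {iip(2, -20.0, 40.0, 8.4):.1f} dBm")    # -> 11.6 dBm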
The out-of-band receiver nonlinearity for 700 MHz was characterized by two-tone out-of-band RF signals at frequencies of 900.1 MHz and 200 MHz, so that the RF signal due to second-order out-of-band receiver nonlinearity fell at a frequency of 700.1 MHz and the frequency down-converted baseband signal was obtained at f_IF = 0.1 MHz. Similarly, the third-order out-of-band receiver nonlinearity was characterized by two-tone RF signals at frequencies of 550.05 MHz and 400 MHz, so that the RF signal due to third-order out-of-band receiver nonlinearity appeared at a frequency of 700.1 MHz and the frequency down-converted baseband output signal was measured at f_IF = 0.1 MHz. The measured second- and third-order out-of-band receiver nonlinearity characteristics were plotted and are shown in Fig. 6b. The input intercept points for the out-of-band receiver nonlinearity in the 700 MHz band were estimated to be IIP2 = 6.8 dBm and IIP3 = 2.8 dBm. Figure 7a-d show the transmitted and received spectra and the transmitted and received constellation points, respectively, for 4-QAM and 16-QAM signals having a bandwidth of 0.1 MHz sent and received at a 700 MHz carrier frequency. The modulated RF signal at 700 MHz carrier frequency was obtained from the N5182A vector signal generator. The average approximate RF power for both signals during measurement was −37 dBm. In this measurement setup, all eight baseband outputs shown in Figs 3b and 4a were directly captured using the sampling oscilloscope working in high impedance mode. The resultant output voltage signals were processed digitally to compensate for any amplitude or phase imbalance in the phase shift network or DC offset in the output signal according to Eq. (7). The first 25% of the signal samples were used as training sequences to calibrate the imbalance parameters c_ij. The error vector magnitude (EVM) between the transmitted and received constellation points for both test cases was approximately 4%. The total power consumption of the proposed receiver is comprised of the dynamic power consumed due to switching of the transistors' gates and the static power consumed by the differential amplifiers from the DC supplies. There is no additional power needed for generating multiple clocks of reduced speed and duty cycle from a high-speed clock signal using an active multiphase clock generation circuit. The differential amplifiers in the basic test setup were operated on ±2.5 V dual supplies. The dynamic power wasted due to switching of the transistor gates was obtained from simulation in the ADS software using the ATF55143 transistor model. All the measured and simulated performance metrics of the complete proposed QPS-FS receiver are summarized in Table 1 for the 700 MHz band of operation. Table 2 provides theoretical comparisons of the conventional homodyne, PM, and the proposed QPS-FS receiver architectures.
[Figure 6 caption] Measured second-order and third-order out-of-band receiver nonlinearity characteristics plotted against the total input RF power; the clock switching frequency is f_LO = 700 MHz; the baseband output frequency is f_IF = 0.1 MHz; and the out-of-band two-tone RF signals are at 900.1 MHz and 200 MHz for the second-order receiver nonlinearity measurements and at 550.05 MHz and 400 MHz for the third order.
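The text does not reproduce Eq. (7); one plausible reading of the described calibration is a linear least-squares fit of the imbalance coefficients c_ij plus DC offsets on the training quarter of the symbols, followed by the EVM computation. A sketch under that assumption, with synthetic 4-QAM data and an assumed imbalance matrix, is:

import numpy as np
rng = np.random.default_rng(1)

tx = (rng.integers(0, 2, (400, 2)) * 2 - 1).astype(float)     # 4-QAM symbols: I, Q in {-1, +1}
G = np.array([[1.05, 0.08], [-0.06, 0.93]])                   # assumed gain/phase imbalance
rx = tx @ G.T + np.array([0.05, -0.03]) + 0.02 * rng.normal(size=tx.shape)

n_tr = len(tx) // 4                                           # first 25% as training sequence
A = np.hstack([rx[:n_tr], np.ones((n_tr, 1))])                # model: tx ~ [rx, 1] @ coef
coef, *_ = np.linalg.lstsq(A, tx[:n_tr], rcond=None)          # fits the c_ij and DC offsets
eq = np.hstack([rx, np.ones((len(rx), 1))]) @ coef            # corrected constellation

evm = 100 * np.sqrt(np.mean((eq - tx) ** 2) / np.mean(tx ** 2))
print(f"EVM ~ {evm:.1f} %")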
In summary, the conventional passive mixer receivers are known in the art to have high linearity (due to the involvement of passive circuit elements, i.e., switches) and high selectivity (due to the impedance translation property) compared to conventional homodyne architectures, but they lack performance in terms of gain and noise figure [19]. Indeed, the proposed QPS-FS receiver utilizes a two-transistor impedance translation circuit in each of the four paths of the phase shift network comprising passive circuit elements (hybrid circuits). Because of this, the QPS-FS receiver suffers in terms of gain and noise figure, but its linearity and selectivity are superior to those of conventional homodyne architectures. These well-known architectural features of the conventional homodyne receiver architectures and the passive mixer architectures, in comparison to the proposed QPS-FS receiver architecture, are summarized in Table 2. The conventional passive mixer receivers employ an additional multi-phase clock generator circuit that converts a single high frequency clock signal into multiple same-speed clock signals with reduced duty cycles. This multi-phase clock generator circuit consumes additional power in the conventional passive mixer architecture. The need for a high frequency clock signal and a power-consuming multi-phase clock generator circuit has been eliminated from the QPS-FS receiver, thus reducing the overall power consumption of the QPS-FS receiver and extending its frequency coverage in comparison to the conventional passive mixer receivers. In addition, the QPS-FS receiver is tolerant to clock jitter, as confirmed through simulation. Table 3 provides a performance comparison of some recent reported results of homodyne receivers, passive mixer receivers, and the proposed QPS-FS receiver.
Table 1. Performance summary of the proposed QPS-FS receiver. FC - receiver frequency coverage range. BW - RF down-conversion bandwidth. CG - total combined receiver voltage conversion gain. IB IIP2 - in-band 2nd order input intercept point. IB IIP3 - in-band 3rd order input intercept point. OOB IIP2 - out-of-band 2nd order input intercept point. OOB IIP3 - out-of-band 3rd order input intercept point. HS2 - second RF harmonic suppression relative to the fundamental harmonic band signal at the receiver output due to the change in baseband output mode from single-ended to differential. CG_D,−10dBm - receiver voltage conversion gain desensitization due to a CW blocker having −10 dBm power present at a frequency 50 MHz away from the desired RF band signal carrier frequency. P_D - dynamic switching power consumed by the transistor switches, obtained from simulation.
Discussion
In the ideal proposed QPS-FS receiver, all of the blocker and interferer power outside the passband of the receiver band of operation is reflected, with no appearance in the output baseband voltages taken from the output capacitors. Half of the blockers' power is reflected back to the 180° hybrid, while the other half is dissipated in the matched isolation resistor terminations (T1 and T2 shown in Fig. 3b). Each of these terminations absorbs blocker/interferer power that is 6 dB less than the total input blocker/interferer power in the receiver. For a CW blocker presented at 750 MHz having −10 dBm power at the receiver input, along with a desired RF signal (f_RF) at 700.1 MHz having −33 dBm power, the actual measured blocker powers in T1 and T2 were −17.6 dBm and −18.2 dBm, respectively, at 750 MHz, which were 1.6 dB and 2.2 dB less than the interferer power levels that would have been obtained with an ideal receiver. This measurement confirmed the blocker tolerant behavior of the QPS-FS receiver system. Measured reflected blocker power that is less than the ideal value can be attributed to loss in the phase shift network, impedance mismatches and the finite (nonzero) stopband impedance of the ITCs.
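The 6 dB figure and the quoted shortfalls are simple dB bookkeeping; as a worked check:

total_blocker_dbm = -10.0                    # CW blocker power at the receiver input
ideal_per_term = total_blocker_dbm - 6.0     # ideally a quarter of the power per termination
for name, measured in (("T1", -17.6), ("T2", -18.2)):
    print(f"{name}: ideal {ideal_per_term:.1f} dBm, measured {measured:.1f} dBm, "
          f"shortfall {ideal_per_term - measured:.1f} dB")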
Although it is outside the scope of this article to implement an actual energy harvesting system, the blocker/ interferer power reaching hybrid port terminations can be combined and supplied to an actual wideband RF-to-DC rectifier as an energy harvesting system, where the RF power is converted to DC power and stored for further usage.
Conclusions
A novel radio frequency (RF) blocker and local oscillator (LO) clock jitter tolerant receiver architecture has been proposed in this article. The receiver architecture is linear, and it uses passive signal division networks in the RF and LO paths of the receiver. The proposed quadrature phase shift frequency selective (QPS-FS) receiver is passive, frequency selective and tunable compared to conventional homodyne architectures, while requiring a much slower clock signal source than passive mixer (PM) based receivers, thus significantly reducing the needed power consumption of the proposed receiver architecture and extending the frequency coverage of the receiver to the maximum of the source clock frequency. The proposed QPS-FS receiver employs a linear and passive phase shift network in the RF path and slower speed complementary clock signals that can be directly connected to the switching transistors' gates. Elimination of an active multiphase clock generation circuit and reduction of the operating frequency decrease the overall power consumption of the proposed receiver system. Sharing of common clock signals by the switching transistors helps in reducing the effect of clock jitter on the overall receiver performance.
Table 2. Theoretical comparison summary of the conventional homodyne receiver (H), the passive mixer receiver (P) and the proposed QPS-FS (Q) receiver architectures. f_C - RF signal carrier frequency. f_LM - master LO clock source frequency. B - is the receiver blocker tolerant. J - is the receiver clock jitter tolerant. T - is the receiver frequency selective and tunable. P_S/D - static and dynamic power consumed by the multi-phase clock generation circuit/transistors. P - total static/dynamic power consumed by the receiver. CG - receiver voltage conversion gain. NF - receiver noise figure. FC - frequency coverage of the receiver given a master LO clock source of fixed maximum frequency.
Table 3. Performance comparison of some recent reported conventional homodyne receivers, passive mixer receivers and the proposed QPS-FS receiver architectures.
"year": 2017,
"sha1": "a8b4fe45390fefbb78fa20f13cb522c71c493c74",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-10023-8.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "a8b4fe45390fefbb78fa20f13cb522c71c493c74",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Immunogenicity and Neutralizing Activity Comparison of SARS-CoV-2 Spike Full-Length and Subunit Domain Proteins in Young Adult and Old-Aged Mice
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) continues to expand the pandemic across the globe. Although SARS-CoV-2 vaccines were rapidly developed and approved for emergency use in humans, supply and production difficulties are slowing down the global vaccination program. The efficacy of many different versions of vaccine candidates and adjuvant effects remain unknown, particularly in the elderly. In this study, we compared the immunogenic properties of the SARS-CoV-2 full-length spike (S) ectodomain, the S1 subunit containing the receptor-binding domain, and the S2 subunit containing the fusion domain in young adult and aged mice. Full-length S was more immunogenic and effective in inducing IgG antibodies after low dose vaccination, compared to the S1 subunit. Old-aged mice induced SARS-CoV-2 spike-specific IgG antibodies with neutralizing activity after high dose S vaccination. With an increased vaccine dose, S1 was highly effective in inducing neutralizing and receptor-binding inhibiting antibodies, although both the S1 and S2 subunit domain vaccines were similarly immunogenic. Adjuvant effects were significant for effective induction of IgG1 and IgG2a isotypes, neutralizing and receptor-binding inhibiting antibodies, and antibody-secreting B cell and interferon-γ secreting T cell immune responses. The results of this study provide information for designing SARS-CoV-2 spike vaccine antigens and for effective vaccination in the elderly.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has continued to spread rapidly across the globe [1] since the first outbreak in December 2019 [2][3][4]. SARS-CoV-2 causes the pandemic coronavirus disease 2019 (COVID-19), leading to acute respiratory distress syndrome, significant mortality, and lingering symptoms in some individuals after recovery [5]. Over 112 million confirmed cases and 2.48 million deaths had been reported globally as of 24 February 2021 [1].
Pseudovirus Neutralization Assay
The neutralizing antibody titers were determined by a pseudovirus-based assay as previously described [28]. Briefly, immune sera were heat-inactivated at 56 °C for 30 min prior to neutralization assays. Lentiviruses pseudotyped with SARS-CoV-2 S were preincubated with an equal volume of serially diluted immune sera for 1 h at room temperature (RT), then the virus-antibody mixtures were added to HEK293T-hACE2 cells in a 96-well plate. After a 2 h incubation, the inoculum was replaced with fresh medium. Cells were lysed 48 h later, and luciferase activity was measured using a luciferin-containing substrate (Promega, Durham, NC, USA). Controls included a cell-only control, a virus without any antibody control, and positive control sera.
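A hedged sketch (not the authors' analysis code) of how such luciferase readouts are commonly reduced to a 50% neutralization titer: percent neutralization is computed relative to the virus-only and cell-only controls, then the 50% crossing is found by log-linear interpolation. The dilutions and RLU values below are hypothetical:

import numpy as np

def nt50(dilutions, rlu, virus_only, cell_only):
    neut = 100 * (1 - (np.asarray(rlu) - cell_only) / (virus_only - cell_only))
    logd = np.log10(dilutions)
    for i in range(len(neut) - 1):               # first interval crossing 50%
        if neut[i] >= 50 > neut[i + 1]:
            f = (neut[i] - 50) / (neut[i] - neut[i + 1])
            return 10 ** (logd[i] + f * (logd[i + 1] - logd[i]))
    return None                                  # no 50% crossing in the tested range

# Hypothetical serial two-fold dilutions and luciferase (RLU) readings:
print(nt50([20, 40, 80, 160, 320], [800, 2000, 6000, 12000, 15000],
           virus_only=16000, cell_only=200))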
hACE2 Receptor Binding and Inhibition Assay
To confirm the binding ability of the recombinant proteins to the receptor hACE2 protein expressed from HEK293 cells, 96-well plates were coated with 0.8 or 2 µg of S (S1-S2) or S1 protein at 4 °C. One day later, serially diluted soluble hACE2 (0.5-2 µg/mL) in phosphate buffered saline with Tween-20 (PBST) was added to the plates, which were incubated for 2 h at room temperature (RT) after blocking for 1 h and washing the precoated plates. The binding amounts were determined by horseradish peroxidase (HRP)-conjugated anti-human IgG (Southern Biotech, Birmingham, AL, USA) and 3,3′,5,5′-tetramethylbenzidine (TMB, eBioscience, San Diego, CA, USA).
To detect whether the immune sera can block the binding between SARS-CoV-2 RBD and hACE2, the receptor binding inhibition activity was performed as previously described [29]. Briefly, the recombinant S1 (RBD) protein (400 ng/mL per well) was captured on ELISA plates. Boost immune sera at three-fold dilutions were added onto the plates. After 2 h incubation at RT, the plates were washed and applied with hACE2-Fc (0.5 µg/mL) in PBST at RT for 2 h. The inhibition activity was determined using anti-human IgG-HRP. Pre-immunized sera (naïve) were used as a negative control.
Enzyme-Linked Immunosorbent Assay (ELISA)
Antigen-specific antibody responses were determined from immune sera collected after immunization by ELISA. Briefly, serially diluted immune sera were applied to a 96-well plate precoated with full-length S, S1, and S2 protein (200 ng/mL per well). The levels of antibodies were determined by HRP-conjugated anti-mouse IgG, IgG1, IgG2a (Southern Biotech) and TMB (eBioscience).
Enzyme-Linked Immunospot Assay (ELISpot)
To determine antibody-secreting cells (ASC) specific for full-length S protein, spleen cells were prepared from mice after boost immunization and applied onto 96-well ELISpot plates precoated with full-length S protein (200 ng/mL per well). After 48 h, the antibody responses were determined by anti-mouse IgG and IgG isotypes (IgG1, IgG2a). For cytokine-secreting cells, splenocytes (10^6 cells per well) were cultured on 96-well ELISpot plates precoated with anti-mouse IFN-γ capture monoclonal antibody (mAb, BD Biosciences, San Diego, CA, USA) in the presence of pooled S peptides (5 µg/mL, BEI Resources) and full-length S protein (1 µg/mL). The cytokine-secreting cell spots were developed with biotinylated anti-mouse IFN-γ detection antibody and alkaline phosphatase-labeled streptavidin (BD Pharmingen, San Diego, CA, USA). The spots were visualized with a 3,3′-diaminobenzidine substrate and counted by an ELISpot reader (BioSys, Miami, FL, USA).
Flow Cytometry
For T cell immune responses specific for virus antigens, the splenocytes were harvested from boost-immunized mice and stimulated in vitro with pooled S peptides and full-length S protein for 24 h. The lymphocytes were stained with anti-mouse CD4 (RM4-5, eBioscience), CD8 (53-6.7, eBioscience), and CD3 (17A2, BioLegend) monoclonal antibodies. A BD Cytofix/Cytoperm™ Plus kit was used to fix and permeabilize cells prior to staining with anti-mouse IFN-γ (XMG1.2, eBioscience) monoclonal antibody. All samples were analyzed on a Becton-Dickinson LSR-II/Fortessa flow cytometer (BD Biosciences) using FlowJo software (Tree Star Inc., Ashland, OR, USA).
Statistical Analysis
All results are presented as mean ± standard error of the mean (SEM). Statistical significance for all experiments was determined by one- or two-way analysis of variance (ANOVA). Prism software (GraphPad Software, Inc., San Diego, CA, USA) was used for all data analyses. The comparison used to generate a p value is indicated by horizontal lines (*, p < 0.05; **, p < 0.01; ***, p < 0.001).
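For the one-way case, the stated ANOVA reduces to a short computation; a minimal sketch with placeholder group values (statsmodels would be the usual route for a two-way design) is:

from scipy.stats import f_oneway

young_S  = [4.9, 5.1, 5.3, 5.0, 5.2]   # e.g., hypothetical log10 IgG endpoint titers
young_S1 = [3.2, 3.5, 3.1, 3.6, 3.3]
aged_S   = [4.0, 4.2, 3.9, 4.3, 4.1]

F, p = f_oneway(young_S, young_S1, aged_S)
print(f"F = {F:.2f}, P = {p:.4f}")     # compare against the 0.05/0.01/0.001 thresholds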
Results
3.1. SARS-CoV-2 Full-Length Spike S1-S2 Protein Is More Immunogenic Than S1 Subunit Protein
It is important to determine the functional integrity of vaccine candidates and their correlation with immunogenicity. The receptor hACE2 binding activity of the full-length ectodomain spike (S: S1-S2) was compared with that of the S1 subunit protein containing the RBD (Figure 1). The full-length S coated ELISA plates at low (0.8 µg/mL, 5.95 nM, Figure 1B) and high (2 µg/mL, 14.89 nM, Figure 1C) concentrations showed lower hACE2 binding reactivity values than the S1 subunit plates (10.46 and 16.1 nM, respectively). These results suggest that the S1 subunit containing the RBD has similar or slightly higher receptor binding activity against hACE2 compared to the full-length S, possibly due to higher molarity.
Figure 1. The receptor binding properties were determined using serially diluted soluble hACE2-Fc (0.5-2 µg/mL) on 96-well plates precoated with 0.8 µg (B) or 2 µg (C) of S (S1-S2) and S1 subunit proteins. Due to the different molecular masses of the S and S1 proteins at the same concentration, the molarity in nanomoles (nM) is indicated for each coated protein.
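The nanomolar values quoted above follow from the standard conversion c[nM] = 1000 · c[µg/mL] / M[kDa]. A small sketch of this conversion is given below; the molar masses are assumptions back-computed from the quoted numbers (roughly 134 kDa for the S ectodomain and roughly 76.5 kDa for S1), so they are illustrative only; note that the quoted 16.1 nM for 2 µg/mL of S1 would correspond to a different mass under this formula.

# Sketch of the concentration-to-molarity conversion behind the nM values
# quoted for the coated proteins: c[nM] = c[ug/mL] / M[kDa] * 1000.
# The molar masses below are assumptions back-computed from the quoted
# numbers (e.g. 0.8 ug/mL of a ~134 kDa S ectodomain gives ~5.97 nM).
def ug_per_ml_to_nM(conc_ug_ml: float, molar_mass_kda: float) -> float:
    # 1 ug/mL = 1e-3 g/L; dividing by M (kDa * 1000 g/mol) gives mol/L,
    # which works out to conc/kDa micromolar * 1e-3, i.e. 1000*conc/kDa nM.
    return conc_ug_ml / molar_mass_kda * 1000.0

MASS_KDA = {"S (S1-S2)": 134.0, "S1": 76.5}  # assumed, illustrative values
for protein, mass in MASS_KDA.items():
    for conc in (0.8, 2.0):
        print(f"{protein}: {conc} ug/mL ~ {ug_per_ml_to_nM(conc, mass):.2f} nM")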
Initially, we compared the immunogenicity of the full-length ectodomain S (aa#16-1213) protein in comparison with the S1 subunit (aa#16-685) protein (Figure 2). Young adult BALB/c mice (6-8 weeks old, n = 5) were intramuscularly (IM) immunized with a low dose (0.8 µg) of S (S1-S2) or S1 protein in the presence of AS01-like adjuvant (QS-21 + MPL) at weeks 0 and 4. To determine the aging effects on immunogenicity, 15-month (M) old-aged mice (n = 5) were immunized with the same low dose (0.8 µg) of S plus QS-21 + MPL adjuvant. At 3 weeks after prime, high levels of IgG antibodies specific for S protein were induced in the S young adult group, whereas no or low levels of S-specific IgG antibodies were observed in the young adult S1 and aged S (S1-S2) groups (Figure 2A). At 3 weeks after boost, the aged S group induced substantial levels of S-specific IgG antibodies, although at lower levels compared to those in the young adult S group (Figure 2B). Substantial levels of S1-specific IgG antibodies were induced in the young adult S group but not in the aged mouse S group with a 0.8 µg dose (Figure 2C). The old-aged mice vaccinated with a 0.8 µg dose showed more defects in inducing IgG2a than IgG1 isotype antibodies specific for full-length S compared to those of the young adult S group (Figure 2D). These results suggest that SARS-CoV-2 full-length spike protein is more immunogenic than the S1 subunit protein and that higher doses of protein vaccines might be needed to induce comparable IgG antibodies in aged populations.
Aged Mice and S1 and S2 Subunit Domain Proteins Need a High Vaccine Dose for Effective Induction of SARS-CoV-2 Spike-Specific IgG Antibodies
Since a low dose (0.8 µg) of S1 subunit protein vaccine was not immunogenic even in young adult mice, and the full-length 0.8 µg S was not in aged mice, we determined whether a high dose would be required for effective induction of SARS-CoV-2 spike-specific IgG antibodies for aged mice and the subdomain S1 protein (Figure 2E,F). The aged mouse group that was immunized with a high dose (4 µg) of S protein plus adjuvant induced comparable levels of S and S1 specific IgG antibodies as those in the low dose (0.8 µg) S young adult group (Figure 2E,F). Moreover, to determine the adjuvant effects, the same 0.8 µg dose of full-length S with and without QS-21 + MPL adjuvant was included in the young adult mouse groups (Figure 2E,F). The S (S1-S2) group without adjuvant induced lower levels of S-specific IgG and the lowest levels of S1 (RBD) specific IgG antibodies, compared to those in S vaccination with adjuvant. The adjuvanted S1 (4 µg) young adult mice showed the highest level of S1 specific IgG antibody responses (Figure 2F). These results indicate that effective induction of S-specific IgG antibody responses was observed with a high vaccine dose (4 µg) of adjuvanted full-length S in aged mice and of the subunit S1 vaccine in young adult mice. Furthermore, these results support the significant roles of adjuvants in inducing S1 and full-length S specific IgG antibody responses.
Figure 2. Full-length S protein is more immunogenic than S1 subunit protein. Young and aged BALB/c mice (n = 6 to 8) were intramuscularly (IM) immunized twice with S1 (0.8 µg) and S (0.8 µg or 4 µg) in the presence of adjuvants (MPL + QS-21, 1 µg + 10 µg), or adjuvant only (mock). Antigen-specific antibody responses were determined by ELISA. (A) IgG specific for full-length S in prime sera (100x dilution) collected at 3 weeks after prime immunization. (B,C) IgG specific for full-length S and S1 subunit protein in boost sera. Data were compared with the mock control. (D) Antibodies specific for full-length S. (E,F) Comparison of low and high dose vaccines inducing IgG antibodies specific for full-length S and S1 protein in boost sera. Data were compared between S (0.8 µg) alone without adjuvant and adjuvanted S (0.8 µg, ***; p < 0.001) or adjuvanted S1 (4 µg, +++; p < 0.001, ++; p < 0.01) in young (y) age mice (n = 6), and adjuvanted S (4 µg, ###; p < 0.001) in old aged (a) mice (n = 8). S-0.8 (y): S 0.8 µg vaccination of young adult mice, S-0.8 + adj (y): S 0.8 µg + adjuvant vaccination of young adult mice, S-0.8 + adj (a): S 0.8 µg + adjuvant vaccination of old aged mice, S1-4 + adj (y): S1 4 µg + adjuvant vaccination of young adult mice, S-4 + adj (a): S 4 µg + adjuvant vaccination of old aged mice. Adj: adjuvants (MPL + QS-21, 1 µg + 10 µg). Statistical significance was calculated using one- or two-way ANOVA and Bonferroni's multiple-comparison test. Error bars indicate the mean ± standard errors of the mean (SEM). **; p < 0.01, ***; p < 0.001 compared to the mock control.
Adjuvanted Spike Protein Vaccinations Effectively Induce Subunit S Domain Specific IgG1 and IgG2a Isotype Antibodies
In an additional experimental setting to determine IgG isotypes, groups of young adult mice (n = 5) were IM prime-boost immunized with full-length S (0.8 µg) with or without adjuvant QS-21 + MPL (Figure 3), S1 subunit (4 µg) + adjuvant, or S2 subunit (4 µg) + adjuvant (Figure 4). The aged mice (n = 5) were IM prime-boost immunized with S (4 µg) + adjuvant for comparison (Figure 3). Both the adjuvanted 4 µg S immunized 15M old-aged mice and the 0.8 µg S immunized young adult mice showed similar levels of IgG1 and IgG2a antibodies specific for full-length S and the subunits S1 and S2 (Figure 3). The unadjuvanted S immunized young adult mice induced lower levels of IgG1 antibody for S and S2 (Figure 3A,C), even lower levels of S1-recognizing IgG1 antibody (Figure 3B), and undetectable levels of IgG2a isotype antibody, compared to those in adjuvanted S immunization (Figure 3). It is interesting to note that 4 µg of S2 vaccination induced higher levels of S-specific IgG1 and IgG2a antibodies than S1 vaccination (Figure 4A), suggesting that S2 might be more immunogenic than S1. As expected, the adjuvanted 4 µg S1 group induced S and S1 specific IgG1 and IgG2a isotype antibodies, whereas the adjuvanted 4 µg S2 group induced S and S2 specific IgG1 and IgG2a isotype antibodies (Figure 4). Overall, a dose of 4 µg S protein was highly immunogenic in aged mice, and 4 µg S1 or S2 proteins were comparably immunogenic in young adult mice. The AS01-like adjuvant QS-21 + MPL was effective in enhancing S vaccine specific IgG1 and IgG2a isotype antibodies in young and aged mice.
Figure 4. IgG isotype antibody responses to S1 or S2 (4 µg) vaccination in young adult age mice. Young BALB/c mice (n = 6) were IM immunized twice with 4 µg of S1 (S1-4 + adj) and S2 (S2-4 + adj) with adjuvants (MPL + QS-21, 1 µg + 10 µg) or adjuvant only (mock). Antigen-specific IgG isotype antibody responses were determined in boost sera by ELISA. IgG1 and IgG2a isotype antibodies specific for full-length S protein (A), for S1 subunit protein (B), and for S2 subunit protein (C). (y): indicates young adult age mice in the group. Statistical significance was calculated using two-way ANOVA and Bonferroni's multiple-comparison test. Error bars indicate the mean ± SEM. *; p < 0.05, **; p < 0.01, ***; p < 0.001 compared with the mock control.
Vaccination with Adjuvanted S or S1 Proteins Induced Pseudovirus Neutralizing and Receptor Inhibiting Antibodies
Induction of neutralizing antibodies after vaccination is considered a major correlate of protection against SARS-CoV-2. Boost sera from the adjuvanted 0.8 µg S-immunized young mice and the 4 µg S-immunized old mice showed similarly high SARS-CoV-2 pseudovirus 50%-reduction neutralizing antibody titers (810) (Figure 5A). Boost sera from the unadjuvanted 0.8 µg S young adult mice could not neutralize SARS-CoV-2 pseudovirus at a meaningful level (Figure 5A). Adjuvanted 4 µg S1 vaccination induced high neutralizing titers of approximately 2430-7290 in boost sera, whereas the adjuvanted 4 µg S2 group induced low neutralizing titers of 270, similar to the unadjuvanted S1 group (Figure 5B). The second boost with 0.8 µg S was found to increase neutralizing titers to a range of 7290, which was retained for over 4 months (Figure 5C). Furthermore, the 15M aged mice with a secondary S boost induced high neutralizing titers (~7290) (Figure 5C). Inactivated SARS-CoV-2 (0.8 µg prime, 10 µg boost 2 times) immune sera showed a lower range of neutralizing titers (90 to 270, Figure 5C).
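The titers quoted above are reciprocal dilutions in a 3-fold series (270, 810, 2430, 7290). As an illustration of how a 50% endpoint can be read off such a series, the sketch below interpolates log-linearly between the two dilutions bracketing 50% neutralization; the neutralization percentages are invented placeholder values, and this is not necessarily the exact readout used in the study.

# Hedged sketch: 50% neutralization titer (NT50) from a 3-fold serial
# dilution series by log-linear interpolation. Data values are hypothetical.
import math

dilutions = [90, 270, 810, 2430, 7290]          # reciprocal serum dilutions
neutralization = [95.0, 88.0, 63.0, 41.0, 18.0] # percent reduction, made up

def nt50(dils, neut, cutoff=50.0):
    # Walk consecutive pairs and interpolate where the curve crosses cutoff.
    for (d1, n1), (d2, n2) in zip(zip(dils, neut), zip(dils[1:], neut[1:])):
        if n1 >= cutoff >= n2:
            frac = (n1 - cutoff) / (n1 - n2)
            return 10 ** (math.log10(d1) + frac * (math.log10(d2) - math.log10(d1)))
    return None

print(f"NT50 ~ {nt50(dilutions, neutralization):.0f}")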
A major mechanism of neutralizing immunity might be antibody-mediated interference with binding of the SARS-CoV-2 spike RBD to the hACE2 receptor. Low levels of hACE2 binding inhibition (50%) titers of ~30 were observed with boost sera from the 0.8 µg S young and 4 µg S aged mice, respectively (Figure 5D). A high titer (over 810) of hACE2 binding inhibition antibodies was induced in adjuvanted 4 µg S1 but not S2 young mice (Figure 5E). Notably, 4 µg S1 vaccination of young adult mice could induce a low level of receptor-inhibiting activity titers (~90) even in the absence of adjuvant (Figure 5E). Moreover, a second boost with adjuvanted S (0.8 µg) but not with inactivated SARS-CoV-2 in young adult mice resulted in a high titer (~810) of hACE2 binding inhibition (Figure 5F). These results suggest that vaccination with adjuvanted S or S1 proteins effectively induces high titers of pseudovirus neutralizing and receptor inhibiting antibodies. It is likely that antibodies to the S2 subunit can partially contribute to neutralizing SARS-CoV-2 pseudovirus through a different mechanism.
Figure 5. Inhibition percentage (%) of hACE2 binding to RBD was measured after incubation with serially diluted immune sera in the plate precoated with hACE2 protein. Immune sera of the groups are the same as in (A-C). Statistical significance was calculated using two-way ANOVA and Bonferroni's multiple-comparison test. Error bars indicate the mean ± SEM. **; p < 0.01, ***; p < 0.001 compared to the mock or no adjuvant control.
Adjuvanted Vaccination Enhances S-Specific Cellular Immune Responses
Adjuvanted S vaccination could maintain neutralizing and receptor binding inhibition antibodies for over 4 months (Figure 5C,F). At 19 weeks after boost, S-specific antibody-secreting cells (ASCs) in splenocytes were determined (Figure 6A). S-specific IgG, IgG1, and IgG2a ASC responses were induced at significantly higher levels in spleen cells from adjuvanted-S-vaccinated young (y) adult and aged (a) mice, compared to those from unadjuvanted S-only vaccinated mice (Figure 6A). IFN-γ-producing splenocytes were also determined after boost. Upon in vitro stimulation of splenocytes with S peptides or S protein, IFN-γ-secreting cell spots were detected at the highest level in the adjuvanted-S-vaccinated young adult mice (Figure 6B). The aged mice with adjuvanted S vaccination induced IFN-γ-secreting cell spots at low levels, as observed in the young adult mice with unadjuvanted S vaccination (Figure 6B). Consistently, IFN-γ+ CD4 T and IFN-γ+ CD8 T splenocytes were induced at the highest level in the S-vaccinated young adult mice (Figure 6C,D). The number of IFN-γ+ CD4 T cells was higher than that of IFN-γ+ CD8 T cells. Both adjuvanted-S-immunized aged mice and unadjuvanted-S-immunized young adult mice induced similarly moderate levels of IFN-γ+ CD4 T and IFN-γ+ CD8 T splenocytes, which were higher than those from mock control mice (Figure 6C,D).
Discussion
SARS-CoV-2 full-length S mRNA and recombinant adenovirus vector vaccines have recently been approved under emergency use authorization for human vaccination. Many other SARS-CoV-2 vaccine candidates based on diverse vaccine modalities are under preclinical and clinical development. It is therefore of high significance to better understand the immunogenic differences between the different subdomains of the SARS-CoV-2 spike protein, on which there are conflicting reports. The first vaccine approved for human use, the SARS-CoV-2 mRNA vaccine BNT162b2, encodes the full-length spike [30]. A SARS-CoV-2 RBD-encoding mRNA vaccine (BNT162b1) was also assessed in Phase I and II clinical trial studies [15,30]. The immunogenicity of BNT162b1 and BNT162b2 was comparable in healthy individuals [30]. In the overall safety assessments, BNT162b1 RBD mRNA vaccination was associated with a higher incidence and severity of systemic reactions than the BNT162b2 full-length S mRNA now on the market for human vaccination [15,30]. A prior preclinical study reported that a 1 µg dose of the mRNA-1273 pre-fusion-stabilized SARS-CoV-2 S vaccine in lipid nanoparticles induced comparable levels of S-specific antibodies as 1 µg of S trimer in TLR4 (MPL) agonist adjuvant in mice [31]. We compared the immunogenicity of full-length S and subunit S1 at a low dose (0.8 µg) in AS01-like adjuvant and found that S was more immunogenic than S1 in inducing spike-specific IgG antibodies. With 4 µg dose vaccination, the S1 subunit was highly effective in inducing S-specific binding IgG, neutralizing, and receptor-binding-inhibiting antibodies. A second boost strategy with a low dose (0.8 µg) of S was also highly effective in further enhancing neutralizing antibodies, which lasted for at least 4 months. Consistent with this study, recent studies reported that modified vaccinia Ankara and live-attenuated YF17D vectors expressing SARS-CoV-2 S, but not S1, induced strong neutralizing antibody responses in mice [32] and hamsters [33], respectively. A lentiviral vector vaccine expressing full-length S was more immunogenic in mice than the lentivirus expressing the S1 subunit [21]. A SARS-CoV-2 S DNA vaccine was also moderately more effective in inducing neutralizing antibodies than S1 or RBD DNA vaccines in monkeys [7]. S1 subunit protein was shown to be superior to RBD alone in inducing neutralizing antibodies as a SARS-CoV-2 subunit vaccine in mice immunized with a 10 µg dose in alum adjuvant, but full-length S was not included for comparison in that study [22]. In contrast, another study reported comparable binding IgG antibodies and a higher neutralizing titer with 50 µg SARS-CoV-2 RBD protein mixed with Emulsigen adjuvant, compared to 50 µg SARS-CoV-2 S1 in rabbits, both of which were more effective in inducing S-specific IgG and neutralizing antibodies than full-length S [23]. Vaccine doses, animal species, and adjuvants are factors that can contribute to the differential outcomes among the different studies, although the reasons for these differing results on the immunogenic properties of S and its subunit domains remain unclear.
A recent study reported that the glycosylation patterns were found to be similar on the SARS-CoV-2 spike (S) ectodomain proteins expressed in insect and human cells [34]. Du et al. (2009) compared the immunogenicity and protective immunity of recombinant SARS-CoV RBD proteins expressed in mammalian cells, insect cells, and Escherichia coli [35]. Intact conformation and authentic antigenicity were retained in all recombinant RBD proteins expressed in mammalian cells, insect cells, and E. coli [35]. Both recombinant SARS-CoV RBD proteins expressed in 293T mammalian and Sf9 insect cells maintained similar immunogenicity to induce RBD-specific antibodies in vaccinated mice, which was higher than that produced in E. coli [35]. Mammalian 293T cell-expressed RBD protein vaccination induced higher levels of neutralizing antibodies than those induced by Sf9 insect or E. coli expressed RBD [35]. Therefore, these prior studies suggest that the low immunogenicity of the S1 protein is not because of the mammalian expression system compared to the insect cell expressed full-length S protein but rather a difference in immunogenic conformation between the subunit S1 and full-length S.
The presentation of RBD in nanoparticles was reported to be effective in inducing neutralizing antibodies, although the RBD itself is poorly immunogenic as a monomer [36,37]. Nonetheless, RBD-specific immunity alone would not provide protection against distantly related viruses, due to substantial sequence variation among the different coronavirus S RBDs and its limited T cell epitopes [7]. The S2 subunit domain is known to have sequence and structural conservation among the different coronaviruses and conserved epitopes for potential neutralizing antibodies [8,9]. We found that S2 was as immunogenic as S1 and could induce a low level of neutralizing antibodies independent of hACE2 binding inhibition, which might provide advantages in broadening the protection against viruses with mutations in RBD, whereas S and S1 vaccination induced higher levels of neutralizing antibodies that correlated with titers of hACE2 binding inhibition. Meanwhile, the immunogenicity of inactivated SARS-CoV-2 in inducing neutralizing antibodies was lower than that of the S or S1 subunit, although further studies are needed for this conclusion. AS01 (QS-21 + MPL) liposome adjuvant is approved for use in herpes zoster vaccination (shingles, recommended for ≥50 years old) [24][25][26][27]. The effects of the AS01-like adjuvant were found to be significant in enhancing Th1-type IgG2a isotype and S1-specific IgG antibodies, and neutralizing and hACE2 binding inhibition activity titers, upon S vaccination in young and aged mice. In addition, cellular responses of S-specific IgG-secreting cells and IFN-γ+ CD4 and CD8 T cells were significantly enhanced by the AS01-like adjuvant in SARS-CoV-2 S vaccination. In contrast, alum-adjuvanted SARS-CoV-2 S subunit vaccines were demonstrated to induce Th2-biased immune responses [22]. Overall, adjuvanted SARS-CoV-2 S vaccination is expected to enhance protective IgG and cellular immune responses to the S1 and S2 domains in young and elderly populations.
Conclusions
This study investigated the immunogenic properties of SARS-CoV-2 full-length spike ectodomain (S: S1-S2) in young adult and old-aged mice, S1 with RBD, S2 with fusion domain, and the effects of AS01-like adjuvant (QS-21 + MPL). At a low dose (0.8 µg), full-length S was more immunogenic than the S1 subunit domain. With a 4-µg vaccine dose, S1 was highly effective in inducing neutralizing and receptor-binding inhibition antibodies, although both S1 and S2 subunit domain vaccines were similarly immunogenic. Aged mice required a high vaccine dose (4 µg) of S for effective induction of SARS-CoV-2 spike-specific IgG antibodies. Adjuvant was required to effectively induce IgG1 and IgG2a isotypes, neutralizing antibodies, and ASC and IFN-γ T cell immune responses. Results of this study provide insights into designing SARS-CoV-2 spike vaccine antigens. | 2021-04-04T06:16:27.051Z | 2021-03-29T00:00:00.000 | {
"year": 2021,
"sha1": "e4f553aaf0da8f5576c6b05ed2e11c97cf1d138a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-393X/9/4/316/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "35ed6fe4781680d8c01916b342ce3a840660128c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230604060 | pes2o/s2orc | v3-fos-license | Hybrid Fixed Point Theorem with Applications to Forced Damped Oscillations and Infinite Systems of Fractional Order Differential Equations
In this manuscript, hybrid common fixed point results in the setting of a b-metric space are established. Our results generalize the results of Fisher, Khan, and Piri et al. for set-valued mappings in b-metric spaces. Applications to forced damped oscillations, infinite systems of fractional order differential equations, and systems of functional equations are also studied. We construct an example to support our main result.
Introduction and Preliminaries
The idea of a metric space was generalized by Czerwik [1] and Bakhtin [2], who introduced generalized metric spaces called b-metric spaces. Several researchers took up Czerwik's idea and established interesting results; for details, see [3][4][5][6][7]. For recent generalizations of b-metric spaces obtained by employing control functions in the triangle inequality to replace the constant of the b-metric triangle inequality, we refer to [8][9][10][11][12] and the references therein.
In 1973, Geraghty [13] introduced a contraction in which the contraction constant is replaced by a function, an idea with interesting properties. After that, several papers on rational Geraghty contractive mappings have appeared (for details, see [14][15][16][17][18]). Khan [19] introduced one of the best-known works in this line, and Fisher [20] modified it. Using rational expressions, the results of Khan [19] and Fisher [20] were later extended by Piri et al. [21], who introduced a new general contractive condition. Fixed point results via F-Khan contractions were studied by Piri et al. [22] on complete metric spaces, together with an application to integral equations. In [23], Ullah et al. established fixed point results and discussed an application to an infinite system of fractional order differential equations.
Nadler [24] elaborated and extended the Banach contraction principle [25] to set-valued mappings by using the Hausdorff metric. After various generalizations of the Nadler contraction principle, Wardowski [26] introduced a contraction called the F-contraction. In this way, Wardowski generalized the Banach contraction principle (BCP) in a manner different from the known results in the literature. Following this direction, Sgroi and Vetro [27] studied set-valued F-contractions and discussed their application to certain functional and integral equations.
Cosentino and Vetro [25] extended the F-contraction to the setting of b-metric spaces and proved some fixed point results. Ali et al. [28] studied fixed points, generalized the result of Cosentino et al. [25] for a new class of F-contractions in the setting of b-metric spaces, and applied the result to obtain existence results for Volterra-type integral inclusions in b-metric spaces. Several authors generalized the F-contraction by combining it with existing contractive conditions (see [27,[29][30][31][32]).
In the current work, we derive a hybrid (single- and multi-valued) common fixed point result for the F-Khan-type contraction in b-metric spaces. We also provide an example and applications to validate the established result. Throughout this paper, CB(Λ) denotes the family of nonempty closed and bounded subsets of Λ, while ℝ+, ℕ0, and ℕ signify the set of nonnegative real numbers, the set of nonnegative integers, and the set of positive integers, respectively. Now, we recall a few basic results and definitions.
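Definition 1, the b-metric itself, appears to have been lost from this copy; for completeness, the standard Czerwik-Bakhtin definition, which the later definitions presuppose, reads:

% Standard definition of a b-metric (after Czerwik/Bakhtin); restated here
% since the original Definition 1 appears to be missing from the extraction.
\textbf{Definition 1.} Let $\Lambda$ be a nonempty set and $s \geq 1$ a real
number. A function $d : \Lambda \times \Lambda \to [0,\infty)$ is a
\emph{$b$-metric} if, for all $\sigma, \varsigma, \zeta \in \Lambda$:
\begin{enumerate}
  \item $d(\sigma, \varsigma) = 0$ if and only if $\sigma = \varsigma$;
  \item $d(\sigma, \varsigma) = d(\varsigma, \sigma)$;
  \item $d(\sigma, \zeta) \leq s\,[\,d(\sigma, \varsigma) + d(\varsigma, \zeta)\,]$.
\end{enumerate}
The triple $(\Lambda, d, s)$ is then called a $b$-metric space.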
Definition 2 [1]. Assume (Λ, d, s) is a b-metric space, where s ≥ 1. Let {σ_n} be a sequence in Λ. Then σ ∈ Λ is said to be the limit of the sequence {σ_n} if lim_{n→∞} d(σ_n, σ) = 0, and the sequence {σ_n} is then said to be convergent in Λ.
Definition 3 [1]. If for each ε > 0 there is a positive integer N such that d(σ_n, σ_m) < ε for all n, m > N, then the sequence {σ_n} is said to be a b-Cauchy sequence.
Definition 4 [1]. A b-metric space (Λ, d, s) is said to be complete (or a b-complete metric space) if every b-Cauchy sequence in (Λ, d, s) is convergent in Λ.
Definition 5 [33]. Assume s ≥ 1 is a real number and F represents the family of functions F : ℝ+ → ℝ satisfying the conditions below: (F_1) F is strictly increasing on positive-term sequences {γ_n} ⊂ ℝ+.
then, λ has a fixed point in Λ.
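For orientation, a standard member of the family F in Wardowski-type conditions is the natural logarithm; the check below is stated under the usual axioms for F (strict monotonicity, the limit characterization of zero sequences, and the power condition with some k ∈ (0, 1)), since the conditions beyond (F_1) are garbled in this copy:

% A standard example of a function in the Wardowski family: F(\gamma) = \ln\gamma.
% Checked under the usual axioms (strict monotonicity; \lim\gamma_n = 0 iff
% \lim F(\gamma_n) = -\infty; \gamma^k F(\gamma) \to 0 as \gamma \to 0^+ for
% some k \in (0,1)), since the remaining conditions are garbled here.
F(\gamma) = \ln \gamma:\quad
\begin{cases}
  \gamma_1 < \gamma_2 \;\Rightarrow\; \ln\gamma_1 < \ln\gamma_2, \\
  \gamma_n \to 0^{+} \;\Longleftrightarrow\; \ln\gamma_n \to -\infty, \\
  \gamma^{k}\ln\gamma \to 0 \text{ as } \gamma \to 0^{+} \text{ for every } k\in(0,1).
\end{cases}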
Main Results
Definition 11. Let (Λ, d, s) be a b-metric space. (i) θ and Θ have a common fixed point if θθα = θα, and θ is occasionally Θ-weakly commuting at α. Then, θ and Θ have a common fixed point. Proof. Let ζ_0 ∈ Λ and ζ_1 ∈ Θζ_0. Let ζ_2 ≔ θζ_1. By Lemma 8, there exists ζ_3 ∈ Θζ_2 such that d(ζ_3, ζ_2) ≤ ℍ(Θζ_2, {θζ_1}). Inductively, we let ζ_{2n} ≔ θζ_{2n−1}, and by Lemma 8, we choose ζ_{2n+1} ∈ Θζ_{2n} such that Using equation (8), we have which implies We deduce that Let Q_n = d(ζ_{2n+1}, ζ_{2n+2}) > 0, ∀n ∈ ℕ. It follows from (12) and axiom F_4 that Thus, by equation (13), which implies that Applying the limit n → ∞, we have lim_{n→∞} n(s^n Q_n)^k = 0. From (21), there exists n_1 ∈ ℕ such that n(s^n Q_n)^k < 1 such that To show that {ζ_n} is a b-Cauchy sequence, consider m, n ∈ ℕ such that m > n > n_1, using the triangular inequality, and using (18). By taking the limit, we get d(ζ_n, ζ_m) → 0. Hence, {ζ_n} is a b-Cauchy sequence, but the b-metric space (Λ, d, s) is complete, so there exists ζ ∈ Λ such that ζ_n → ζ as n → ∞. The next step is to show that ζ is a common fixed point of the mappings Θ and θ. We have which implies that Since F is strictly increasing, therefore Adding τ to both sides and using equation (7), we have Since τ ∈ ℝ+, we have Since F is strictly increasing, therefore Applying the limit n → ∞, we get By taking the limit and using the continuity of θ, we have which implies θζ ∈ Θζ. Since θθζ = θζ and θζ ∈ Θθζ, therefore γ = θγ ∈ Θγ. By putting θ = Θ in Theorem 12, we get the following. (i) θ and Θ have a common fixed point if θθα = θα, and θ is occasionally Θ-weakly commuting at α. Then, θ and Θ have a common fixed point. Remarks. Our result extends the results of (i) Fisher [20] for set-valued mappings in the setting of b-metric spaces, (ii) Khan [19] for set-valued mappings in b-metric spaces, and (iii) Piri et al. [21,22] for set-valued mappings in b-metric spaces.
Example 17. Consider the sequence {S_q : q ∈ {1, 2, ⋯, 100}} as follows: Let Λ = {S_q : q ∈ {1, 2, ⋯, 100}} and d : Λ × Λ → [0, ∞) be defined by Then, (Λ, d, s) is a complete b-metric space. Define the mapping Θ : Λ → CB(Λ) by and θ : Λ → Λ by Let us consider the following calculation. First, observe that max{d(x_q, θx_p), d(θx_q, …)} For each p ∈ ℕ, p > 2, we have For each p, q ∈ ℕ, p > q > 1, we have Multiplying by 1.01 on both sides and taking the logarithm to base e on both sides, we get inequality (7), and we also find that τ = 0.004365⋯. Therefore, θ and Θ have a fixed point.
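Since the exact metric of Example 17 is garbled above, the following small numerical check illustrates the same phenomenon with the simplest standard example: d(x, y) = |x − y|^2 on the reals is a b-metric with constant s = 2, because (a + b)^2 ≤ 2a^2 + 2b^2. This is a generic illustration, not the metric of the paper.

# Quick numerical sanity check that d(x, y) = |x - y|**2 on the reals is a
# b-metric with constant s = 2 (a standard example, not the metric of
# Example 17, whose exact formula is garbled in this copy).
import itertools
import random

def d(x, y):
    return abs(x - y) ** 2

random.seed(0)
points = [random.uniform(-10, 10) for _ in range(50)]
s = 2.0
violations = [
    (x, y, z)
    for x, y, z in itertools.product(points, repeat=3)
    if d(x, z) > s * (d(x, y) + d(y, z)) + 1e-12
]
print("relaxed triangle inequality holds:", not violations)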
Assume that an object of mass m moves to and fro on the x-axis around an equilibrium position x = 0 (see Figure 1). The object has position x(t) at time t. It undergoes a restoring force due to a spring, F_spring = −k x(t), together with a damping force that resists the movement of the object, F_damp = −b x′(t). Now, by the second law of motion, m x″(t) + b x′(t) + k x(t) = 0, where m, k, and b are all positive constants. Up to that feature, the system is simply the damped harmonic oscillator. Now, suppose an additional time-dependent force f(t) is applied to the object. Then, by Newton's second law, m x″(t) + b x′(t) + k x(t) = f(t). (43) The problem (43) can be written in the form of the Fredholm integral equation x(t) = ∫ G(t, s) f(s) ds. Here, G is Green's function for the critically damped oscillation, where τ can be found in terms of m, b, and k. Let Λ = C[0, 1] be the set of all continuous functions defined on [0, 1]. For u ∈ C([0, 1]), define the supremum norm as ‖u‖ = sup_{t∈[0,1]} |u(t)|, and let C([0, 1], ℝ) be endowed with the associated b-metric. With these conventions, (C([0, 1], ℝ), ‖·‖) becomes a Banach space. We give the following theorem.
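Before stating the theorem, a minimal numerical sketch of the forced damped oscillator m x″ + b x′ + k x = f(t) may be helpful; the parameter values (chosen so that b^2 = 4mk, the critically damped case) and the forcing f(t) = cos t are illustrative choices, not taken from the paper.

# A minimal numerical sketch of the forced damped oscillator
# m x'' + b x' + k x = f(t) from this application; parameter values and the
# forcing f(t) = cos(t) are illustrative choices, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

m, b, k = 1.0, 2.0, 1.0          # b**2 == 4*m*k: the critically damped case
f = lambda t: np.cos(t)          # illustrative external forcing

def rhs(t, y):
    # First-order system: y = (x, v), with v = x'.
    x, v = y
    return [v, (f(t) - b * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], dense_output=True)
for t in (0.0, 5.0, 10.0, 20.0):
    x, _ = sol.sol(t)
    print(f"t = {t:5.1f}  x(t) = {x: .4f}")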
Theorem 18. Suppose that the assumptions given below hold.
Similarly, we can apply our theorem to the underdamped and overdamped oscillations.
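For reference, the standard causal Green's functions of m x″ + b x′ + k x = f(t) in the three damping regimes are, with γ = b/2m and ω_0^2 = k/m, as follows (these are textbook formulas, restated here because the paper's critical-damping expression was lost in extraction):

% Standard causal Green's functions G(t,s), t \ge s, for
% m\,x'' + b\,x' + k\,x = f(t), with \gamma = b/2m and \omega_0^2 = k/m.
G(t,s) =
\begin{cases}
  \dfrac{e^{-\gamma (t-s)}}{m\,\omega_d}\,\sin\!\big(\omega_d (t-s)\big), &
    \omega_d = \sqrt{\omega_0^2 - \gamma^2} \quad \text{(underdamped)},\\[2ex]
  \dfrac{(t-s)\,e^{-\gamma (t-s)}}{m}, & \gamma = \omega_0 \quad \text{(critically damped)},\\[2ex]
  \dfrac{e^{-\gamma (t-s)}}{m\,\Omega}\,\sinh\!\big(\Omega (t-s)\big), &
    \Omega = \sqrt{\gamma^2 - \omega_0^2} \quad \text{(overdamped)},
\end{cases}
\qquad x(t) = \int G(t,s)\,f(s)\,ds .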
Application to Infinite Systems of Fractional Order Differential Equations
Now we derive sufficient conditions for the existence of solutions in the space c to the following nonlinear infinite system of fractional order differential equations: with the initial conditions ϑ^0 = (ϑ_i^0), where t ∈ J, i, j = 1, 2, ⋯, and τ is a positive real number. J is any fixed interval on the real line. Let Λ = c be the space of all real sequences whose limit is finite. | 2020-12-10T09:04:13.464Z | 2020-12-07T00:00:00.000 | {
"year": 2020,
"sha1": "f6cd4526b99d70aa2badb3bf50b47320ede09a01",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jfs/2020/4843908.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6b019272984353567a2afb09516c357185e4e8eb",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
244027601 | pes2o/s2orc | v3-fos-license
INDIGENOUS SOCIAL MOVEMENT AND THE CONQUEST OF THE INTERCULTURAL SCHOOL O MOVIMENTO SOCIAL INDÍGENA E A CONQUISTA DA ESCOLA INTERCULTURAL MOVIMIENTO SOCIAL INDÍGENA Y LA CONQUISTA DE LA ESCUELA INTERCULTURAL
Received: 10.12.2020. Accepted: 11.20.2020. Published: 01.01.2021. ABSTRACT: Social movements participated widely as agents of political change throughout the 20th century, and it is from this articulation (especially in Latin America) that the perspective of interculturality strengthens in indigenous school education, which seeks to understand the school within post-colonial inequalities. The Brazilian Indigenous Movement began to organize itself in the 1970s, with the Union of Indigenous Nations (UNI) playing a major role in the 1988 constitutional charter, which underpins the educational rights related to the intercultural school within a specific social project.
Introduction
It is from organized indigenous action as a social movement, articulated with the perspective of the Latin American context, that political, cultural, and educational goals come to be present in the concept of the intercultural school, which has as its principle the protagonism of indigenous peoples.
A landmark that represents the legal overcoming of the idea of assimilation of indigenous peoples, and the reaffirmation of the right to a school education that values cultural diversity and the societal project of each people, is the 1988 constitutional charter, the first to provide for these rights, drafted with decisive participation by the indigenous movement.
So, the school, historically used as a form of indoctrination and assimilation of indigenous people within a process of deterritorialization and denial of their culture, proves to be an important instrument for valuing ethnic and cultural identity.
Development
The twentieth century saw several social movements linked to popular struggles flourishing in Brazil and Latin America. Under the climate of the 1948 Declaration of Human Rights, which recognized rights and freedoms, a series of causes and demands related to ethnic identities and cultural minorities was emerging. At the international level, it was in 1975 that, for the first time in the history of the indigenous movement, indigenous peoples of 19 nationalities met in Canada to discuss problems that affected them and ways of organizing strategies to address these issues (BICALHO, 2010).
In Latin America, according to Fleuri (2003, p. 20), "cultural diversity was historically relegated and left at the margins of educational proposals and practices which, as occurred in Europe, were guided by the homogenizing ideal of the Nation-State". Educational policies were, therefore, formed from this ideal of a unique culture in the primordial formation of these policies in Latin American countries, which had, as one of its consequences, the low academic performance of students whose mother tongue was different from that used in official education systems.
For Fleuri (2003), movements linked to popular culture and popular education began to appear in the 1950s, being silenced and subjected to strict controls from the military dictatorship. These movements started to gain strength again in the 1970s, when several movements (indigenous, blacks, homosexuals, women) emerged that questioned, based on their identity, the current economic and political plans, leading the debate to various conceptions of socio-cultural relations of identity processes, characterized as intercultural.
For Fleuri (2003), the concept of intercultural education gains strength from its political-pedagogical dimension in overcoming the bi-cultural, or multicultural, perspective. According to the author (2003, p. 21), "interculturality, in addition to expressing the ethnic cohesion of a social group and providing conditions for the strengthening of cultural identity, will also stimulate the acquisition of the cultural knowledge of other peoples". This new concept resulted in a major change in the treatment of cultural differences within the school environment, related to the emergence of indigenous identities in the struggle for their rights. Indigenous uprisings in Mexico, Bolivia, Ecuador, Chile, Colombia, Brazil, among others, sought the right of possession over their lands and, consequently, the revaluation of their languages and traditions, which required educational programs adapted to this reality. Fleuri (2003, p. 22) characterizes this as "a new epistemological perspective that points to the understanding of the hybridity and ambivalence that constitute intercultural identities and relations". For Fleuri (2003, p. 23): "Interculture has been taking shape as an interdisciplinary and transversal object of study, in the sense of thematizing and theorizing the complexity (beyond plurality or diversity) and the ambivalence or hybridity (beyond reciprocity or evolution) of the processes of meaning-making in intergroup and intersubjective relations, constitutive of identity fields in terms of ethnicities, generations, gender, and social action."
Original cultures are not subordinate to hegemonic movements and, as can be recognized in globalization and in contemporary times, they resist and assert themselves through indigenous social movements. It is important to understand that the process of cultural exchange does not make these peoples less indigenous. Many peoples, throughout the course of contact, were inclined to master techniques and other knowledge that entered the Amerindian continent through Western colonialism, without, however, denying their ethnic identity and traditional knowledge. According to Favre (1998), indigenism itself must be understood not as a pure expression of indigenous thought, but as an eminently syncretic reflection of a resistance generated in this contact.
Based on Bergamaschi (2014), the indigenous person is understood as a producer of scientific knowledge, an understanding that does not rank the constitutions of culture and is established by an intercultural experience that is also of an ethical nature. As science is not free from ideological and utopian disputes related to its results, it is necessary that these intellectuals, representatives of their social movements, interfere in the state in question, since science serving capitalism seeks to increase production and make it cheaper in order to ensure greater profits and the expansion of its markets.
It is necessary to counter this pro-capital epistemology and think about science not for the full exploitation of a limited planet, but so that there may be a balance with preserved nature, so that traditional cultures can exercise their freedom. According to Passos (2010, p. 28): "The commodity came to bear upon the lives of human beings, now stripped of their freedom and self-determination, in favor of the market, which becomes alive and holds the word of command over the life and death of human beings, inducing them to the mere material reproduction of their existence, expropriated as they were by a culture of domination that undermined their freedom."
This results in an epistemological perception that the function of schools in indigenous societies is to assimilate this population and adapt it to the hegemonic culture in its obsession with material production and accumulation (Silva and Herbetta, 2017). According to Bicalho (2010), in Brazil, a founding landmark of the Indigenous Movement can be considered the Indigenous Assemblies, organized by the Indigenous Missionary Council (CIMI), which had the opportunity to bring together several indigenous leaders of peoples separated by great geographical distances; ethnic groups that, without the assistance and funding of CIMI, would not have been able to meet. Thus, 1974 stands as the year of the first Indigenous Assembly, which took place in Diamantino, MT, two years after the creation of CIMI, and was attended by the Bororo, Xavante, Apiaka, Kaiabí, Rikbaktsa, Iranxe, Pareci, and Nambikwara groups. The Assemblies are spaces where, for the first time, interethnic contacts allowed a perception as a collective in the struggle for recognition, providing a greater willingness to resist through social action, which started to bother the guidelines of the Military Regime. "At the time the indigenous would rarely be shown as historical actors, but only as endangered beings yet to be integrated into the national community, the indigenous would have to get their version of Brazil through the Assembly" (BICALHO, 2010, p. 157).
As an organization linked to the National Conference of Bishops of Brazil (CNBB), CIMI would have created spaces for articulation based on a type of pan-indigenous association, which aimed at the self-determination of these peoples. This new role of the Catholic Church through CIMI is the result of its recognition in relation to mistaken evangelization policies since colonial times, and thus, based on changes made in the Second Vatican Council held from 1962 to 1965, the General Conference of the Latin American Episcopate met in Medellín, Colombia, seeking from there a new dialogue with indigenous peoples that was inspired by Liberation Theology (BICALHO, 2010).
According to Cunha (2012), in the 1970s, countless new indigenous and non-indigenous organizations emerged in support of the causes of indigenous peoples. In the following decade, the Indians were able to mobilize and organize fronts of action at the national level, which proved to be fundamental for the great historical turn represented by the rights conquered in the 1988 Constitution, which legally ends the assimilationist view, affirming indigenous peoples' historical rights, including land tenure.
Two moments when the indigenous articulation was put to the test were in 1978, with an attempt to emancipate the Indians by decree, and in the early 1980s, when FUNAI adopted contestable criteria for stipulating who is and who is not indigenous.
"A relação com a FUNAI é de extrema indisposição. As falas indígenas são quase unânimes ao se posicionarem sobre a Fundação, o que denota uma total descrença quanto aquele que é o órgão principal, encarregado de nossas questões" (BICALHO, 2010, p. 166). It is in this context that in 1980 the Union of Indigenous Nations (UNI) emerges, the first truly indigenous national organization, which displeased FUNAI, which did not recognize the right to the organization outside its responsibility. According to Bicalho (2010, p. 144
If, in the first centuries of cultural contact, the school was inserted into indigenous communities as a strange institution, disconnected from local culture and ethnocidal in its mission to Christianize indigenous peoples, today it is a powerful tool for the preservation of their traditional culture. This contradiction would be inherent in school education: "the school is perceived at the same time as an instrument of empowerment for 'autonomy' and also as a trap for the domestication of knowledge" (GALLOIS, 2016, p. 509). One of the effects of the appropriation of culture in the context of school education is "to make them [schools] a space for the exercise of indigenous politics, which includes learning the politics of the whites in order, from there, to participate more actively in the indigenous movement" (GALLOIS, 2016, p. 512). The ethnocide present in the ideology of integration of indigenous peoples into the mercantilist and Christian order of the West is represented in a conception of education supposedly redemptive of the civilizing backwardness of indigenous peoples.
Thus denounces Munduruku (2017), as he points out that this education would have the ultimate objective of destroying indigenous ethnic identity and giving capital dominion over their lands. The author affirms that the Western being understands nature as external to him, ignoring its writing when trying to take possession of it. This idea is inserted in a logic of domination by the European and in how he imposed his view of the world through colonial domination. According to Munduruku (2017, p. 2-3): "A need was gradually created in young natives to learn concepts and theories that do not fit the holistic and circular thinking of their peoples. This aggression against the indigenous mental system, the fruit of a history for which we are not guilty but for which we bear responsibility, ends up being perpetuated in the new inclusivist policies carried out by governments at the three levels. Conclusion: our young people find themselves obliged to accept as inevitable the need to read and write codes they would prefer not to learn, and they are not given the right to refuse, under the accusation of laziness or disregard for the 'good will' of governments and governors."
The school that has the indigenous as its protagonist, and the knowledge that will be taught there, emerges from the important performance of the indigenous social movements. Zart (2012, p. 25) is also attentive to this political character of education, associating resistance with the struggle for indigenous rights and with their recognition of themselves as a heterogeneous group. "Revolt is a symbol of struggle, above all when it is born from the discovery of injustice, and it requires, beyond class instinct, class consciousness" (PASSOS, 2010, p. 34).
The school was resignified from the conception of the right to a societal project based on autonomy and protagonism; the Brazilian state was thus called upon to act in the formation of indigenous teachers who can teach in their original languages. This position of the school in indigenous society is linked to the recognition of the indigenous right to their culture and the understanding that public policies must be based on the autonomy and protagonism of the indigenous. For Zoia (2010), the school will become fundamental in the process of valuing and preserving indigenous culture, acting in the emancipation of these peoples from the hegemonic cultural order. The result of this is the strengthening, for example, of the language, where school education becomes an instrument for reaffirming the ancestral language. According to Zoia (2010, p. 70): "The path of indigenous school education in its singularity is the hope of indigenous peoples for the definitive conquest of their rights and their land, having as its reference autonomy and their struggle in the construction of an indigenous policy in school education that emphasizes the formation and valorization of their culture and educational practices."
This meaning is highlighted by Zoia (2010): as the researcher indicates, many indigenous communities started to see in the school a space conducive both to access to general knowledge arising from post-colonial needs and to the valorization of historically constructed knowledge, a space constituted on the basis of the cultural specificities of each people in their ethnic affirmation.
Conclusion
From the readings carried out through bibliographic research, issues of paramount importance for indigenous school education in Brazil were covered, from the organization as a movement to the conquest of the rights on which the importance of the intercultural school in indigenous communities was based. The history of indigenous people in Brazil is a history of violence, genocide, ethnocide, and epistemicide, but also of a great cultural resistance that has lasted for more than five centuries. As a new paradigm in the relationship with the surrounding society and the Brazilian national state, the performance of the indigenous movement organized at the national level is manifested in the constitutional charter.
Intercultural education represents a possibility of resignification of the school for indigenous peoples, and it may represent an instrument of social, cultural, and political empowerment. Despite these advances, it is important to consider the situation of marginalization that generally characterizes the situation of indigenous people on the post-colonial American continent. Indigenous peoples are still very underrepresented in representative political instances, and their hard-won rights live with constant threats from different socioeconomic interests. | 2021-11-12T16:18:01.233Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "f4a716b5461c35a88f46e809b2cc57e61454e0a2",
"oa_license": "CCBYNC",
"oa_url": "https://sistemas.uft.edu.br/periodicos/index.php/observatorio/article/download/12082/18987",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c0a13718cc6088160f12285b5aba11041176beb6",
"s2fieldsofstudy": [
"Education",
"Sociology",
"History",
"Political Science"
],
"extfieldsofstudy": []
} |
119409316 | pes2o/s2orc | v3-fos-license | Is the Cosmological Constant Problem Properly Posed?
In applications of Einstein gravity one replaces the quantum-mechanical energy-momentum tensor of sources such as the degenerate electrons in a white dwarf or the black-body photons in the microwave background by c-number matrix elements. And not only that, one ignores the zero-point fluctuations in these sources by only retaining the normal-ordered parts of those matrix elements. There is no apparent justification for this procedure, and we show that it is precisely this procedure that leads to the cosmological constant problem. We suggest that solving the problem requires that gravity be treated just as quantum-mechanically as the sources to which it couples, and show that one can then solve the cosmological constant problem if one replaces Einstein gravity by the fully quantum-mechanically consistent conformal gravity theory.
I. ORIGIN OF THE COSMOLOGICAL CONSTANT PROBLEM
Despite their familiarity, it is the very way in which the standard Einstein gravitational equations with matter source M, viz.

−(1/8πG)(R^{µν} − (1/2) g^{µν} R^α_α) + T^{µν}_M = 0, (1)

are used in astrophysics and cosmology that actually creates the cosmological constant problem. To establish this we start by noting that since the two sides of the equation are to be equal to each other, the two sides must either both be quantum-mechanical or must both be classical. However, since the gravity side is not well-defined quantum-mechanically, one takes the gravity side to be classical. Now at the time the Einstein equations were first introduced the energy-momentum tensor side was taken to be classical too. However, with electron degeneracy being able to stabilize a white dwarf star up to an expressly quantum-mechanically-dependent Chandrasekhar mass of order (ħc/G)^{3/2}/m_p^2, and with the cosmic microwave background being a black body with energy density equal to π^2 k_B^4 T^4/(15 ħ^3 c^3), it became clear not just that quantum mechanics is relevant on large distance scales, but that gravity is aware of this, and that the quantum-mechanical nature of its macroscopic sources is relevant to gravitational astrophysics and cosmology.
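A quick numerical check of the two quantum-mechanical macroscopic sources just cited, as a rough sketch in SI units using standard constants:

# Order-of-magnitude check of the two quantum-mechanical macroscopic sources
# cited: the Chandrasekhar mass scale (hbar*c/G)**1.5 / m_p**2 and the
# black-body energy density pi^2 k_B^4 T^4 / (15 hbar^3 c^3) of the CMB.
import math

hbar, c, G = 1.0546e-34, 2.998e8, 6.674e-11      # SI units
m_p, k_B, M_sun = 1.6726e-27, 1.3807e-23, 1.989e30

M_ch_scale = (hbar * c / G) ** 1.5 / m_p ** 2
print(f"Chandrasekhar mass scale ~ {M_ch_scale / M_sun:.2f} M_sun")

T_cmb = 2.725  # present-day CMB temperature in kelvin
rho_cmb = math.pi ** 2 * k_B ** 4 * T_cmb ** 4 / (15 * hbar ** 3 * c ** 3)
print(f"CMB black-body energy density ~ {rho_cmb:.2e} J/m^3")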
To try to get round the fact that the gravity side of the Einstein equations is classical (CL) while the matter side is quantum-mechanical, one replaces the quantum-mechanical T^{µν}_M by its c-number matrix elements in appropriate states |ψ⟩, and thus replaces (1) by

−(1/8πG)(R^{µν} − (1/2) g^{µν} R^α_α)_CL + ⟨ψ|T^{µν}_M|ψ⟩ = 0. (2)

Now since the matter term in (2) consists of products of quantum fields at the same spacetime point, the matter term has an infinite zero-point contribution (∼ ħ ∫d^3k k^µ k^ν/ω_k). But with the gravity side of (2) being finite, it cannot be equal to something that is infinite. Thus one must find a mechanism to cancel infinities on the matter side, and must find one that does so via the matter side alone. However instead, in the literature one commonly ignores the fact that the hallmark of Einstein gravity is that gravity is to couple to all forms of energy density rather than only to energy density differences, and subtracts off (i.e. normal orders away) zero-point infinities by hand, and replaces (2) by the finite (FIN)

−(1/8πG)(R^{µν} − (1/2) g^{µν} R^α_α)_CL + ⟨ψ|T^{µν}_M|ψ⟩_FIN = 0. (3)

Thus in treating the contribution of the electron Fermi sea to white dwarf stars or the contribution of black-body photons to cosmic evolution, one uses an energy operator of the generic form H = (a†(k)a(k) + 1/2) ħω_k, and then by hand discards the ħω_k/2 term. And then, after all this is done, the finite parts of ⟨ψ|T^{µν}_M|ψ⟩ and the vacuum ⟨Ω|T^{µν}_M|Ω⟩ still have an uncanceled and as yet uncontrolled cosmological constant contribution (T^{µν}_M ∼ Λ g^{µν}) that needs to be dealt with. Because of their differing structure, the zero-point and cosmological constant terms are distinct, with both problems thus needing to be dealt with.
There would not appear to be any formal derivation of (3) in the literature that starts from a consistent quantum gravity theory [1], and since it is (3) that is conventionally used in astrophysics and cosmology, it would not appear to yet be on a fully secure footing.
While a derivation of (3) might eventually be forthcoming, in the current gravity literature one starts with (3) as a given, and then tries to solve the cosmological constant problem associated with the fact that quantum-field-theoretic contributions to the right-hand side of (3) are at least 60 orders of magnitude larger than the cosmology associated with (3) could possibly tolerate [2]. It appears to us that, as currently understood, the standard gravity cosmological constant problem is not properly posed, as it is based on trying to make sense of a starting point for which there would not appear to be any clear justification. Moreover, as written, (3) entails that gravity itself is to play no role in solving the cosmological constant problem as all it can do is respond to whatever energy density the right hand side of (3) provides it with. To give gravity a role it would need to be as quantum-mechanical as the source to which it couples, something that one should anyway want of a physical theory.
On making gravity quantum mechanical, below we find that the zero-point problem and the cosmological constant problem are then tied together and solved together.
II. ON THE NATURE OF QUANTUM CONFORMAL GRAVITY
In quantum electrodynamics (QED) one also has to deal with both classical and quantum-mechanical equations of motion. However, because QED is renormalizable, one can derive classical electrodynamics (CED) from QED by taking matrix elements of quantum fields in configurations with an indefinite number of photons. One thus has no need to posit an electrodynamic analog of an equation such as (2) since in QED one can actually derive it [3].
In order to address the cosmological constant problem we would first need to get a justifiable starting point that could then be used for both classical and quantum gravity. Thus, just as in QED, we would need to begin with a renormalizable quantum gravity theory. We are thus naturally led to consider the renormalizable conformal gravity theory (see e.g. [4-7] for some recent reviews). Moreover, a conformal structure is also natural for the matter sector to which gravity couples, since the matter sector will also be locally conformal invariant at the level of the action if there are no fundamental mass scales and all mass is generated by spontaneous symmetry breaking.
In conformal gravity the gravitational sector action is taken to be of the form $I_W = -\alpha_g \int d^4x\, (-g)^{1/2}\, C_{\lambda\mu\nu\kappa}C^{\lambda\mu\nu\kappa}$, where the coupling constant $\alpha_g$ is dimensionless and $C_{\lambda\mu\nu\kappa}$ is the Weyl conformal tensor.
The $I_W$ action is the only pure gravity action in four spacetime dimensions that is left invariant under local conformal transformations of the form $g_{\mu\nu}(x) \to \exp[2\alpha(x)]\,g_{\mu\nu}(x)$ [8], and it is because $\alpha_g$ is dimensionless that conformal gravity is renormalizable. Now since its equations of motion are fourth-order it had been thought that the theory has states of negative norm. However, on explicitly constructing the quantum Hilbert space it was found [9,10] that there were no states of negative norm (and no states of negative energy either) [11]. Conformal gravity is thus offered as a fully consistent quantum theory of gravity, one with no need for the extra dimensions required of string theory.
III. SOLUTION TO THE COSMOLOGICAL CONSTANT PROBLEM
With conformal gravity being consistent at the quantum level, if we introduce a conformal invariant matter action $I_M$, we can take the action of the universe to be the fully conformal invariant $I_{\mathrm{UNIV}} = I_W + I_M$. In the same way that we define the variation of $I_M$ with respect to the metric to be $T^{\mu\nu}_M$, we can define the variation of $I_W$ with respect to the metric to be the gravitational energy-momentum tensor $T^{\mu\nu}_{\mathrm{GRAV}}$, and can define the variation of $I_{\mathrm{UNIV}}$ with respect to the metric to be $T^{\mu\nu}_{\mathrm{UNIV}}$. Stationarity of $I_{\mathrm{UNIV}}$ with respect to the metric yields $T^{\mu\nu}_{\mathrm{UNIV}} = 0$, and thus $T^{\mu\nu}_{\mathrm{GRAV}} + T^{\mu\nu}_M = 0$. With both $I_W$ and $I_M$ being renormalizable, the stationarity condition $T^{\mu\nu}_{\mathrm{UNIV}} = 0$ is not modified by radiative corrections, and thus, in analog to QED, the relation $T^{\mu\nu}_{\mathrm{GRAV}} = -T^{\mu\nu}_M$ holds both for quantum fields and their c-number matrix elements. Now we had noted above that $T^{\mu\nu}_M$ possesses a zero-point term, possessing one even if the matter fields are massless and the vacuum is unbroken. Thus on quantizing the gravitational field, $T^{\mu\nu}_{\mathrm{GRAV}}$ must not only possess one too, it must be quantized so that $T^{\mu\nu}_{\mathrm{GRAV}} + T^{\mu\nu}_M$ is zero-point free [12]. Now when particle masses are generated dynamically, a cosmological constant term is induced, and at the same time the matter source zero-point fluctuations readjust as they are now due to vacuum loop diagrams with massive fields. However, since the condition $T^{\mu\nu}_{\mathrm{GRAV}} + T^{\mu\nu}_M = 0$ continues to hold, it splits into a divergent vacuum piece (6) and a finite piece (7). All of the vacuum energy density infinities are taken care of by (6), and for astrophysics and cosmology we can then use the completely infinity-free (7). In this way, for studying white dwarfs or the cosmic microwave background, in (7) we can now use $H = \sum_k a^{\dagger}(k)a(k)\hbar\omega_k$ alone after all, as the zero-point contribution has already been taken care of by gravity itself and does not appear in (7). Thus to solve the cosmological constant problem one has to take care of the zero-point problem as well, and when one has a renormalizable theory of gravity, via an interplay with gravity itself one is then able to bypass an equation such as (3) and take care of both of the two problems at one and the same time.
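Schematically, and with DIV/FIN labels of our own choosing to denote the divergent and finite pieces referenced as (6) and (7), the cancellation mechanism reads:

```latex
% A schematic reading of the cancellation mechanism; the DIV/FIN split
% below is our own labelling of the pieces referenced as (6) and (7).
T^{\mu\nu}_{\mathrm{GRAV}} + T^{\mu\nu}_{M} = 0
\;\Longrightarrow\;
\big(T^{\mu\nu}_{\mathrm{GRAV}}\big)_{\mathrm{DIV}} + \big(T^{\mu\nu}_{M}\big)_{\mathrm{DIV}} = 0
\quad\text{and}\quad
\big(T^{\mu\nu}_{\mathrm{GRAV}}\big)_{\mathrm{FIN}} + \big(T^{\mu\nu}_{M}\big)_{\mathrm{FIN}} = 0 .
```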
[1] If one starts with a path integral over both metric paths and matter field paths, formally (2) would correspond to a path integration over the matter fields and a stationary variation on the metric. However, when one performs the path integration over all the other metric paths one finds that the path integral does not actually exist, with there being infinities not just in the vacuum zero-point sector but in scattering amplitudes as well.
[2] An initially attractive feature of a theory such as supersymmetry is that it partners fermions with bosons, and if the supersymmetry is exact zero-point fermion and zero-point boson contributions will identically cancel each other. However, this cancellation is lost once the supersymmetry is broken, and the lack to date of the discovery of any superparticles with masses below a few hundred or so GeV leads to a contribution to the vacuum energy density that is at least 60 or so orders of magnitude larger than (3) is able to handle.
[3] As long as one does not couple to gravity one is free to normal order. Thus for flat space electrodynamics, as long as one can get to an analog of (2), which one can, one can proceed directly to an analog of (3). Since the electrodynamic analogs of (2) and (3) do follow from QED we see that there is no independent theory of CED. Rather, CED is an output of QED. [6] P. D. Mannheim, Living Without Supersymmetry - the Conformal Alternative and a Dynam- | 2017-03-27T19:47:17.000Z | 2017-03-27T00:00:00.000 | {
"year": 2017,
"sha1": "9c9d05540dc6ca0db7047ea6e771a00f735b5a9a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1703.09286",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9c9d05540dc6ca0db7047ea6e771a00f735b5a9a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257280083 | pes2o/s2orc | v3-fos-license | Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning
Unsupervised meta-learning aims to learn generalizable knowledge across a distribution of tasks constructed from unlabeled data. Here, the main challenge is how to construct diverse tasks for meta-learning without label information; recent works have proposed, e.g., pseudo-labeling via pretrained representations or creating synthetic samples via generative models. However, such a task construction strategy is fundamentally limited due to heavy reliance on the immutable pseudo-labels during meta-learning and the quality of the representations or the generated samples. To overcome the limitations, we propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo), for few-shot classification. We are inspired by the recent self-supervised learning literature; PsCo utilizes a momentum network and a queue of previous batches to improve pseudo-labeling and construct diverse tasks in a progressive manner. Our extensive experiments demonstrate that PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks. We also validate that PsCo is easily scalable to a large-scale benchmark, while recent prior-art meta-schemes are not.
INTRODUCTION
Learning to learn (Thrun & Pratt, 1998), also known as meta-learning, aims to learn general knowledge about how to solve unseen, yet relevant tasks from prior experiences solving diverse tasks. In recent years, the concept of meta-learning has found various applications, e.g., few-shot classification, reinforcement learning (Duan et al., 2017; Houthooft et al., 2018; Alet et al., 2020), hyperparameter optimization (Franceschi et al., 2018), and so on. Among them, few-shot classification is arguably the most popular one, whose goal is to learn some knowledge to classify test samples of unseen classes during (meta-)training with few labeled samples. The common approach is to construct a distribution of few-shot classification (i.e., $N$-way $K$-shot) tasks and optimize a model to generalize across tasks (sampled from the distribution) so that it can rapidly adapt to new tasks. This approach has shown remarkable performance in various few-shot classification tasks but suffers from limited scalability as the task construction phase typically requires a large number of human-annotated labels.
To mitigate the issue, there have been several recent attempts to apply meta-learning to unlabeled data, i.e., unsupervised meta-learning (UML) (Hsu et al., 2019; Khodadadeh et al., 2019; 2021; Lee et al., 2021). To perform meta-learning without labels, the authors have suggested various ways to construct synthetic tasks. For example, pioneering works (Hsu et al., 2019; Khodadadeh et al., 2019) assigned pseudo-labels via data augmentations or clustering based on pretrained representations. In contrast, recent approaches (Khodadadeh et al., 2021; Lee et al., 2021) utilized generative models to generate synthetic (in-class) samples or learn unknown labels via categorical latent variables. They have achieved moderate performance in few-shot learning benchmarks, but are fundamentally limited as: (a) the pseudo-labeling strategies are fixed during meta-learning and it is impossible to correct mislabeled samples; (b) the generative approaches heavily rely on the quality of generated samples and are cumbersome to scale into large-scale setups.
PRELIMINARIES
2.1 PROBLEM STATEMENT: UNSUPERVISED FEW-SHOT LEARNING
The problem of interest in this paper is unsupervised few-shot learning, one of the popular unsupervised meta-learning applications. This aims to learn generalizable knowledge without human annotations for quickly adapting to unseen but relevant few-shot tasks. Following the meta-learning literature, we refer to the learning phase as meta-training and the adaptation phase as meta-test.
Formally, we are only able to utilize an unlabeled dataset $\mathcal{D}_{\text{meta-train}} := \{x_i\}$ during meta-training our model. At the meta-test phase, we transfer the model to new few-shot tasks $\{\mathcal{T}_i\} \sim \mathcal{D}_{\text{meta-test}}$ where each task $\mathcal{T}_i$ aims to classify query samples $\{x_q\}$ among $N$ labels using support (i.e., training) samples $S = \{(x_s, y_s)\}_{s=1}^{NK}$. We here assume the task $\mathcal{T}_i$ consists of $K$ support samples for each label $y \in \{1, \dots, N\}$, which is referred to as $N$-way $K$-shot classification. Note that $\mathcal{D}_{\text{meta-train}}$ and $\mathcal{D}_{\text{meta-test}}$ can come from the same domain (i.e., the standard in-domain setting) or different domains (i.e., cross-domain) as suggested by Chen et al. (2019).
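As a concrete illustration of the meta-test protocol, the following sketch samples one $N$-way $K$-shot episode from a labeled dataset; the helper name, the query-set size, and the list-of-pairs data format are our own choices, not part of the paper:

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15, seed=None):
    """Sample one N-way K-shot task from a labeled dataset.

    `dataset` is a list of (x, y) pairs with at least k_shot + n_query
    samples per class. Returns support and query lists of (x, label)
    pairs, with labels re-indexed to {0, ..., n_way - 1}.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        xs = rng.sample(by_class[cls], k_shot + n_query)
        support += [(x, episode_label) for x in xs[:k_shot]]
        query += [(x, episode_label) for x in xs[k_shot:]]
    return support, query
```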
CONTRASTIVE LEARNING
Contrastive learning (Oord et al., 2018; He et al., 2020; Khosla et al., 2020) aims to learn meaningful representations by maximizing the similarity between similar (i.e., positive) samples, and minimizing the similarity between dissimilar (i.e., negative) samples on the representation space. We first describe a general form of contrastive learning objectives based on the temperature-normalized cross entropy (He et al., 2020) and its variant for multiple positives (Khosla et al., 2020) as follows:
$$\mathcal{L}_{\text{Contrast}}(\{q_i\}, \{k_j\}, A; \tau) := -\frac{1}{N}\sum_{i=1}^{N} \frac{1}{\sum_{j=1}^{M} A_{i,j}} \sum_{j=1}^{M} A_{i,j} \log \frac{\exp(q_i^\top k_j/\tau)}{\sum_{j'=1}^{M} \exp(q_i^\top k_{j'}/\tau)},$$
where $\{q_i\}_{i=1}^{N}$ and $\{k_j\}_{j=1}^{M}$ are $\ell_2$-normalized query and key representations, respectively, $A \in \{0, 1\}^{N \times M}$ represents whether $q_i$ and $k_j$ are positive ($A_{i,j} = 1$) or negative ($A_{i,j} = 0$), and $\tau$ is a hyperparameter for temperature scaling.
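A minimal PyTorch sketch of this generalized objective is given below; the function name and default temperature are our own, and this is an illustration of the formula rather than the authors' implementation. When $A$ is the identity matrix it reduces to the standard single-positive InfoNCE loss, and when $A$ encodes labels it recovers supervised contrastive learning:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, k, A, tau=0.2):
    """Generalized temperature-normalized cross entropy over positives.

    q: (N, D) l2-normalized queries; k: (M, D) l2-normalized keys;
    A: (N, M) float {0, 1} positive-assignment matrix.
    """
    logits = q @ k.t() / tau                 # (N, M) similarity scores
    log_prob = F.log_softmax(logits, dim=1)  # softmax over all M keys
    # Average the log-likelihood over each query's positives
    pos_log_prob = (A * log_prob).sum(1) / A.sum(1).clamp(min=1)
    return -pos_log_prob.mean()
```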
Based on the recent observations in the self-supervised learning literature, we also describe a general scheme to construct the query and key representations using data augmentations and a momentum network. Formally, given a random mini-batch $\{x_i\}$, the representations can be obtained as follows:
$$q_i := \text{Normalize}\big(h_\theta \circ g_\theta \circ f_\theta(t_{i,1}(x_i))\big), \qquad k_i := \text{Normalize}\big(g_\phi \circ f_\phi(t_{i,2}(x_i))\big),$$
where $\text{Normalize}(\cdot)$ is $\ell_2$ normalization, $t_{i,1} \sim \mathcal{A}_1$ and $t_{i,2} \sim \mathcal{A}_2$ are random data augmentations, $f$ is a backbone feature extractor like ResNet (He et al., 2016), $g$ and $h$ are projection and prediction MLPs, respectively, and $\phi$ is an exponential moving average (i.e., momentum) of the model parameter $\theta$. Since a large number of negative samples plays a crucial role in contrastive learning, one can re-use the key representations of previous mini-batches by maintaining a queue (He et al., 2020).
Note that the above forms (1) and (2) can be formulated as various contrastive learning frameworks. For example, SimCLR (Chen et al., 2020) is a special case of no momentum $\phi$ and no predictor $h$. In addition, self-supervised contrastive learning methods (Chen et al., 2020; He et al., 2020) often assume that $k_i$ is the only positive key of $q_i$, i.e., $A_{i,j} = 1$ if and only if $i = j$, while supervised contrastive learning (Khosla et al., 2020) directly uses labels for $A$.
METHOD: PSEUDO-SUPERVISED CONTRASTIVE META-LEARNING
In this section, we introduce Pseudo-supervised Contrast (PsCo), a novel and effective framework for unsupervised few-shot learning. Our key idea is to construct few-shot classification pseudo-tasks using the current and previous mini-batches with the momentum network and the momentum queue. We then employ supervised contrastive learning (Khosla et al., 2020) for learning the pseudo-tasks. The detailed implementations of our task construction, meta-training objective, and meta-test scheme for unsupervised few-shot learning are described in Sections 3.1, 3.2, and 3.3, respectively. Our framework is illustrated in Figure 1 and its pseudo-code is provided in Algorithm 1. Note that we use the same notations described in Section 2 for consistency.
ONLINE PSEUDO-TASK CONSTRUCTION
We here describe how to construct a few-shot pseudo-task using unlabeled data $\mathcal{D}_{\text{meta-train}} = \{x_i\}$.
To this end, we maintain a queue of previous mini-batches. Then, we treat the previous and current mini-batch samples as training (i.e., shots) and test (i.e., queries) samples for our few-shot pseudo-task. Formally, let $B := \{x_i\}_{i=1}^{N}$ be the current mini-batch randomly sampled from $\mathcal{D}_{\text{meta-train}}$, and $Q := \{x_j\}_{j=1}^{M}$ be the queue of previous mini-batch samples. Now, we treat $B = \{x_i\}_{i=1}^{N}$ as queries of $N$ different pseudo-labels and find $K$ (appropriate) shots for each pseudo-label from the queue $Q$. Remark that this approach to utilize the previous mini-batches encourages us to construct more diverse tasks.
To find the shots efficiently, we utilize the momentum network and the momentum queue described in Section 2.2. For the current mini-batch samples, we compute the momentum query representations with data augmentations $t_{i,2} \sim \mathcal{A}_2$, i.e., $z_i := \text{Normalize}(g_\phi \circ f_\phi(t_{i,2}(x_i)))$. Following He et al. (2020), we store only the momentum representations of the previous mini-batch samples instead of raw data in the queue $Q_z$, i.e., $Q_z := \{z_j\}_{j=1}^{M}$. Remark that the use of the momentum network is not only for efficiency but also for improving our task construction strategy because the momentum network is consistent and progressively improved during training. Following He et al. (2020), we randomly initialize the queue $Q_z$ at the beginning of training. Now, the remaining question is as follows: How to find $K$ appropriate shots from the queue $Q$ for each pseudo-label using the momentum representations? Before introducing our algorithm, we first discuss two requirements for constructing semantically meaningful few-shot tasks: (i) shots and queries of the same label should be semantically similar, and (ii) all shots should be different. Based on these requirements, we formulate an assignment problem that maximizes the total similarity $\sum_{i,j} \tilde{A}_{i,j}\, z_i^\top z_j$ between the mini-batch queries and their assigned queue shots, subject to each pseudo-label receiving $K$ distinct shots. Obtaining the exact optimal solution to the above assignment problem for each training iteration might be too expensive for our purpose (Ramshaw & Tarjan, 2012). Instead, we use an approximate algorithm: we first apply a fast version (Cuturi, 2013) of the Sinkhorn-Knopp algorithm to solve an entropy-regularized relaxation of this problem, which is an entropy-regularized optimal transport problem (Cuturi, 2013). Its optimal solution $\tilde{A}^*$ can be obtained efficiently and can be considered as a soft assignment matrix between the current mini-batch $\{z_i\}_{i=1}^{N}$ and the queue $Q_z = \{z_j\}_{j=1}^{M}$. Hence, we select top-$K$ elements for each row of the assignment matrix $\tilde{A}^*$ and finally construct an $N$-way $K$-shot pseudo-task consisting of the query samples from $B$ and the selected support samples from $Q$; Figure 1 shows an example of a 5-way 2-shot task. We empirically observe that our task construction strategy satisfies the above requirements (i) and (ii) (see Section 4.3).
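The following sketch illustrates this task-construction step; the uniform Sinkhorn marginals, the iteration count, and the regularization strength are simplifying assumptions on our part, not the paper's exact settings:

```python
import torch

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Entropy-regularized transport via Sinkhorn-Knopp (Cuturi, 2013).

    scores: (N, M) similarities between current-batch momentum
    representations and the queue; uniform marginals are assumed here.
    """
    P = torch.exp(scores / eps)
    for _ in range(n_iters):
        P = P / P.sum(dim=1, keepdim=True)  # normalize rows
        P = P / P.sum(dim=0, keepdim=True)  # normalize columns
    return P  # soft assignment, playing the role of \tilde{A}*

def construct_task(z_batch, z_queue, k_shot):
    """Pick top-K queue entries per query to form an N-way K-shot task."""
    soft_assign = sinkhorn(z_batch @ z_queue.t())
    topk = soft_assign.topk(k_shot, dim=1).indices   # (N, K) queue indices
    shots = z_queue[topk.flatten()]                  # (N*K, D) support reps
    # Pseudo-label matrix: query i is positive to its own K shots
    A = torch.repeat_interleave(torch.eye(len(z_batch)), k_shot, dim=1)
    return shots, A                                  # A: (N, N*K)
```

The column normalization inside the Sinkhorn loop is what discourages the same queue entry from being assigned to several pseudo-labels, matching requirement (ii).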
META-TRAINING: SUPERVISED CONTRASTIVE LEARNING WITH PSEUDO TASKS
We now describe our meta-learning objective $\mathcal{L}_{\text{PsCo}}$ for learning our few-shot pseudo-tasks. We here use our model $\theta$ to obtain query representations $q_i := \text{Normalize}(h_\theta \circ g_\theta \circ f_\theta(t_{i,1}(x_i)))$, where $t_{i,1} \sim \mathcal{A}_1$ is a random data augmentation for each $i$. Then, our objective $\mathcal{L}_{\text{PsCo}}$ is defined as follows:
$$\mathcal{L}_{\text{PsCo}} := \mathcal{L}_{\text{Contrast}}(\{q_i\}, S_z, A; \tau_{\text{PsCo}}),$$
where $S_z := \{z_s\}_{s=1}^{NK}$ is the support representations and $A \in \{0, 1\}^{N \times NK}$ is the pseudo-label assignment matrix, which are constructed by our task construction strategy described in Section 3.1.
Since our framework PsCo uses the same architectural components as a self-supervised learning framework, MoCo (He et al., 2020), the MoCo objective $\mathcal{L}_{\text{MoCo}}$ can be incorporated into our PsCo without additional computation costs. Note that the MoCo objective can be written as $\mathcal{L}_{\text{MoCo}} := \mathcal{L}_{\text{Contrast}}(\{q_i\}, \{z_i\} \cup Q_z, A^{\text{MoCo}}; \tau_{\text{MoCo}})$ with $A^{\text{MoCo}}_{i,j} = 1$ if and only if $i = j$, using the momentum representations and the queue as described in Section 3.1. We optimize our model $\theta$ via all the objectives, i.e., $\mathcal{L}_{\text{total}} := \mathcal{L}_{\text{PsCo}} + \mathcal{L}_{\text{MoCo}}$. Remark again that $\phi$ is updated by exponential moving average (EMA), i.e., $\phi \leftarrow m\phi + (1 - m)\theta$.
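Putting the pieces together, a single meta-training step could look as follows; this sketch reuses `contrastive_loss` and `construct_task` from above, and the queue management, augmentation handling, and network interfaces are simplified assumptions of ours:

```python
import torch

@torch.no_grad()
def ema_update(online, momentum_net, m=0.99):
    """phi <- m*phi + (1-m)*theta, the momentum update described above."""
    for p_o, p_m in zip(online.parameters(), momentum_net.parameters()):
        p_m.mul_(m).add_(p_o, alpha=1 - m)

def training_step(batch, online, momentum_net, queue, k_shot, opt):
    """One step of L_total = L_PsCo + L_MoCo (a sketch, not the repo code).

    `online(x)` / `momentum_net(x)` are assumed to return l2-normalized
    query (h∘g∘f) and key (g∘f) representations, respectively.
    """
    x1, x2 = batch                    # strongly / weakly augmented views
    q = online(x1)
    with torch.no_grad():
        z = momentum_net(x2)
    shots, A = construct_task(z, queue, k_shot)
    loss = contrastive_loss(q, shots, A, tau=1.0)               # L_PsCo
    keys = torch.cat([z, queue], dim=0)
    A_moco = torch.eye(len(q), len(keys))                       # own key only
    loss = loss + contrastive_loss(q, keys, A_moco, tau=0.2)    # L_MoCo
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(online, momentum_net)
    return loss.item(), z.detach()    # caller enqueues z, dequeues oldest
```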
Weak augmentation for momentum representations. To successfully find the pseudo-label assignment matrix $A$, we apply weak augmentations for the momentum representations (i.e., $\mathcal{A}_2$ is weaker than $\mathcal{A}_1$) as Zheng et al. (2021) did. This reduces the noise in the representations and consequently enhances the performance of our PsCo as $A$ becomes more accurate (see Section 4.3).
META-TEST
At the meta-test stage, we have an $N$-way $K$-shot task $\mathcal{T}$ consisting of query samples $\{x_q\}$ and support samples $S = \{(x_s, y_s)\}_{s=1}^{NK}$.³ We here discard the momentum network $\phi$ and use only the online network $\theta$. To predict labels, we first compute the query representation $q_q := \text{Normalize}(h_\theta \circ g_\theta \circ f_\theta(x_q))$ and the support representations $z_s := \text{Normalize}(g_\theta \circ f_\theta(x_s))$. Then we predict a label by the following classification rule: $\hat{y} := \arg\max_y\, q_q^\top c_y$, where $c_y := \text{Normalize}(\sum_s \mathbb{1}_{y_s = y} \cdot z_s)$ is the prototype vector. This is inspired by our $\mathcal{L}_{\text{PsCo}}$, which can be interpreted as minimizing distance from the mean (i.e., prototype) of the shot representations.⁴
Further adaptation for cross-domain few-shot classification. Under cross-domain few-shot classification scenarios, the model $\theta$ should further adapt to the meta-test domain due to the dissimilarity from meta-training. We here suggest an efficient adaptation scheme using only a few labeled samples. Our idea is to consider the support samples as queries. To be specific, we compute the query representation $q_s := \text{Normalize}(h_\theta \circ g_\theta \circ f_\theta(x_s))$ for each support sample $x_s$, and construct the label assignment matrix $A'$ as $A'_{s,s'} = 1$ if and only if $y_s = y_{s'}$. Then we simply optimize only $g_\theta$ and $h_\theta$ via contrastive learning, i.e., $\mathcal{L}_{\text{Contrast}}(\{q_s\}, \{z_{s'}\}, A'; \tau_{\text{PsCo}})$, for a few iterations. We empirically observe that this adaptation scheme is effective under cross-domain settings (see Section 4.3).
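A sketch of this prototype-based meta-test rule; tensor shapes and names are our own conventions:

```python
import torch
import torch.nn.functional as F

def prototype_predict(q_query, z_support, y_support, n_way):
    """Nearest l2-normalized prototype rule from Section 3.3.

    q_query: (Q, D) query reps; z_support: (N*K, D) support reps;
    y_support: (N*K,) long labels in {0, ..., n_way - 1}.
    """
    # c_y = Normalize(sum of support reps with label y), the prototype
    one_hot = F.one_hot(y_support, n_way).float()         # (N*K, n_way)
    protos = F.normalize(one_hot.t() @ z_support, dim=1)  # (n_way, D)
    return (q_query @ protos.t()).argmax(dim=1)           # y_hat per query
```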
3 Note that N and K for meta-training and meta-test could be different. We use a large N (e.g., N = 256) during meta-training to fully utilize computational resources like standard deep learning, and a small N (e.g., N = 5) during meta-test following the meta-learning literature.
EXPERIMENTS
In this section, we demonstrate the effectiveness of the proposed framework under standard few-shot learning benchmarks (Section 4.1) and cross-domain few-shot learning benchmarks (Section 4.2). We provide ablation studies regarding PsCo in Section 4.3. Following Lee et al. (2021), we mainly use Conv4 and Conv5 architectures for Omniglot (Lake et al., 2011) and miniImageNet (Ravi & Larochelle, 2017), respectively, for the backbone feature extractor $f_\theta$. For the number of shots during meta-learning, we use $K = 1$ for Omniglot and $K = 4$ for miniImageNet (see Table 6 for the sensitivity of $K$). Other details are fully described in Appendix A. We omit the confidence intervals in this section for clarity, and the full results with them are provided in Appendix F.
STANDARD FEW-SHOT BENCHMARKS
Setup. We here evaluate PsCo on the standard few-shot benchmarks of unsupervised meta-learning: Omniglot (Lake et al., 2011) and miniImageNet (Ravi & Larochelle, 2017). We compare PsCo's performance with unsupervised meta-learning methods (Hsu et al., 2019; Khodadadeh et al., 2019; 2021; Lee et al., 2021), self-supervised learning methods, and supervised meta-learning methods on the benchmarks. The details of the benchmarks and the baselines are described in Appendix D. Few-shot classification results. Table 1 shows the results of the few-shot classification with various (way, shot) tasks of Omniglot and miniImageNet. PsCo achieves state-of-the-art performance on both Omniglot and miniImageNet benchmarks under the unsupervised setting. For example, we obtain 5% accuracy gain (67.07 → 72.22) on miniImageNet 5-way 20-shot tasks. Moreover, the performance is even competitive with supervised meta-learning methods, ProtoNets and MAML, as well.
CROSS-DOMAIN FEW-SHOT BENCHMARKS
Setup. We evaluate PsCo on cross-domain few-shot classification benchmarks following Oh et al. (2022). To be specific, we test the previous state-of-the-art unsupervised meta-learning (Lee et al., 2021), self-supervised learning, and supervised meta-learning baselines. We here use our adaptation scheme (Section 3.3) with 50 iterations. The details of the benchmarks and implementations are described in Appendix E.
Small-scale cross-domain few-shot classification results. We here evaluate various Conv5 models meta-trained on miniImageNet as used in Section 4.1. Table 2 shows that PsCo outperforms all the baselines across all the benchmarks, except ChestX, which is too different from the distribution of miniImageNet. Somewhat interestingly, PsCo is competitive with supervised learning under these benchmarks, e.g., PsCo achieves 88% accuracy on CropDiseases 5-way 5-shot tasks, whereas MAML gets 77%. This implies that our unsupervised method, PsCo, generalizes on more diverse tasks than supervised learning, which is specialized to in-domain tasks.
Large-scale cross-domain few-shot classification results. We also validate that our meta-learning framework is applicable to the large-scale benchmark, ImageNet (Deng et al., 2009). Remark that the recent unsupervised meta-learning methods (Lee et al., 2021; Khodadadeh et al., 2021) rely on generative models, so they are not easily applicable to such a large-scale benchmark. For example, we observe that PsCo is 2.7 times faster than the best baseline, Meta-SVEBM, even though Meta-SVEBM uses low-dimensional representations instead of full images during training. Hence, we compare PsCo with (a) self-supervised methods, MoCo v2 and BYOL (Grill et al., 2020), and (b) the publicly-available supervised learning baseline. We here use the ResNet-50 (He et al., 2016) architecture. The training details are described in Appendix E.4 and we also provide ResNet-18 results in Appendix F.
Table 3: 5-way 5-shot classification accuracy (%) on cross-domain few-shot benchmarks. We transfer ImageNet-trained ResNet-50 models to each benchmark. We report the average accuracy over 600 few-shot tasks.
Table 3 shows that (i) PsCo consistently improves both MoCo and BYOL under this setup (e.g., 67% → 82% in CUB), and (ii) PsCo benefits from the large-scale dataset as we obtain a huge amount of performance gain on the benchmarks with large similarity to ImageNet: CUB, Cars, Places, and Plantae. Consequently, we achieve comparable performance with the supervised learning baseline, except Cars, which shows that our PsCo is applicable to large-scale unlabeled datasets.
ABLATION STUDY
Component analysis. In Table 4, we demonstrate the necessity of each component in PsCo by removing the components one by one: momentum encoder $\phi$, prediction head $h$, Sinkhorn-Knopp algorithm, top-$K$ sampling for sampling support samples, and the MoCo objective, $\mathcal{L}_{\text{MoCo}}$ (6). We found that the momentum network $\phi$ and the prediction head $h$ are critical architectural components in our framework like recent self-supervised learning frameworks (Grill et al., 2020). In addition, Table 4 shows that training with only our objective, $\mathcal{L}_{\text{PsCo}}$ (5), achieves meaningful performance, but incorporating it into MoCo is more beneficial. To further validate that our task construction is progressively improved during meta-learning, we evaluate whether a query and a corresponding support sample have the same true label. Figure 2a shows that our task construction is progressively improved, i.e., the task requirement (i) described in Section 3.1 is satisfied. Table 4 also verifies the contribution of the Sinkhorn-Knopp algorithm and top-$K$ sampling for the performance of PsCo. We further analyze the effect of the Sinkhorn-Knopp algorithm by measuring the overlap ratio of selected supports between different pseudo-labels. As shown in Figure 2b, there are almost zero overlaps when using the Sinkhorn-Knopp algorithm, which means the constructed task is a valid few-shot task, satisfying the task requirement (ii) described in Section 3.1.
Adaptation effect on cross-domain.
To validate the effect of our adaptation scheme (Section 3.3), we evaluate the few-shot classification accuracy during the adaptation process on miniImageNet (i.e., in-domain) and CropDiseases (i.e., cross-domain) benchmarks. As shown in Figure 2d, we found that the adaptation scheme is more useful in cross-domain benchmarks than in-domain ones. Based on these results, we apply the scheme to only the cross-domain scenarios. We also found that our adaptation does not cause over-fitting since we only optimize the projection and prediction heads g θ and h θ . The results for the adaptation effect on the whole benchmarks are represented in Appendix C.
Augmentations. We here confirm that weak augmentation for the momentum network (i.e., $\mathcal{A}_2$) is more effective than strong augmentation, unlike other self-supervised learning literature (Chen et al., 2020; He et al., 2020). We denote the standard augmentation consisting of both geometric and color transformations by Strong, and a weaker augmentation consisting of only geometric transformations by Weak (see details in Appendix A). As shown in Table 5, utilizing the weak augmentation for $\mathcal{A}_2$ is much more beneficial since it helps to find an accurate pseudo-label assignment matrix $A$.
Figure 2: (a) Pseudo-label quality, measuring the agreement between pseudo-labels and true labels, and (b) shot overlap ratio, measuring whether the shots for each pseudo-label are disjoint, during meta-training. (c, d) Performance while adapting on in-domain (miniImageNet) and cross-domain (CropDiseases) benchmarks, respectively. We obtain these results from 100 random batches.
Training $K$. We also look at the effect of the training $K$, i.e., the number of shots sampled online. We conduct the experiment with $K \in \{1, 4, 16, 64\}$. We observe that PsCo performs consistently well regardless of the choice of $K$, as shown in Table 6. A proper $K$ is suggested to obtain the best-performing models, e.g., $K = 4$ for miniImageNet and $K = 1$ for Omniglot are the best.
RELATED WORKS
Unsupervised meta-learning. Unsupervised meta-learning (Hsu et al., 2019; Khodadadeh et al., 2019; Lee et al., 2021; Khodadadeh et al., 2021) links meta-learning and unsupervised learning by constructing synthetic tasks and extracting the meaningful information from unlabeled data. For example, CACTUs (Hsu et al., 2019) clusters the data on the pretrained representations at the beginning of meta-learning to assign pseudo-labels. Instead of pseudo-labeling, UMTRA (Khodadadeh et al., 2019) and LASIUM (Khodadadeh et al., 2021) generate synthetic samples using data augmentations or pretrained generative networks like BigBiGAN (Donahue & Simonyan, 2019). Meta-GMVAE (Lee et al., 2021) and Meta-SVEBM represent unknown labels via categorical latent variables using variational autoencoders (Kingma & Welling, 2014) and energy-based models (Teh et al., 2003), respectively. In this paper, we suggest a novel online pseudo-labeling strategy to construct diverse tasks without help from any pretrained network or generative model. As a result, our method is easily applicable to large-scale datasets.
Self-supervised learning. Self-supervised learning (SSL) (Doersch et al., 2015) has shown remarkable success for unsupervised representation learning across various domains, including vision (He et al., 2020), speech (Oord et al., 2018), and reinforcement learning (Laskin et al., 2020). Among SSL objectives, contrastive learning (Oord et al., 2018; He et al., 2020) is arguably the most popular for learning meaningful representations. In addition, recent advances have been made with the development of various architectural components: e.g., Siamese networks (Doersch et al., 2015), momentum networks (He et al., 2020), and asymmetric architectures (Grill et al., 2020). In this paper, we utilize the SSL components to construct diverse few-shot tasks in an unsupervised manner.
CONCLUSION
Although unsupervised meta-learning (UML) and self-supervised learning (SSL) share the same purpose of learning generalizable knowledge to unseen tasks by utilizing unlabeled data, there still exists a gap between UML and SSL literature. In this paper, we bridge the gap as we tailor various SSL components to UML, especially for few-shot classification, and we achieve superior performance under various few-shot classification scenarios. We believe our research could bring many future research directions in both the UML and SSL communities.
ETHICS STATEMENT
Unsupervised learning, especially self-supervised learning, often requires a large number of training samples, a huge model, and a high computational cost for training the model on large-scale data to obtain meaningful representations because of the absence of human annotations. Furthermore, finetuning the model for solving a new task is also time-consuming and memory-inefficient. Hence, it could raise environmental issues such as carbon generation, which could bring an abnormal climate and accelerate global warming. In that sense, meta-learning should be considered as a solution since its purpose is to learn generalizable knowledge that can be quickly adapted to unseen tasks. In particular, unsupervised meta-learning, which benefits from both meta-learning and unsupervised learning, would be an important research direction. We believe that our work could be a useful step toward learning easily-generalizable knowledge from unlabeled data.
REPRODUCIBILITY STATEMENT
We provide all the details to reproduce our experimental results in Appendix A, D, and E. The code is available at https://github.com/alinlab/PsCo. In our experiments, we mainly use NVIDIA GTX3090 GPUs.
A IMPLEMENTATION DETAILS
We train our models via stochastic gradient descent (SGD) with a batch size of $N = 256$ for 400 epochs. Following prior work, we use an initial learning rate of 0.03 with the cosine learning schedule, $\tau_{\text{MoCo}} = 0.2$, and a weight decay of $5 \times 10^{-4}$. We use a queue size of $M = 16384$ since Omniglot (Lake et al., 2011) and miniImageNet (Ravi & Larochelle, 2017) have roughly 100k meta-training samples. Following Lee et al. (2021), we use Conv4 and Conv5 for Omniglot and miniImageNet, respectively, for the backbone feature extractor $f_\theta$. We describe the detailed architectures in Table 7. For projection and prediction MLPs, $g_\theta$ and $h_\theta$, we use 2-layer MLPs with a hidden size of 2048 and an output dimension of 128. For the hyperparameters of PsCo, we use $\tau_{\text{PsCo}} = 1$ and a momentum parameter of $m = 0.99$ (see Appendix B for the hyperparameter sensitivity). For the number of shots during meta-learning, we use $K = 1$ for Omniglot and $K = 4$ for miniImageNet (see Table 6 for the sensitivity of $K$). We use the last-epoch model for evaluation without any guidance from the meta-validation dataset.
Training procedures. To ensure reliable estimates for PsCo and the self-supervised learning models, we train three independent models with different random seeds and report their average performance.
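For concreteness, the 2-layer heads described above might be built as follows; the BatchNorm/ReLU placement is a common convention and an assumption on our part, as the text only fixes the depth and widths:

```python
import torch.nn as nn

def mlp_head(in_dim, hidden=2048, out_dim=128):
    """2-layer projection/prediction MLP (g or h) with the widths above."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.BatchNorm1d(hidden),   # normalization choice is an assumption
        nn.ReLU(inplace=True),
        nn.Linear(hidden, out_dim),
    )
```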
B ANALYSIS ON HYPERPARAMETER SENSITIVITY
For the small-scale experiments, we use a momentum of $m = 0.99$ and a temperature of $\tau_{\text{PsCo}} = 1$. We here provide more ablation experiments varying the hyperparameters $m$ and $\tau_{\text{PsCo}}$. Tables 9 and 10 show the sensitivity of the hyperparameters on the miniImageNet dataset. We observe that PsCo achieves good performance even for non-optimal hyperparameters.
C EFFECT OF ADAPTATION
We measure the performance with and without our adaptation scheme on various domains using miniImageNet-pretrained PsCo. Table 11 shows that our adaptation scheme improves adaptation to each domain. In particular, the adaptation scheme is highly recommended for cross-domain few-shot classification scenarios.
D SETUP FOR STANDARD FEW-SHOT BENCHMARKS
We here describe details of benchmarks and baselines in Section D.1 and D.2, respectively, for the standard few-shot classification experiments (Section 4.1).
D.1 DATASETS
Omniglot (Lake et al., 2011) is a 28 × 28 gray-scale dataset of 1623 characters with 20 samples each. We follow the setup of unsupervised meta-learning approaches (Hsu et al., 2019). We split the dataset into 120, 100, and 323 classes for meta-training, meta-validation, and meta-test, respectively. In addition, the 0-, 90-, 180-, and 270-degree rotated views of each class become different categories. Thus, we have a total of 6492, 400, and 1292 classes for meta-training, meta-validation, and meta-test, respectively.
For training self-supervised learning methods in our experimental setups, we use the same architecture and hyperparameters. For the hyperparameter of temperature scaling, we use the value provided in each paper: $\tau_{\text{SimCLR}} = 0.5$ for SimCLR, $\tau_{\text{MoCo}} = 0.2$ for MoCo v2, and $\tau_{\text{SwAV}} = 0.1$ for SwAV. For evaluation, we use K-Nearest Neighbors (K-NN) for self-supervised learning methods since their classification rules are not specified.
E.4 LARGE-SCALE SETUP
Here, we describe the setup for the large-scale experiments. For evaluation, we use the same protocol as in the small-scale experiments, except that the scale of images is 224 × 224.
Augmentations. For large-scale experiments, we use 224 × 224-scaled data. Thus, we use similar yet slightly different augmentation schemes from the small-scale experiments. Following the strong augmentation used in prior work, we additionally apply GaussianBlur as a random augmentation. We use the same configuration for weak augmentation. For evaluation, we resize the images into 256 × 256 and then apply the CenterCrop to make 224 × 224 images.
ImageNet pretraining. We pretrain MoCo v2, BYOL (Grill et al., 2020), and our PsCo with ResNet-18/50 (He et al., 2016) via SGD with a batch size of $N = 256$ for 200 epochs. Following prior work, we use an initial learning rate of 0.03 with the cosine learning schedule, $\tau_{\text{MoCo}} = 0.2$, and a weight decay of 0.0001. We use a queue size of $M = 65536$ and momentum of $m = 0.999$. For the parameters of PsCo, we use $\tau_{\text{PsCo}} = 0.2$ and $K = 16$ as the queue is 4 times bigger. For supervised pretraining, we use the model checkpoint officially provided by torchvision (Paszke et al., 2019). | 2023-03-03T02:16:18.393Z | 2023-03-02T00:00:00.000 | {
"year": 2023,
"sha1": "704fac2b3a69da67b7e742147f1c4cf4b315bd4d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "704fac2b3a69da67b7e742147f1c4cf4b315bd4d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
246759472 | pes2o/s2orc | v3-fos-license | Solving Integral Equations by Means of Fixed Point Theory
Division of Applied Mathematics, Thu Dau Mot University, Binh Duong Province, Vietnam Department of Medical Research, China Medical University Hospital, China Medical University, 40402, Taichung, Taiwan Department of Mathematics, Çankaya University, 06790, Etimesgut, Ankara, Turkey Department of Mathematics and Computer Sciences, Universitatea Transilvania Brasov, Brasov, Romania Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O.B. 80203, Jeddah 21589, Saudi Arabia Department of Statistics and Operations Research, University of Granada, Granada, Spain
Introduction
Nowadays, nonlinear analysis is one of the most active branches of mathematics. Its applications to real-life contexts have attained great success. Physics, engineering, chemistry, biology, and economy are some of the scientific areas that have benefited the most from the techniques developed in nonlinear analysis. In this context, fixed point theory has played an important role in the development of new methodologies for the determination of solutions of certain equations of several types, such as matrix equations, integral equations, and differential equations.
In principle, the elements used by fixed point theory are few and very simple to handle: a nonlinear operator for which we want to find its possible fixed points, a metric that endows the underlying space with a complete character, and an inequality (called the contractivity condition) that is strong enough to ensure the existence of fixed points. With these three ingredients, it is possible to propose good fixed point theorems, as has been done for the last seventy years (see, for instance, Boyd and Wong [1], Caristi [2], Chatterjea [3], Hardy and Rogers [4], Kannan [5,6], Ćirić [7], Geraghty [8], Meir and Keeler [9], Samet et al. [10], Khojasteh et al. [11], Kutbi et al. [12], and Jleli and Samet [13]).
Based on these three initial tools, the possibilities that this field of study has shown have been practically endless. On the one hand, researchers have worked with increasingly abstract metric spaces. In some of these cases, the object associated with the distance between two points has not been a single real number but a much more general abstract object. On the other hand, the operators involved in these studies have been increasingly general, including the possibility of studying multidimensional fixed points (see [14]). Finally, the contractivity condition is the part that has received the most attention within the field of fixed point theory.
In recent times, major efforts have been done in order to introduce as weak as possible contractivity conditions. For instance, it is usual to find auxiliary functions that help to consider extremely weak inequalities. Having this aim in mind, we would like to highlight here two possible extensions.
(i) On the one hand, although the first contractivity conditions only considered a small quantity of terms, after the appearance of the Ćirić theorem [7], the current versions involve more and more terms in their developments. This is the case, for instance, of Karapınar's interpolative-type contractions [15], but many other results can be cited in this line of research (see [16,17]).
(ii) On the other hand, in general, notice that the good and reasonable properties that an operator $T : M \to M$ satisfies are usually inherited by the self-composition $T^2 = T \circ T$, but it is possible that $T^2$ enjoys those good properties without $T$ doing it. This is the case, for instance, of continuity: it is possible for $T^2$ to be continuous without $T$ being continuous. In this sense, some results (like Istrăţescu's fixed point theorem; see [18,19]) employing $T^2$ are more general than their corresponding ones with $T$.
One of the powerful applications of fixed point theory can be found in the context of integral equations, whose recent numerical treatments have made great scientific advances in this field (see, for instance, collocation methods [20], operational matrix methods [21][22][23], Galerkin methods [24,25], and Krylov subspace methods [26]).
In this paper, we introduce a new family of contractive mappings that we call hybrid-interpolative Reich-Istrăţescu-type contractions because they are inspired by the previous classes of contractive operators. The main advantage of this new family of contractive mappings is that they allow us to present, at the same time, contractivity conditions that involve a large number of terms, including some with the self-composition $T^2$ of the operator, and which are placed either adding or multiplying to the other terms. Furthermore, we introduce some fixed point results that confirm that this kind of operator is appropriate in this field of study. Finally, we illustrate the utility of the novel theorems by introducing a novel application in the setting of integral equations.
This work is organized as follows. Section 2 is dedicated to presenting some notations, preliminaries, and related results in the field of fixed point theory. In Section 3, we describe a complete study about the behavior of the Picard sequences that we will handle in the following sections. The main results of this paper can be found in Section 4, and direct consequences are placed in Section 5. The application of the main statements is developed in Section 6. Finally, some conclusions and prospect works are discussed in Section 7.
Background on Fixed Point Theory
In this work, we denote by $\mathbb{R}$ and $\mathbb{N} = \{0, 1, 2, \dots\}$ the set of all real numbers and the set of nonnegative integers, respectively. Let $(M, d)$ be a metric space and let $T : M \to M$ be a mapping. A point $u \in M$ is a fixed point of $T$ if $Tu = u$. We will denote by $\mathrm{Fix}_T(M)$ the set of all fixed points of $T$ in $M$.
Given $n \in \mathbb{N}$, the mapping $T^n = T \circ T \circ \cdots^{(n)} \circ T : M \to M$ is the $n$-th iterate of $T$ (as convention, we agree that $T^0$ is the identity mapping on $M$). Given $z_0 \in M$, the sequence $\{z_n\}_{n\in\mathbb{N}}$ defined by $z_n = T^n z_0$ for all $n \in \mathbb{N}$ is the Picard sequence of $T$ based on $z_0$. Such a sequence can recursively be defined as $z_{n+1} = T z_n$ for all $n \in \mathbb{N}$. A mapping $T$ is called a Picard operator if each Picard sequence of such operator converges to one of its fixed points. A binary relation on the set $M$ is a nonempty subset $\mathcal{R}$ of the Cartesian product $M \times M$. We will write $z\mathcal{R}t$ when two points $z, t \in M$ verify $(z, t) \in \mathcal{R}$.
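To illustrate the notion of a Picard sequence, the following minimal sketch iterates a hypothetical contraction $T(x) = x/2 + 1$ on the reals, whose unique fixed point is $x = 2$; the map is our own example, not one from this paper:

```python
def picard_sequence(T, z0, n_iters=50):
    """Generate z_{n+1} = T(z_n); for a Picard operator this converges
    to a fixed point of T."""
    zs = [z0]
    for _ in range(n_iters):
        zs.append(T(zs[-1]))
    return zs

# T(x) = x/2 + 1 is a Banach contraction with fixed point x = 2,
# so its Picard sequences converge to 2 from any starting point.
seq = picard_sequence(lambda x: x / 2 + 1, z0=10.0)
print(seq[-1])  # approximately 2.0
```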
Given a function $\alpha : M \times M \to [0, \infty)$, a mapping $T : M \to M$ satisfying that $\alpha(z, Tz) \ge 1$ implies $\alpha(Tz, T^2z) \ge 1$, for all $z \in M$, is known as an $\alpha$-orbital admissible mapping.
One of the first results in fixed point theory in which the contractivity condition was stated in terms of the self-composition $T^2 = T \circ T$ rather than in terms of $T$ was due to Istrăţescu (see [18,19]).
Theorem 2 (Istrăţescu [18,19]). Given a complete metric space $(M, d)$, every continuous map $T : M \to M$ is a Picard operator provided that there exist $a, c \in (0, 1)$ such that $a + c < 1$ and
$$d(T^2z, T^2t) \le a \cdot d(z, t) + c \cdot d(Tz, Tt)$$
for all $z, t \in M$.
Notice that the good properties (like continuity) of an operator $T$ are usually inherited by $T^2$, but it is possible that $T^2$ enjoys those good properties without $T$ doing it. In this sense, some results employing $T^2$ are more general than their corresponding ones with $T$. Some generalizations of this result in different abstract metric spaces ($b$-metric spaces, ordered metric spaces, cone metric spaces, etc.) were presented in recent papers (see [27][28][29]).
In other lines of research, inspired by Kannan's theorem [5,6], Karapınar introduced in [15] a family of contractions in which the distances of the right-hand side of the contractivity condition are multiplying instead of adding up. Also, notice that his contractivity condition must only be verified by pairs of points in the metric space that are not fixed points of the considered nonlinear operator, which avoids any kind of indetermination of the involved powers.
Theorem 3 (Karapınar [15]). Let $(M, d)$ be a complete metric space and let $T : M \to M$ be a mapping such that there exist constants $k \in [0, 1)$ and $\lambda \in (0, 1)$ satisfying
$$d(Tz, Tt) \le k \cdot [d(z, Tz)]^{\lambda} \cdot [d(t, Tt)]^{1-\lambda}$$
for all $z, t \in M \setminus \mathrm{Fix}_T(M)$. Then, $T$ has a unique fixed point in $M$.
In the previous result, as $z$ and $t$ are not fixed points of $T$, then $d(z, Tz) > 0$ and $d(t, Tt) > 0$. Furthermore, as $\lambda > 0$ and $1 - \lambda > 0$, then the expressions $d(z, Tz)^{\lambda}$ and $d(t, Tt)^{1-\lambda}$ are well defined. However, in the main results that we will introduce later, we will employ expressions such as
$$a_4 \cdot d(Tz, Tt)^{\lambda}, \quad (4)$$
which we would like to explain for the sake of clarity: on the one hand, (4) means the product $a_4 \cdot [d(Tz, Tt)]^{\lambda}$, where $a_4$ and $\lambda$ are nonnegative real numbers (notice that the exponent $\lambda$ only affects the distance $d(Tz, Tt)$, and we avoid writing the brackets); on the other hand, the power $d(Tz, Tt)^{\lambda}$ is not well defined when the base and the exponent take the value 0 at the same time. However, for our purposes, we must advise the reader that, when the base and the exponent are 0 at the same time, we will use the convention $0^0 = 1$.
Study of the Behavior of Some Picard Sequences
In this section, we describe the behavior of some sequences that will be of importance in the proofs of the main results of this work.
Proposition 4. Given $c \in [0, 1)$, let $\{r_n\}_{n\in\mathbb{N}} \subset [0, \infty)$ be a sequence such that
$$r_{n+2} \le c \cdot \max\{r_n, r_{n+1}\} \quad \text{for all } n \in \mathbb{N}, \quad (5)$$
and let $\Delta = \max\{r_0, r_1\}$. Then,
$$r_{2n} \le c^n \Delta \quad \text{and} \quad r_{2n+1} \le c^n \Delta \quad \text{for all } n \ge 1. \quad (6)$$
Proof. For $n = 0$ in (5), $r_2 \le c \cdot \max\{r_0, r_1\} = c\Delta$, and for $n = 1$ in (5), $r_3 \le c \cdot \max\{r_1, r_2\} \le c \cdot \max\{\Delta, c\Delta\} = c\Delta$. This means that inequalities (6) hold for $n = 1$. Suppose that (6) holds for some $n \in \mathbb{N}$, that is, $r_{2n} \le c^n\Delta$ and $r_{2n+1} \le c^n\Delta$. Therefore,
$$r_{2n+2} \le c \cdot \max\{r_{2n}, r_{2n+1}\} \le c^{n+1}\Delta \quad \text{and} \quad r_{2n+3} \le c \cdot \max\{r_{2n+1}, r_{2n+2}\} \le c^{n+1}\Delta.$$
This completes the induction.☐ Lemma 5. Let $\{z_n\}_{n\in\mathbb{N}}$ be a sequence on a metric space $(M, d)$. Suppose that there is $c \in [0, 1)$ such that
$$d(z_{n+2}, z_{n+3}) \le c \cdot \max\{d(z_n, z_{n+1}), d(z_{n+1}, z_{n+2})\} \quad \text{for all } n \in \mathbb{N}.$$
Then, $\{z_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence in $(M, d)$.
Proof. Let us consider the sequence $\{r_n\}$ defined by $r_n = d(z_n, z_{n+1})$ for all $n \in \mathbb{N}$. By the hypothesis, this sequence verifies (5). Then, Proposition 4 guarantees that $r_{2n} \le c^n\Delta$ and $r_{2n+1} \le c^n\Delta$ for all $n \ge 1$, where $\Delta = \max\{r_0, r_1\}$. In particular, $r_m \le c^n\Delta$ whenever $m \in \{2n, 2n+1\}$ and $n \ge 1$. If $c = 0$ or $\Delta = 0$, then $\{z_n\}_{n\ge 2}$ is a constant sequence, so it is a Cauchy sequence. Suppose that $c > 0$ and $\Delta > 0$. In order to prove that the sequence $\{z_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence in $(M, d)$, let $\varepsilon > 0$ be arbitrary. Since $\varepsilon/(2\Delta) > 0$ and $0 < c < 1$, there is a natural number $n_0 > 1$ such that
$$\sum_{k \ge n_0} c^k = \frac{c^{n_0}}{1 - c} < \frac{\varepsilon}{2\Delta}. \quad (14)$$
In particular, $2\Delta \sum_{k \ge n_0} c^k < \varepsilon$.
Let $n, m \in \mathbb{N}$ be such that $m > n \ge 2n_0$. Let $p$ be a natural number such that $p \ge n_0 + 1$ and $2p \ge m$. Therefore, by (14),
$$d(z_n, z_m) \le \sum_{j=n}^{m-1} r_j \le 2\Delta \sum_{k=n_0}^{p} c^k \le 2\Delta \sum_{k \ge n_0} c^k < \varepsilon.$$
This proves that $\{z_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence in $(M, d)$.☐ Remark 6. Taking into account that, in general, the notation $\Delta^{\alpha^n}$ is not well defined because the number $\Delta^{(\alpha^n)}$ is distinct to $(\Delta^{\alpha})^n$, we clarify that, in the next statement, we use the convention $\Delta^{\alpha^n} := \Delta^{(\alpha^n)}$. Proposition 7. Given $c \in [0, \infty)$ and $\alpha \in (0, 1)$, let $\{r_n\} \subset [0, \infty)$ be a sequence such that
$$r_{n+2} \le c \cdot \max\{r_n, r_{n+1}\}^{\alpha} \quad \text{for all } n \in \mathbb{N}. \quad (19)$$
Let $\Delta = \max\{r_0, r_1, 1\}$. Then the terms of the sequence satisfy the bounds (20), and therefore $\limsup_{n\to\infty} r_n \le c^{1/(1-\alpha)}$.
Proof. If $c = 0$, the announced properties are trivial. Suppose that $c > 0$. Since $\alpha \in (0, 1)$, the partial sums $1 + \alpha + \cdots + \alpha^{n-1}$ increase to $1/(1-\alpha)$. Therefore, as $\Delta \ge 1$, then $\Delta^{\alpha^{n+1}} \le \Delta^{\alpha^n} \le \Delta$ for all $n \in \mathbb{N}$ (23). Using $n = 0$ in (19), $r_2 \le c \cdot \max\{r_0, r_1\}^{\alpha} \le c\,\Delta^{\alpha}$, and if $n = 1$ in (19), using (23), the corresponding bound for $r_3$ follows. The previous two inequalities mean that (20) holds for $n = 1$. Suppose that (20) is fulfilled for some $n \in \mathbb{N}$, and we are going to prove it for $n + 1$: applying (19) and (23) to $r_{2n+2}$ and $r_{2n+3}$ completes the induction. Then, (20) holds. Taking into account that $\{\alpha^n\}_{n\in\mathbb{N}} \to 0$, we know that $\{\Delta^{\alpha^n}\}_{n\in\mathbb{N}} \to \Delta^0 = 1$. On the other hand, $\{c^{1+\alpha+\cdots+\alpha^{n-1}}\}_{n\in\mathbb{N}} \to c^{1/(1-\alpha)}$. Hence, letting $n \to \infty$ in (20) yields the announced conclusion.
In order to check how different the conditions (19), where $\alpha \in (0, 1)$, and (5), where $\alpha = 1$, are, let us consider the following example.☐
Example 8. Let $\{r_n\}_{n\in\mathbb{N}} \subset (0, \infty)$ be the sequence defined by $r_0 = r_1 = 0.25$ and $r_{n+2} = c \cdot \max\{r_n, r_{n+1}\}^{\alpha}$ for all $n \in \mathbb{N}$, where $c = \alpha = 0.5$. Then, it can be easily proven by induction that $\{r_n\}$ is the constant sequence given by $r_n = 0.25$ for all $n \in \mathbb{N}$. Indeed, if $r_n = r_{n+1} = 0.25$ for some $n \in \mathbb{N}$, then
$$r_{n+2} = c \cdot \max\{r_n, r_{n+1}\}^{\alpha} = 0.5 \cdot \max\{0.25, 0.25\}^{0.5} = 0.25.$$
As a consequence, $\{r_n\} \to 0.25$, but it does not converge to zero, as in Proposition 4.
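The contrast between the two recursions can be checked numerically; the following sketch iterates $r_{n+2} = c \cdot \max\{r_n, r_{n+1}\}^{\alpha}$, recovering the decay of Proposition 4 for $\alpha = 1$ and the constant sequence of Example 8 for $c = \alpha = 0.5$:

```python
def recursion(c, alpha, r0, r1, n=30):
    """Iterate r_{n+2} = c * max(r_n, r_{n+1})**alpha, cf. (5) and (19)."""
    rs = [r0, r1]
    for _ in range(n):
        rs.append(c * max(rs[-2], rs[-1]) ** alpha)
    return rs

print(recursion(0.5, 1.0, 1.0, 1.0)[-1])    # alpha = 1: tends to 0 (Prop. 4)
print(recursion(0.5, 0.5, 0.25, 0.25)[-1])  # Example 8: stays at 0.25
```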
Corollary 9.
Let $\{z_n\}_{n\in\mathbb{N}}$ be a sequence on a metric space $(M, d)$. Suppose that there are $c \in [0, 1)$ and $\alpha, \beta \in [0, 1]$ verifying $\alpha + \beta = 1$ and
$$d(z_{n+2}, z_{n+3}) \le c \cdot [d(z_n, z_{n+1})]^{\alpha} \cdot [d(z_{n+1}, z_{n+2})]^{\beta} \quad \text{for all } n \in \mathbb{N}.$$
Then, $\{z_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence in $(M, d)$.
Proof. Notice that, for all $n \in \mathbb{N}$,
$$c \cdot [d(z_n, z_{n+1})]^{\alpha} \cdot [d(z_{n+1}, z_{n+2})]^{\beta} \le c \cdot \max\{d(z_n, z_{n+1}), d(z_{n+1}, z_{n+2})\}^{\alpha+\beta} = c \cdot \max\{d(z_n, z_{n+1}), d(z_{n+1}, z_{n+2})\}.$$
Then, Lemma 5 is applicable.☐
Fixed Point Theorems for Hybrid-Interpolative Reich-Istrăţescu-Type Contractions
In this section, we introduce the novel class of contractive mappings based on Reich and Istrăţescu's approaches.
Definition 10. Let $(M, d)$ be a metric space and let $\alpha : M \times M \to [0, \infty)$ be a function. A mapping $T : M \to M$ is a hybrid-interpolative Reich-Istrăţescu-type contraction if, for some $\lambda \in [0, \infty)$, there exist a constant $k \in [0, 1)$ and six numbers $a_1, a_2, a_3, a_4, a_5, \delta \ge 0$ such that
$$\alpha(z, t)\, d(T^2z, T^2t) \le k \cdot I_\lambda(z, t) \quad \text{for all distinct } z, t \in M \setminus \mathrm{Fix}_T(M), \quad (35)$$
where
$$I_\lambda(z, t) := \begin{cases} \big[\, a_1\, d(z,t)^{\lambda} + a_2\, d(z,Tz)^{\lambda} + a_3\, d(t,Tt)^{\lambda} + a_4\, d(Tz,Tt)^{\lambda} + a_5\, d(Tz,T^2z)^{\lambda} + \delta\, d(Tt,T^2t)^{\lambda} \,\big]^{1/\lambda}, & \text{if } \lambda > 0, \\ d(z,t)^{a_1}\, d(z,Tz)^{a_2}\, d(t,Tt)^{a_3}\, d(Tz,Tt)^{a_4}\, d(Tz,T^2z)^{a_5}\, d(Tt,T^2t)^{\delta}, & \text{if } \lambda = 0, \end{cases} \quad (36)$$
and
$$\sum_{i=1}^{5} a_i + \delta \le 1 \ \text{ if } \lambda > 0, \qquad \sum_{i=1}^{5} a_i + \delta = 1 \ \text{ if } \lambda = 0. \quad (37)$$
(1) As we have commented in the second section, when $\lambda > 0$, the expression $d(z, t)^{\lambda}$ is well defined even if the base is zero. However, although $z \ne t$ in (36), in the case $\lambda = 0$, it is possible that we find the indetermination $0^0$ in the expression of $I_0(z, t)$. In such a case, we will use the convention $0^0 = 1$ to avoid such indetermination. In other words, if some exponent in the expression of $I_0(z, t)$ is zero, then its corresponding power will take the value 1. Notice that in this case, it is impossible that all exponents are zero because $\sum_{i=1}^{5} a_i + \delta = 1$.
(2) We could believe that the cases $\sum_{i=1}^{5} a_i + \delta = 1$ and $\sum_{i=1}^{5} a_i + \delta < 1$ are equivalent because if $\sum_{i=1}^{5} a_i + \delta < 1$, then we can replace $\delta$ by $\delta' \in [0, 1)$ such that $\sum_{i=1}^{5} a_i + \delta' = 1$, and the mapping $T$ is also a hybrid-interpolative Reich-Istrăţescu-type contraction by considering the new parameters $a_1, a_2, a_3, a_4, a_5, \delta' \ge 0$. However, as we will show later, when $\lambda = 0$, we cannot permit the sum of the exponents to be less than 1 because, in such case, the sequence of distances between a term and its consecutive might not converge to zero (so it is not Cauchy).
(3) Although the definition of $I_\lambda(z, t)$, for $\lambda > 0$, is very different from the definition of $I_0(z, t)$ ($\lambda = 0$) because the first case uses additions and the second case involves products, there is a particular case in which both algebraic expressions lead to the same contractivity condition. It corresponds to the choice $a_1 = 1$ and $a_2 = a_3 = a_4 = a_5 = \delta = 0$. In this case, if $\lambda > 0$, $I_\lambda(z, t) = [d(z, t)^{\lambda}]^{1/\lambda} = d(z, t)$, and if $\lambda = 0$, $I_0(z, t) = d(z, t)^1 = d(z, t)$. Notice that this case corresponds to the Banach contractivity condition particularized to $T^2$ instead of $T$:
$$d(T^2z, T^2t) \le k \cdot d(z, t),$$
which appears when $\alpha(z, t) = 1$ for all $z, t \in M$. Other similar cases will be discussed in Remark 23.
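For the sake of illustration, $I_\lambda$ as written in (36) can be evaluated mechanically; the following sketch, with names of our own choosing and based on the form of (36) as reconstructed above, implements both branches together with the $0^0 = 1$ convention:

```python
def I(d, z, t, T, lam, a, delta):
    """Evaluate I_lambda(z, t) from (36) for a metric d and mapping T.

    a = (a1, ..., a5); lam >= 0. A sketch matching the displayed
    definition, not library code.
    """
    terms = [d(z, t), d(z, T(z)), d(t, T(t)),
             d(T(z), T(t)), d(T(z), T(T(z))), d(T(t), T(T(t)))]
    coeffs = list(a) + [delta]
    if lam > 0:
        return sum(c * x ** lam for c, x in zip(coeffs, terms)) ** (1 / lam)
    # lambda = 0: product of powers; Python's 0.0 ** 0.0 == 1.0 already
    # matches the convention 0**0 = 1 adopted in Remark 11.
    prod = 1.0
    for c, x in zip(coeffs, terms):
        prod *= x ** c
    return prod
```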
The first main theorem of this work is the following one.
Theorem 12. Let $(M, d)$ be a complete metric space. A continuous hybrid-interpolative Reich-Istrăţescu-type contraction $T : M \to M$ has at least a fixed point provided that the mapping $T$ is $\alpha$-orbital admissible and there exists $z_0 \in M$ such that $\alpha(z_0, Tz_0) \ge 1$.
Proof. From the hypothesis, we know that there exists $z_0 \in M$ such that $\alpha(z_0, Tz_0) \ge 1$. Since $T$ is $\alpha$-orbital admissible, $\alpha(Tz_0, T^2z_0) \ge 1$, and by an inductive reasoning, we get that $\alpha(T^n z_0, T^{n+1} z_0) \ge 1$ for any $n \in \mathbb{N}$. Starting from this point $z_0 \in M$, we define the sequence $\{z_n\}$ in $M$ by $z_{n+1} = Tz_n$ (that is, $z_n = T^n z_0$) for all $n \in \mathbb{N}$. If there is some $n \in \mathbb{N}$ satisfying that $z_n = z_{n+1}$, then $z_n$ is a fixed point of $T$, and the proof finishes here. On the contrary case, suppose that $z_n$ is distinct to $z_{n+1}$ for all $n \in \mathbb{N}$.☐ We will divide the proof into two cases, namely, $\lambda > 0$ and $\lambda = 0$. In both cases, we prove that the sequence $\{z_n\}$ is Cauchy.
Case A. For the first case, $\lambda > 0$, given any $n \in \mathbb{N}$, taking $z = z_n$ and $t = z_{n+1}$ in (35), and writing $r_n := d(z_n, z_{n+1})$, we have
$$d(z_{n+2}, z_{n+3}) \le \alpha(z_n, z_{n+1})\, d(T^2z_n, T^2z_{n+1}) \le k \cdot I_\lambda(z_n, z_{n+1}).$$
Using the power of $\lambda$,
$$r_{n+2}^{\lambda} \le k^{\lambda}\big[(a_1 + a_2)\, r_n^{\lambda} + (a_3 + a_4 + a_5)\, r_{n+1}^{\lambda} + \delta\, r_{n+2}^{\lambda}\big].$$
Therefore,
$$(1 - k^{\lambda}\delta)\, r_{n+2}^{\lambda} \le k^{\lambda}(a_1 + a_2 + a_3 + a_4 + a_5) \cdot \max\{r_n, r_{n+1}\}^{\lambda},$$
which means that, for all $n \in \mathbb{N}$,
$$r_{n+2} \le c \cdot \max\{r_n, r_{n+1}\}, \quad \text{where} \quad c := \left[\frac{k^{\lambda}(a_1 + a_2 + a_3 + a_4 + a_5)}{1 - k^{\lambda}\delta}\right]^{1/\lambda}.$$
Clearly, $c \in [0, 1)$ because $k < 1$ and $\sum_{i=1}^{5} a_i + \delta \le 1$ imply $k^{\lambda} \sum_{i=1}^{5} a_i < 1 - k^{\lambda}\delta$. This proves that there is $c \in [0, 1)$ such that $d(z_{n+2}, z_{n+3}) \le c \cdot \max\{d(z_n, z_{n+1}), d(z_{n+1}, z_{n+2})\}$ for all $n \in \mathbb{N}$, so Lemma 5 concludes that $\{z_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence in $(M, d)$.
In both cases ($\lambda > 0$ and $\lambda = 0$), we have demonstrated that $\{z_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence in $(M, d)$. As it is complete, then there exists a point $u \in M$ such that $\{z_n\} \to u$ as $n \to \infty$. Moreover, due to the continuity of the mapping $T$, we conclude that $Tu = u$; that is, $u$ is a fixed point of $T$.
Remark 14.
Notice that in the previous proof we have shown, without using the continuity of the mapping $T$, that in both cases ($\lambda > 0$ and $\lambda = 0$) the Picard sequence $\{z_n\}_{n\in\mathbb{N}}$ is Cauchy in the metric space $(M, d)$. Using its completeness, it follows that there exists a point $u \in M$ such that $\{z_n\} \to u$ as $n \to \infty$. Then, we can use this argument in the next results because the continuity of the mapping $T$ is only used in the last part of the proof.
When $\lambda = 0$ and $\sum_{i=1}^{5} a_i + \delta < 1$, the statement given by Theorem 12 is false. For instance, if $a_1 = a_2 = a_3 = a_4 = a_5 = \delta = 0$, the contractive condition (36) does not provide any kind of control on $d(T^2z, T^2t)$ because the value of $I_0(z, t)$ is always 1 (all exponents are zero). However, even if all constants $a_1, a_2, a_3, a_4, a_5,$ and $\delta$ are strictly positive, the operator $T$ could be fixed point free, as we show in the next example, where the mapping $T$ and the distances between the involved points are given in the table labelled (61). As a consequence, if $\alpha$ is given by $\alpha(z, t) = 1$ for all $z, t \in M$, then the mapping $T$ satisfies the contractivity condition (35) for all distinct $z, t \in M$. Therefore, $T$ is a hybrid-interpolative Reich-Istrăţescu-type contraction. Furthermore, $(M, d)$ is complete, $T$ is $\alpha$-orbital admissible, and there exists $z_0 \in M$ such that $\alpha(z_0, Tz_0) \ge 1$. However, $T$ is fixed point free.
Remark 16. The reason why Theorem 12 could fail when $\lambda = 0$ and $\sum_{i=1}^{5} a_i + \delta < 1$ is the following one: a sequence $\{z_n\}_{n\in\mathbb{N}}$ satisfying
$$d(z_{n+2}, z_{n+3}) \le c \cdot [d(z_n, z_{n+1})]^{\alpha} \cdot [d(z_{n+1}, z_{n+2})]^{\beta} \quad \text{for all } n \in \mathbb{N}$$
could be non-Cauchy when $\alpha + \beta < 1$. For instance, let $\{z_n\}_{n\in\mathbb{N}}$ be the sequence defined by $z_n = 0.25n$ for all $n \in \mathbb{N}$. If $M = \{z_n : n \in \mathbb{N}\} \subset [0, \infty)$ is endowed with the Euclidean distance, then $d(z_n, z_{n+1}) = 0.25$ for all $n \in \mathbb{N}$. Therefore, if we take $c \in [0, 1)$ and $\alpha, \beta \ge 0$ with $\alpha + \beta < 1$ such that $c \cdot 0.25^{\alpha+\beta} \ge 0.25$ (which is possible because $0.25^{\alpha+\beta} > 0.25$), then, for all $n \in \mathbb{N}$,
$$d(z_{n+2}, z_{n+3}) = 0.25 \le c \cdot [d(z_n, z_{n+1})]^{\alpha} \cdot [d(z_{n+1}, z_{n+2})]^{\beta}.$$
However, the sequence $\{z_n\}_{n\in\mathbb{N}}$ positively diverges, so it is not Cauchy.
If the operator $T$ is continuous on $M$, then the composition $T^2 = T \circ T$ also is. However, the mapping $T^2$ could be continuous even if $T$ is not continuous. In this case, we can replace the continuity of the mapping $T$ with a weaker condition, namely, the continuity of $T^2$, whose set of fixed points is nonempty, as is shown in the next statement.
Theorem 17. Under the hypotheses of Theorem 12, the conclusion remains valid if the continuity of $T$ is replaced by the continuity of $T^2$, provided that $\alpha(u, Tu) \ge 1$ holds for the limit $u$ of the Picard sequence based on $z_0$.
Proof. Let $\{z_n = T^n z_0\}_{n\in\mathbb{N}}$ be the Picard sequence of $T$ whose initial point is $z_0$. In Remark 14, we commented that this sequence has a limit $u \in M$. Since the mapping $T^2$ is continuous, then
$$T^2u = \lim_{n\to\infty} T^2 z_n = \lim_{n\to\infty} z_{n+2} = u.$$
Thereby, $T^2u = u$; that is, $u$ is a fixed point of $T^2$. As a result, the mapping $T^2$ has at least one fixed point; that is, the set $\mathrm{Fix}_{T^2}(M)$ is nonempty. Furthermore, $T^3u = Tu$. In order to check that $u$ is also a fixed point of $T$, suppose, by contradiction, that $u \ne Tu$. In this case, $Tu$ is not a fixed point either because, in such a case, $Tu = T^2u = u$, which is false. Then, $u, Tu \in M \setminus \mathrm{Fix}_T(M)$.☐ Case A. If $\lambda > 0$, then, taking $z = u$ and $t = Tu$ and using $T^2u = u$, every distance appearing in $I_\lambda(u, Tu)$ equals $d(u, Tu)$, so $I_\lambda(u, Tu) = [(\sum_{i=1}^{5} a_i + \delta)\, d(u, Tu)^{\lambda}]^{1/\lambda} \le d(u, Tu)$. Since $\alpha(u, Tu) \ge 1$, then from (35),
$$d(u, Tu) = d(T^2u, T^3u) \le \alpha(u, Tu)\, d(T^2u, T^2(Tu)) \le k \cdot I_\lambda(u, Tu) \le k \cdot d(u, Tu) < d(u, Tu),$$
which is a contradiction. Then, $Tu = u$, so $u$ is a fixed point of $T$.
Case B. If $\lambda = 0$, then the same choice $z = u$, $t = Tu$ gives $I_0(u, Tu) = d(u, Tu)^{a_1 + a_2 + a_3 + a_4 + a_5 + \delta} = d(u, Tu)$, and (35) again yields $d(u, Tu) \le k \cdot d(u, Tu) < d(u, Tu)$, a contradiction. Hence $Tu = u$ in both cases.
Theorem 18. Under the hypotheses of Theorem 17, if the contractivity condition (35) holds for all distinct $z, t \in M$, then $T$ has a unique fixed point.
Proof. Theorem 17 guarantees that the set of fixed points of $T$ is nonempty. Suppose that $T$ has two distinct fixed points $u, v \in \mathrm{Fix}_T(M)$.☐ Case A. If $\lambda > 0$, replacing such points in (35), we obtain
$$I_\lambda(u, v) = \big[(a_1 + a_4)\, d(u, v)^{\lambda}\big]^{1/\lambda} \le d(u, v),$$
since $d(u, Tu) = d(v, Tv) = d(Tu, T^2u) = d(Tv, T^2v) = 0$. Therefore,
$$d(u, v) = d(T^2u, T^2v) \le k \cdot I_\lambda(u, v) \le k \cdot d(u, v) < d(u, v),$$
which is impossible. Then, $T$ cannot have two distinct fixed points. Case B. If $\lambda = 0$, then we distinguish two possibilities. If $a_2 > 0$ or $a_3 > 0$ or $a_5 > 0$ or $\delta > 0$, then $I_0(u, v) = 0$, so the contractivity condition (35) leads to $d(u, v) = d(T^2u, T^2v) \le 0$, which is false because $u$ and $v$ are distinct points. On the contrary, if $a_2 = a_3 = a_5 = \delta = 0$, we agreed that $0^{a_2} = 0^{a_3} = 0^{a_5} = 0^{\delta} = 1$, so $I_0(u, v) = d(u, v)^{a_1 + a_4} = d(u, v)$. The argument shown in Case A proves that this case is also impossible, so the mapping $T$ cannot have two distinct fixed points in any case.
Also, a particular result holds for the case $\lambda = 0$; more exactly, we can remove the continuity conditions on $T$ or $T^2$.
Theorem 19. Let $(M, d)$ be a complete metric space, and let $T : M \to M$ be a hybrid-interpolative Reich-Istrăţescu-type contraction for $\lambda = 0$ such that $a_1 > 0$ or $a_2 > 0$ or $a_5 > 0$. If we suppose that (i) $T$ is $\alpha$-orbital admissible, (ii) there exists $z_0 \in M$ such that $\alpha(z_0, Tz_0) \ge 1$, and (iii) for any sequence $\{z_n\}_{n\in\mathbb{N}}$ in $M$ such that $\alpha(z_n, z_{n+1}) \ge 1$ for $n \in \mathbb{N}$ and $\{z_n\} \to u$ as $n \to \infty$, we have that $\alpha(z_n, u) \ge 1$ for all $n \in \mathbb{N}$, then $T$ has a fixed point.
Proof. Following the proof of Theorem 12, we consider the Picard sequence $z_n = T^n z_0$ for all $n \in \mathbb{N}$. If this sequence contains a fixed point, the proof is finished. On the contrary case, we have shown that it is a Cauchy sequence on $(M, d)$, so it converges to a point $u \in M$. To prove that $u$ is a fixed point of $T$, suppose, by contradiction, that $u \ne Tu$. Without loss of generality, we can assume that $\{z_n\}_{n\in\mathbb{N}}$ satisfies $z_n \ne z_m$ for all $n, m \in \mathbb{N}$ such that $n \ne m$. In this case, there is $n_0 \in \mathbb{N}$ such that $z_n$ and $u$ are distinct and they are not fixed points of $T$ for all $n \ge n_0$. Let us check that $u$ is a fixed point of $T^2$. Indeed, applying (35) to $z = z_n$ and $t = u$, and using $\alpha(z_n, u) \ge 1$, we obtain, for all $n \ge n_0$,
$$d(z_{n+2}, T^2u) \le k \cdot d(z_n, u)^{a_1}\, d(z_n, z_{n+1})^{a_2}\, d(u, Tu)^{a_3}\, d(z_{n+1}, Tu)^{a_4}\, d(z_{n+1}, z_{n+2})^{a_5}\, d(Tu, T^2u)^{\delta}. \quad (77)$$
Since $a_1 > 0$ or $a_2 > 0$ or $a_5 > 0$, letting $n \to \infty$ in (77), we find out that $d(u, T^2u) = 0$; that is, $u$ is a fixed point of $T^2$. Now, following the lines from Theorem 17, we obtain a contradiction that proves that $u$ is also a fixed point of $T$.☐ Example 20. The mapping $T$ of this example and its self-composition $T^2$ are given in the displays above; clearly, $T$ and $T^2$ are not continuous mappings. Next, let us show that $T$ is a hybrid-interpolative Reich-Istrăţescu-type contraction w.r.t. the function $\alpha : M \times M \to [0, \infty)$ defined as indicated. Let us take $k = \frac{3}{4}$, $a_1 = a_4 = \frac{1}{2}$, $a_2 = a_3 = a_5 = \delta = 0$ (81). (b) For $z = -1$ and $t = 1$, we have that the contractivity condition (35) holds. (c) For all other cases, it also holds. Hence, from Theorem 19, we conclude that $T$ has a fixed point.
Consequences
In the field of fixed point theory, it is commonly accepted that a contractivity condition is all the more general if it has more possibilities of being particularized, giving rise to versions of already known theorems. Therefore, in order to show the power of the main results introduced above, in this section we are going to illustrate several contexts in which they can be applied.
The first important framework appears when the mapping $\alpha : M \times M \to [0,\infty)$ constantly takes the value 1; that is, $\alpha(z, t) = 1$ for each $z, t \in M$. In this case, the hypotheses about the $\alpha$-orbital admissibility and the existence of a point $z_0 \in M$ such that $\alpha(z_0, Tz_0) \geq 1$ are trivial.
Corollary 21.
Let $(M, d)$ be a complete metric space, and let $T : M \to M$ be a mapping such that (i) either $T$ or $T^2$ is continuous; (ii) for some $\lambda \in [0,\infty)$, there exist a constant $k \in [0,1)$ and six numbers $a_1, a_2, a_3, a_4, a_5, \delta \in [0,1]$ such that, for all distinct $z, t \in M \setminus \operatorname{Fix}_T(M)$, the contractivity condition (85) holds, where $I_\lambda(z, t)$ is defined by (36). Then, $T$ has a fixed point.
In addition to this, for the case $\lambda > 0$, if we suppose that the contractivity condition (85) holds for all distinct points $u, v \in M$, then $T$ has a unique fixed point.
Proof. It follows from Theorems 12, 17, and 18 applied to the case in which $\alpha(z, t) = 1$ for each $z, t \in M$. ☐ The following result follows by choosing, in the set of constants $\{a_1, a_2, a_3, a_4, a_5, \delta\}$, one of them as 1 and the other ones as 0. Notice that six corollaries are being summarized into only one: assuming (i) as in Corollary 21, and (ii) that there exists a constant $k \in [0,1)$ such that at least one of the corresponding conditions is fulfilled for all distinct $z, t \in M \setminus \operatorname{Fix}_T(M)$, then $T$ has a fixed point.
Proof. This result corresponds to the case $\lambda > 0$ in Theorem 12 ($T$ continuous) or Theorem 17 ($T^2$ continuous) when the contractivity condition (35) is considered with the respective choices of constants described above. Then, $T$ has a fixed point.
Particular cases are especially interesting, like in the following case, in which it is not necessary to assume the continuity of the mapping T.
Corollary 25 (Istrăţescu [18,19]). Let $(M, d)$ be a complete metric space, and let $T : M \to M$ be a continuous mapping such that there exist $a, b \in (0, 1)$ with $a + b < 1$ satisfying
$$d(T^2 z, T^2 t) \leq a \cdot d(z, t) + b \cdot d(Tz, Tt) \quad \text{for all } z, t \in M. \tag{90}$$
Then, $T$ has a unique fixed point.
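Before the proof, a quick numerical sanity check of condition (90) (a toy example of ours, not taken from the paper): for $T(z) = z/2$ on the real line, $d(T^2 z, T^2 t) = |z - t|/4$, so (90) holds with, e.g., $a = 0.1$ and $b = 0.4$. The sketch below verifies the inequality on random pairs.

```python
import random

T = lambda z: z / 2.0              # toy self-map of R
d = lambda z, t: abs(z - t)
a, b = 0.1, 0.4                    # a + b = 0.5 < 1

for _ in range(1000):
    z, t = random.uniform(-10, 10), random.uniform(-10, 10)
    # Istratescu's condition (90): d(T^2 z, T^2 t) <= a d(z, t) + b d(Tz, Tt)
    assert d(T(T(z)), T(T(t))) <= a * d(z, t) + b * d(T(z), T(t)) + 1e-12

print("condition (90) holds on all sampled pairs")
```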
Proof. Let us consider the choices $\lambda = 1$ and $\alpha(z, t) = 1$ for each $z, t \in M$. Let $k = a + b \in (0, 1)$, and let
$$a_1 = \frac{a}{k}, \quad a_4 = \frac{b}{k}, \quad a_2 = a_3 = a_5 = \delta = 0.$$
Then, for each distinct $z, t \in M$, the contractivity condition
$$\alpha(z, t)\, d(T^2 z, T^2 t) \leq k \cdot I_1(z, t) \quad \text{for all distinct } z, t \in M \setminus \operatorname{Fix}_T(M)$$
holds because of (90). Under this framework, the proof of Theorem 12 (using any initial point $z_0 \in M$) guarantees that the Picard sequence $\{z_n = T^n z_0\}_{n\in\mathbb{N}}$ converges to a point $u_0 \in M$. Since $T$ is continuous, then $\{z_{n+1} = Tz_n\}_{n\in\mathbb{N}}$ converges to $Tu_0$, so $Tu_0 = u_0$.
Furthermore, $T$ has a unique fixed point because if $u_0$ and $v_0$ were two distinct fixed points of $T$, then
$$d(u_0, v_0) = d(T^2 u_0, T^2 v_0) \leq a \cdot d(u_0, v_0) + b \cdot d(Tu_0, Tv_0) = (a + b)\, d(u_0, v_0) < d(u_0, v_0),$$
which is impossible. ☐

In the following result, we employ a binary relation for controlling the pairs of points that must satisfy the contractivity condition. Let $R$ be a binary relation on the set $M$. A mapping $T : M \to M$ is $R$-orbital admissible if $Tz\,R\,Tt$ for all $z, t \in M$ such that $z\,R\,t$. Corollary 26. Let $(M, d)$ be a complete metric space endowed with a binary relation $R$, and let $T : M \to M$ be a continuous mapping. Assume that for some $\lambda \in [0,\infty)$, there exist a constant $k \in [0,1)$ and six numbers $a_1, a_2, a_3, a_4, a_5, \delta \geq 0$ satisfying (37) such that the contractivity condition (95) holds for all distinct $z, t \in M \setminus \operatorname{Fix}_T(M)$ such that $z\,R\,t$. If $T$ is $R$-orbital admissible and there is $z_0 \in M$ such that $z_0\,R\,Tz_0$, then $T$ has at least one fixed point.
Proof. Let us consider the function $\alpha_R : M \times M \to [0,\infty)$ defined by $\alpha_R(z, t) = 1$ if $z\,R\,t$, and $\alpha_R(z, t) = 0$ otherwise. Then, $T$ is $\alpha_R$-orbital admissible and there is $z_0 \in M$ such that $\alpha_R(z_0, Tz_0) = 1$. The contractivity condition (95) is equivalent to (35) under the assumptions (37). Hence, Theorem 12 is applicable. ☐ In the previous corollary, when $M$ is endowed with a binary relation $R$, we can replace the completeness of the metric space by a weaker version: the metric space $(M, d)$ is $R$-increasingly complete if each $d$-Cauchy sequence $\{z_n\}_{n\in\mathbb{N}} \subseteq M$ satisfying $z_n\,R\,z_{n+1}$ for all $n \in \mathbb{N}$ is $d$-convergent to a point of $M$. In this case, Corollary 26 can be stated as follows. | 2022-02-12T16:15:51.057Z | 2022-02-10T00:00:00.000 | {
"year": 2022,
"sha1": "124b094b47324ce8ed5db8d61c4972fcce8264c4",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jfs/2022/7667499.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fb6fd9b325a7d7d35d3d325584d1c3c0617842e1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
82886610 | pes2o/s2orc | v3-fos-license | Effects of Aqueous Extracts of Seeds of Peganum harmala L. (zygophyllaceae) on 5th Stage Larvae Locusta migratoria cinerascens (Fabricius, 1781) (Orthoptera: Oedipodinae)
The objective of this study is to determine the efficiency of aqueous extracts from seeds of Peganum harmala L. on the mortality of 5th-stage larvae and on the fertility of adult females of Locusta migratoria cinerascens. For that purpose, locusts were reared under laboratory conditions. At hatching, the larvae were fed daily with the lawn grass Stenotaphrum americanum and a protein supplement of wheat bran. The aqueous extract of the seeds of P. harmala was obtained after maceration in ethanol, under magnetic stirring, using a rotary evaporator. To determine mortality of L5 larvae, two modes of treatment were applied, one by contact and another by ingestion, using for both treatments 4 doses in geometric progression: 0.03 mg/mL, 0.06 mg/mL, 0.12 mg/mL and 0.24 mg/mL. The results showed that mortality for the doses of 0.12 mg/mL and 0.24 mg/mL reaches respectively 40% and 60% on the 3rd day, for the treatment by contact as well as by ingestion. However, the LD50 for the ingestion treatment is lower: it is 0.095 mg/mL, against 0.19 mg/mL for the contact treatment. The larvae that survived the treatment by ingestion suffered morphological as well as physiological changes, consisting of a deformation of the wings, a 6-day delay of the larval molt, a blocking of fledging, a change of pigmentation, and an extension of the preoviposition period. Fertility was also affected: treated females laid only twice, and a small number of eggs, unlike untreated females, which laid up to 3 times with an average of 62.7 eggs/female at the first laying, against 50 eggs for the treated females.
Introduction
Locusts as harmful insects occupy a very important place among agricultural pests. They form a heterogeneous group that includes both locusts and grasshoppers. The majority of crop pests are located on the African continent. The subspecies L. migratoria cinerascens, of wide Mediterranean distribution, is present in Europe (France, Italy, Spain, Yugoslavia, Greece) and North Africa (Morocco, Algeria, Tunisia) [1]. In Algeria, it is characteristic of the coastal areas and plains of the Tellian Atlas as well as of the south of the Saharan Atlas, including Tamanrasset and Adrar, which offer a permanent habitat conducive to the maintenance and dispersion of the locusts, whether in remission or invasion period, due to their favorable climatic and ecological conditions [2][3][4]. Indeed, L. migratoria cinerascens has the ability to occur in two phases, one solitary and the other gregarious. It is the larval bands of the gregarious phase that are formidable aggressors and cause considerable damage to farmers because of their broad polyphagia [5]. In fact, since its signaling in 1991 and 1994 in the irrigated wheat perimeters of Zaouiet-Kounta (Adrar) [2] and in the region of Touat (Adrar) [6], in the Algerian central Sahara, it has become a potential pest of concern. A great many plants are then likely to be attacked, including trees such as the banana and the date palm [7][8][9][10].
Currently, chemical control against insects in general, and locusts in particular, uses an arsenal of active materials, each as effective as the other. It remains, in effect, the only solution to cope with this scourge in case of invasion, despite the catastrophic consequences on the environment and on the fragile ecosystems of desert or semi-desert regions. For this reason, several scientists have taken an interest in alternative solutions to substitute synthetic organic pesticides with biopesticides of vegetable origin, which are biodegradable, non-polluting and respectful of the environment [11][12][13][14][15].
Indeed, the use of plants as a source of pesticides is reported in an abundant literature [16][17][18]. Owing to the secondary compounds (alkaloids, cardenolides, glucosinolates and terpenes) that they contain, many plants are now known to possess insecticidal properties. Among them, Peganum harmala L. has a geographical distribution occupying mainly the northern Sahara and the Algerian highlands. It is used in traditional medicine in Algeria and the Maghreb, in internal and external use, to treat different disorders, but it is not consumed by animals, whether cattle or sheep. All parts of the plant (root, stem, leaf and seed) are characterized by a high toxicity linked to its richness in indole alkaloids [20], which become much more significant during the ripening phase of the seed [21]. This is why we considered it useful to study the effect of aqueous extracts from seeds of P. harmala on some physiological parameters (mortality, larval molt, fertility and pigmentation) and on morphological changes in 5th-stage larvae of L. migratoria cinerascens.
Extraction of the Aqueous Extract of the Seeds
The seeds are dried for several days before being ground using a coffee grinder. 10 mg of the ground material are removed, then soaked in 50 mL of ethanol for 2 h with magnetic stirring, using a rotary evaporator. After removal of the alcohol, doses in geometric progression are obtained by simple dilution: d1 = 0.03 mg/mL, d2 = 0.06 mg/mL, d3 = 0.12 mg/mL and d4 = 0.24 mg/mL.
Determination of Fertility after Treatment by Ingestion
The determination of fertility was performed taking into account 10 untreated females and 5 females that survived the treatment by ingestion. They were isolated in two separate cages of the same size and under the same conditions as above. Fertility is determined by counting the number of eggs laid at each oviposition. For reasons beyond our control, we did not consider individuals that did not survive the contact treatment in determining the fertility of L. migratoria cinerascens.
Method of Analysis of Results
To estimate the LD50, the lethal dose from which we obtain 50% mortality, corrected mortalities were transformed into probits and doses into decimal logarithms, in order to establish the equations of the regression lines. The results are also treated statistically by analysis of variance (XLSTAT version 6.0, ANOVA).
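As an illustration of this procedure, the following Python sketch (with made-up mortality proportions, not the paper's data) fits a probit regression of corrected mortality against log10(dose) and recovers the LD50 as the dose at which the fitted probability reaches 50%.

```python
# Illustrative probit/LD50 computation (hypothetical mortalities, not the study data).
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

doses = np.array([0.03, 0.06, 0.12, 0.24])       # mg/mL, geometric progression
mortality = np.array([0.10, 0.25, 0.40, 0.60])   # corrected mortality proportions

x = np.log10(doses)
probit = lambda x, b0, b1: norm.cdf(b0 + b1 * x)  # probit regression line
(b0, b1), _ = curve_fit(probit, x, mortality, p0=(0.0, 1.0))

ld50 = 10 ** (-b0 / b1)  # dose where b0 + b1*log10(dose) = 0, i.e. 50% mortality
print(f"LD50 ~ {ld50:.3f} mg/mL")
```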
Treatment Effect of Contact on Larval Mortality
Like many plants, P. harmala has a great insecticidal potential with respect to L. migratoria cinerascens. Its toxic effect causes a more or less important lethality depending on the mode of penetration of the aqueous extract and on the doses. The toxicity of the extract is higher as the doses increase, both for the contact test and for the ingestion test, although the biopesticide effect of the latter is more important. However, insect mortality decreases with time and does not exceed 10% on day 10 for the doses d1 = 0.03 mg/mL and d2 = 0.06 mg/mL (Table 1). This is likely due to the volatility of certain components of the aqueous extract.
This characteristic should be checked, since Ref. [22] showed a toxicity of 100% at day 16 of treatment. In the same way, the extract of Calotropis procera, rich in alkaloids, caused a mortality of 100% on the desert locust (Schistocerca gregaria) after 15 days of treatment.
In our case, although the doses d3 = 0.12 mg/mL and d4 = 0.24 mg/mL gave respective mortalities of 40% and 60% on the third day, they are 20% on day 10, while the cumulative mortality for the two doses is 60% and 80% (Table 1).
The biocidal action of P. harmala concerns not only the migratory locust, but also other zoological groups: its insecticidal activity by contact on the black bean aphid (Aphis fabae) causes a toxicity of 30%.
While by ingestion, mortality was 70% [23]. It was also shown that the aqueous extract of P. harmala has a nematicidal effect ranging from 60% to 95%, by direct contact in vitro alone, similar to that of a commercial nematicide (Vydate) against Meloidogyne spp (root-knot nematodes) [13]. Also, Acacia gummifera (Fabaceae) and Tagetes patula L.
(Asteraceae) have a toxic power of 84% and 82% against nematodes because of their relatively high content of flavonoids [24], substances that also exist in P. harmala.
The calculation of the LD50 gave a value of 0.19 mg/mL, with a strong correlation between mortality and dose (r = 0.94) (Fig. 1). Similarly, the analysis of variance (ANOVA) showed a significant difference between the doses. Consequently, mortality is all the more important as the dose is high.
Treatment Effect of Ingestion on Larval Mortality
As for the contact treatment, the doses of 0.03 mg/mL and 0.06 mg/mL caused low mortality at 3 days (20% and 30%). Mortality reached 60% and 80%, again on the 3rd day, for the doses of 0.12 mg/mL and 0.24 mg/mL. Again, mortality at day 10 does not exceed 10% whatever the dose, while cumulative mortality was 70% and 90% (Table 2).
The calculation of the LD50 gave a value of 0.095 mg/mL. The correlation coefficient is close to 1 (r = 0.99), indicating a strong correlation between mortality and dose (Fig. 2). The analysis of variance (ANOVA) showed a significant difference between doses at P ≤ 0.05 (F = 3.57, df = 1.19, Pr = 0.027). Mortality is all the more important as the dose is high.
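For completeness, the dose-effect ANOVA reported here with XLSTAT can be sketched in the same Python setting (again with placeholder replicate mortalities of our own, not the study data):

```python
# Illustrative one-way ANOVA across doses (placeholder replicate mortalities).
from scipy.stats import f_oneway

d1 = [0.05, 0.10, 0.10]   # replicate mortality proportions per dose (made up)
d2 = [0.20, 0.25, 0.30]
d3 = [0.35, 0.40, 0.45]
d4 = [0.55, 0.60, 0.65]

F, p = f_oneway(d1, d2, d3, d4)
print(F, p)   # a small p-value indicates a significant dose effect
```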
Effects of Both Treatments on the Physiology and Morphology
The larvae that survived the treatment by oral intake underwent both morphological and physiological changes: a deformation of the wings in certain individuals (2 females and 1 male, dead one day after fledging), a 6-day delay of the larval molt (12 days for untreated individuals and 18 days for treated individuals), a blocking of fledging (4 females and 2 males), a change in pigmentation (3 brownish females, dead two days after the imaginal molt) and an extension of the preoviposition period (10 days for treated females and 6 days for untreated females).
Fecundity was also altered. Among the five females that survived the treatment by ingestion, three produced an average of 50 eggs/female; one of them laid twice, producing 31 eggs, while the other two died without being able to lay eggs. In contrast, among the 10 untreated females, nine laid eggs once, emitting on average 62.7 eggs; 4 laid twice, producing 49.3 eggs/female; and 3 laid three times, producing 32.7 eggs/female. Some females thus lay up to three times. The number of eggs produced decreases progressively from the first to the third oviposition (Table 3).
Moreover, P. harmala also causes physiological disturbances of the insect, in this case a delay of the larval molt of 6 to 8 days, and a change in pigmentation, which becomes brownish on the legs, pronotum and abdomen. These results are consistent with those of Ref. [22], who observed the same phenomenon on S. gregaria. The antipalatable effect of P. harmala results in a decreased weight of the insects, delayed sexual maturity and reduced fertility, which is particularly marked after treatment by ingestion [11,25]. Other plants, such as Mentha spicata L. and Origanum glandulosum L. (Lamiaceae), had the same effect on the fecundity of Callosobruchus maculatus L. (Coleoptera) [26].
Conclusion
It is known that the toxins in P. harmala are harmane, harmaline, harmine and harmol (harmalol), harmaline being the most toxic to the extent that it accounts for 2/3 of the alkaloids [27]. The toxicity proceeds from the richness in indole alkaloids, which act through harmine and harmaline, substances present in all phenological stages of the plant and especially in the seeds, in which the alkaloid levels rise sharply in summer (3-4%) during the phase of fruit ripening [25]. Harmine and harmaline are responsible for the toxicity of the aqueous extract towards the locust and act by ingestion through the digestive tract, which deserves further study.
In the present study, the authors have tried to emphasize the agro-phytosanitary potentialities of P. harmala, a plant widespread in Algeria, which could be a source of natural insecticide able to replace the chemical inputs that have partly contributed to the pollution of the biosphere. The involvement of aqueous extracts of this plant in pest control, as an economical alternative to chemical crop protection, might fit into the context of an alternative and complementary strategy for the defense of plants. | 2019-03-19T13:04:15.413Z | 2013-02-28T00:00:00.000 | {
"year": 2013,
"sha1": "c6251745b040d1a7d7bff5c5d0e8ac4d5eb629cd",
"oa_license": "CCBYNC",
"oa_url": "http://www.davidpublisher.org/Public/uploads/Contribute/55754f8cef3a3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a480539c57f4054a9f4cabc330bf8f3510f570fa",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
239609977 | pes2o/s2orc | v3-fos-license | PCI in High Risk Advanced Breast Cancer Patients
Background: A high incidence of brain metastases has been reported in patients with triple-negative and her2-positive breast cancer receiving trastuzumab therapy. The rationale for prophylactic cranial irradiation is to control or eliminate undetected micro-metastases without causing unwanted harm. Methods: This prospective study investigated the role of prophylactic cranial irradiation in lowering the frequency of brain metastases for patients who had triple-negative or her2-positive advanced extra-cranial metastatic breast cancer. 48 consecutive patients with this disease scenario were enrolled in this study and were categorized into 2 arms over a 3-year period: the first arm consisted of 24 patients who did not receive PCI. The second group included 24 patients who received PCI, 25 Gray/10 fractions over 2 weeks, delivered 4 weeks after completion of chemotherapy with or without trastuzumab, while hormone therapy was continued for hormone-positive patients. All patients were primarily evaluated by brain CT scan with contrast or MRI, which was part of the neurological assessment performed before PCI, then every 3 months in the first year, then every 6 months thereafter. Neuro-cognitive Functions (NCF) were estimated in both arms before and then 6 months and 1 year after PCI using the Mini-Mental State Exam (MMSE). Health-related quality of life was assessed before and then 1 month and 3 months after PCI using the Functional Assessment of Cancer Therapy-Brain (FACT-Br). Results: only four (16.6%) patients developed symptomatic brain metastases in the treatment arm compared to nine (37.5%) patients in the control arm; the median brain metastasis-free survival duration in the PCI arm was 22 months with a 95% CI (18.37-25.62), compared with 16 months with a 95% CI (13.78-18.21), p = 0.011, figure (1). The brain metastases hazard ratio was 0.398, 95% CI (0.187, 0.844), i.e. the hazard was significantly reduced by 60% in the PCI group when compared to the no-PCI arm at any given time over 30 months, with a p-value of 0.016. All patients died due to progressive breast cancer. There were no deaths due to treatment. Three of the 24 patients experienced grade 3/4 toxicity (two grade 3 nausea and vomiting, one grade 4 nausea and vomiting). Grade 1 to 4 fatigue occurred in the majority of treated patients, fifteen (62.5%), but only 3 (12.5%) had grade 3 or 4. Hair loss was virtually universal as a consequence of chemotherapy, so no additional alopecia was observed during PCI. Neurocognitive function in both groups was equivalent at baseline, without statistical differences between the MMSE scores of the two study arms (p=0.137). Most of the MMSE scores declined at the 6-month evaluation in the PCI group, with a significant difference at a P value of 0.001, but returned to the baseline value at the one-year evaluation, without statistical difference between the two arms, P = 0.679. The initial quality-of-life levels of the respondents in both groups were comparable, without statistical differences, P = 1.000. In the PCI group, most of the scores (FACT-Br) were reduced at the 1-month evaluation compared to the no-PCI group, with a significant difference at a P value of 0.050, but returned almost to baseline at the 3-month evaluation, without statistical differences between the two groups (P=0.162). Conclusions: PCI was linked with acceptable toxicities and gave rise to a lower frequency of secondary brain metastasis, with a prolonged median brain metastasis-free survival duration.
Whether this result may be translated into a satisfactory therapeutic improvement necessitates additional assessment.
INTRODUCTION
Breast cancer is the most common non-skin cancer in females, and its frequency continues to rise throughout the world [1].
Breast cancer survival has significantly improved and the disease course has altered with aggressive multimodal therapies and the contribution of novel agents in recent years, so the number of long-standing breast cancer survivors likewise continues to increase, with breast cancer patients comprising a great proportion of long-term cancer survivors. Breast cancer is a leading cause of brain metastases, generally second only to lung cancer, but in female patients it is the leading cause [2].
Despite the success in improving survival, the increasing rate of brain metastases as a late complication has become a major clinical dilemma [3]. Although this cancer mostly metastasizes to bone, lung, and liver, and only rarely to the brain, which is generally a delayed presentation of advanced breast cancer [4], some reports suggest that the brain is the primary site of progression in a high proportion of patients with controlled systemic disease [5].
A mechanism that is usually proposed to explain this phenomenon is the selective destruction of non-brain metastases by new chemotherapy regimens, which allows for the subsequent development of brain metastases. Breast cancer patients with brain metastases are generally younger than breast cancer patients with non-brain metastases [6]. The presence of brain metastases portends a poor outcome in breast cancer, with symptomatic disease indicating a worse quality of life and increased morbidity and mortality [7].
Presently, median survival after a diagnosis of metastatic brain disease varies widely, from 3 months to more than 2 years [8]. Up to one-third of patients with metastatic breast cancer will eventually develop brain metastases during the disease course [2].
The frequency of brain metastases in the breast cancer population continues to rise, mainly due to improvements in systemic therapies leading to longer-lasting control of extra-cranial metastatic disease and prolonged survival [9]. This increment in incidence is probably due to: 1. Enhanced detection with advanced imaging modalities such as magnetic resonance imaging.
2. Prolonged survival of patients with excellent extracranial disease control with systemic therapies. 3. Restricted blood-brain permeability of systemic agents.
4. Biological sub-types such as triple-negative breast cancer and her2-positive breast cancer with an increased propensity for brain metastases.
Numerous factors predict an increased risk of brain metastases, including large tumor size, tumor grade, younger age at diagnosis, number of positive axillary lymph nodes (≥N2), ER-negative tumor, short disease-free survival (<2 years), triple-negative tumor subtype and her2 positivity, and furthermore the BRCA1 phenotype and p53 abnormalities [10]. However, none of these factors reliably or consistently predicts the risk of brain metastasis, and thus identifying significantly higher-risk subgroups of patients (who could become candidates for prophylactic cranial irradiation strategies) is difficult. The single factor among those above that has emerged as a strong predictor of brain metastases in recent times is the phenotype of breast cancer, as triple-negative and her2-positive tumor sub-types have been reported to have an incidence of 20-30% [11].
Her2-positive breast cancer has been reported to have a 25% to 35% incidence of brain metastases. Her2-negative breast cancer patients, on the other hand, have a lower frequency of brain metastases than her2-positive breast cancer patients [12]. The incidence of brain metastases in her2-positive breast cancer patients treated with trastuzumab is even higher, ranging from 25% to 48% [13]. This is possibly because the poor blood-brain-barrier penetration of trastuzumab, due to its high molecular weight (~148 kDa), creates a tumor cell sanctuary. In addition, trastuzumab improves systemic control of extra-cranial disease and also increases survival, leading to the "unmasking" of brain metastases in patients who would otherwise have died of progression of systemic disease [14]. Therefore, early diagnosis or prevention of brain metastases could lead to better survival and a better quality of life. The crucial key to an effective preventive central nervous system approach is the identification of high-risk patients based on imaging, clinicopathological factors, and molecular profiles.
Several approaches have been proposed for the prophylaxis of brain metastases in this group of high-risk patients, such as: 1. Frequent detection with MRI screening. Early detection of occult brain metastases treated with radiotherapy has been shown to decrease the incidence of death from brain metastases threefold (48% versus 16%) [15]. Although screening for brain metastases is not part of routine check-up, there is no proven evidence of benefit from early identification [16].
However, this may rather be due to the lack of satisfactory selection criteria for a potential screening cohort. Therefore, a more accurate definition of the breast cancer patients and sub-types at high risk of early development of brain metastases is needed [17].
2. Use of alternative drugs for patients with her2-positive breast cancer, such as lapatinib (with superior penetration of the blood-brain barrier). Treatment with lapatinib led to a decrease in the incidence rate of brain metastases in her2-positive disease [18].
3. Prophylactic cranial irradiation is usually used as an active and proven treatment to eradicate sub-clinical brain metastases, prevent symptomatic intracranial recurrence and increase overall survival in various malignancies such as acute lymphoblastic leukemia and small cell lung cancer [19].
Although in breast cancer it has been overly feared and underused as a treatment modality and is not routinely used, as no survival benefit has been observed so far, it appears to be a realistic choice where there are no effective systemic therapies [20].
Patient Eligibility
This was a prospective study involving forty-eight patients with advanced breast cancer, presenting with triple-negative (n = 16) or her2-positive (n = 32) disease, who had advanced extra-cranial metastatic disease confirmed by histology and radiology. These patient groups were at high risk of developing brain metastases during the follow-up period from February 2015 to January 2020 at the Najaf Cancer Clinic (NCC), an affiliated clinic of Jaber Ibn Hayyan Medical University. They were included in the study and randomized: 24 patients to PCI and 24 patients to no PCI.
The staging procedures included a full history, physical examination, and laboratory evaluations. Serum cancer antigen (CA15-3) and carcinoembryonic antigen (CEA) levels were analyzed before treatment as a baseline, together with a bilateral diagnostic mammogram when indicated, breast and abdomen-pelvis ultrasound, chest X-ray, and radionuclide bone scintigraphy. Selected patients received computed tomography, positron emission computed tomography or magnetic resonance imaging of the breast as indicated.
Patients were presented and discussed at an interdisciplinary Tumor Board meeting, and treatment recommendations were made based on the National Comprehensive Cancer Network (NCCN) guidelines.
Hormone-receptor, estrogen-receptor, and progesterone-receptor status were evaluated by immunohistochemistry (ERa antibody, clone 1D5, Dako A/S, Glostrup, Denmark; and PR antibody, Dako A/S). Receptor expression was estimated as the percentage of positively stained tumor cells. Results were reported as positive or negative for scores 1, 2, and 3, with a 10% cutoff value of positive tumor cells [21].
Her-2 status was assessed via immunohistochemistry (Herceptest; Dako A/S) or dual-color fluorescence in situ hybridization (FISH; PathVision HER-2 DNA probe kit, Vysis Inc., Downers Grove, IL, USA). Tumors were classified as her2-positive if they had a staining intensity of 3 on the Herceptest; if a score of 2 was obtained, the tumors were retested by FISH [22].
Various systemic chemotherapy options were administered to all patients depending on patient characteristics. Tamoxifen with or without Zoladex, or aromatase inhibitors, was used in hormone-receptor-positive patients depending on their menopausal status. Trastuzumab was given to patients with her2-positive tumors. Follow-up examinations were scheduled every twelve weeks for the first year, then every twenty-four weeks thereafter. PET-CT was requested during follow-up in many patients as indicated. Symptomatic secondary brain metastases were diagnosed via imaging (typically MRI or cranial computed tomography with contrast).
Prophylactic cranial irradiation:
Whole-brain irradiation was applied with a 6 MV LINAC (linear accelerator); the radiotherapy fields enclosed the whole brain with the intracranial meninges, extending to the lower border of the first cervical vertebra, using three-dimensional conformal radiotherapy (3-DCRT) delivering 25 Gray in 10 fractions over 2 weeks, in line with recent data from lung-cancer PCI studies. PCI began 4 weeks after completion of chemotherapy, while in her2-positive disease treatment with trastuzumab continued during PCI. Forty-eight breast cancer patients who presented at an advanced stage were evaluated every 3 months with contrast-enhanced CT or MRI of the head in the first year, then every 6 months thereafter, from February 2015 to January 2020 at the Najaf Cancer Clinic (NCC), an affiliated clinic of Jaber Ibn Hayyan Medical University, and were eligible for inclusion.
End Points
- The primary endpoint was the time to development of symptomatic secondary brain metastases, measured as the interval from identification of the disease until the development of secondary brain metastasis. Diagnosis of symptomatic secondary brain metastases was made via appropriate imaging (contrast-enhanced computed tomography or magnetic resonance imaging brain scan) triggered by clinical judgment, based on the patient developing one or more of the key symptoms, as defined above.
Treatment Evaluation
All patients were evaluated at the start of the study as a baseline and then every twelve weeks in the first year and every twenty-four weeks thereafter, using high-resolution computed tomography (HRCT) of the chest, abdomen and pelvis, chest X-ray, and abdominal and pelvic ultrasound. Serum levels of cancer antigen (CA15-3) and carcinoembryonic antigen (CEA) were analyzed at intervals during the evaluation of tumor response.
Physical examination and laboratory tests (CBC and biochemistry) were performed before treatment. Tumor response was classified into complete response (CR), partial response (PR), stable disease (SD), and progressive disease (PD) according to the Response Evaluation Criteria in Solid Tumors (version 1.1) described by Eisenhauer et al. [23].
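Response criteria like these reduce to simple threshold logic on the sum of target-lesion diameters. As a rough sketch (our own simplification of RECIST 1.1, which in full also tracks the nadir sum, non-target lesions and new lesions):

```python
def recist_category(baseline_sum_mm: float, current_sum_mm: float) -> str:
    """Simplified RECIST 1.1 target-lesion response (ignores nadir tracking and new lesions)."""
    if current_sum_mm == 0:
        return "CR"                      # disappearance of all target lesions
    change = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"                      # >= 30% decrease from baseline
    if change >= 0.20 and (current_sum_mm - baseline_sum_mm) >= 5:
        return "PD"                      # >= 20% and >= 5 mm absolute increase
    return "SD"

print(recist_category(100, 65))  # 'PR'
```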
Adverse events were classified according to the NCI Common Terminology Criteria for Adverse Events (version 4.0). The time between randomization and occurrence of brain metastasis was defined as brain metastasis-free survival (BMFS). Censored patients were those with no events at the last follow-up date (July 1, 2020). For continuing patients, information was collected at the time of the follow-up visit.
STATISTICAL ANALYSIS
Statistical analysis was performed using the SPSS computer program (version 22). Numerical data were stated as median and range as appropriate. Qualitative data were expressed as percentages. Survival curves were estimated using the Kaplan-Meier technique.
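The same survival workflow can be sketched outside SPSS. The following Python example (with invented toy follow-up times, not the study data, and using the third-party lifelines package) computes Kaplan-Meier curves per arm and a Cox hazard ratio, whose interpretation matches the one used below: HR = 0.398 corresponds to a 1 - 0.398 ~ 60% hazard reduction.

```python
# Toy example of the survival analysis workflow (illustrative data only).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months":   [4, 10, 16, 18, 22, 25, 30, 9, 12, 14, 16, 20, 24, 28],
    "bm_event": [1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0],   # 1 = brain metastasis
    "pci":      [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],   # 1 = PCI arm
})

# Kaplan-Meier estimate of brain metastasis-free survival per arm
for arm, grp in df.groupby("pci"):
    km = KaplanMeierFitter().fit(grp["months"], grp["bm_event"], label=f"PCI={arm}")
    print(km.median_survival_time_)

# Cox model: the hazard ratio is exp(coef) of the 'pci' covariate
cph = CoxPHFitter().fit(df, duration_col="months", event_col="bm_event")
cph.print_summary()
```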
RESULTS
Between February 2015 and January 2020, 48 patients with established advanced breast cancer (TN = 16, her2-positive = 32) were enrolled at our clinic (NCC), the Najaf Cancer Clinic affiliated with Jaber Ibn Hayyan Medical University; 24 patients were randomized to receive PCI and 24 were randomized to no treatment.
The study was closed (on January 1, 2020) due to poor recruitment. The data displayed here were based on follow-up through 1 July 2020, with a median follow-up of 30 months (range 4 to 66 months). All patients received systemic chemotherapy and trastuzumab treatment as indicated, as well as hormone therapy as indicated in most patients. Comparative baseline patient characteristics for both arms are listed in Table 1.
Ages ranged between 22 and 67 years, with an average age of 43.9 years in the control arm and 44.3 years in the PCI arm, a non-significant difference with a p-value of 0.728; two patients (8.3%) were 65 years and older in the control arm, compared to only one (4.15%) in the PCI arm. The majority of our patients (66.6% in the control arm versus 58.3% in the treatment arm) were post-menopausal, a non-significant difference with a p-value of 1.00. Also, the greatest percentage of our patients (87.5% versus 87.4%) had a KPS of 0-1, with a non-significant p-value of 0.747. More than two-thirds of our patients were diagnosed with ductal histology (79.1% versus 70.8%), a non-significant difference with a p-value of 0.096. The majority of our patients (79.1% vs 87.4%) had a tumor size of 2 cm or more and node-positive disease, with non-significant differences (p-values of 0.162 and 0.622 for tumor size and positive nodes, respectively). The maximum percentage of our patients (91.6% vs 91.6%) had grade 2 or 3 disease, a non-significant difference with a p-value of 0.479. Two-thirds of our patients were her2-positive (66.6%) and one-third triple-negative (33.3%), with no significant difference between the two groups (p-value 1.000). All patients in both groups had metastatic disease, mostly to the lungs (41.6% in the control arm versus 37.5% in the treatment arm), without significant difference (p-value 0.704). Only one of the 24 treated patients had PCI discontinued after the first fraction of radiotherapy, due to grade 4 vomiting; she was hospitalized due to circulatory collapse and received parenteral nutrition and fluid replacement until hemodynamically stabilized, and returned for the full course of radiotherapy after a week's delay. All these patients received the planned 25 Gray in 10 fractions within the 2 weeks and were included in the final analysis.
Prophylactic Cranial Irradiation Effectiveness
Only four patients developed symptomatic brain metastases in the treatment arm, compared with nine patients in the control arm, and all of these patients in both groups received therapeutic SRS with a change in systemic therapy. The median brain metastasis-free
survival in the PCI arm was 22 months with a 95% CI (18.37-25.62), compared with 16 months with a 95% CI (13.78-18.21), p = 0.011 (Figure 1). The hazard ratio for brain metastases was 0.398, 95% CI (0.187, 0.844): the hazard is significantly reduced by 60% in the PCI arm compared to the non-PCI arm at any given time over the 30-month follow-up, with a p-value of 0.016. Although all patients died of progressive breast cancer, there were no treatment-related deaths.
Acute Toxicity of Prophylactic Cranial Irradiation
The worst grade observed for each adverse event during PCI treatment was documented using NCI CTCAE v3. Three of the 24 treated patients experienced grade 3/4 toxicity (two with grade 3 nausea and vomiting, one with grade 4 nausea and vomiting).
Grade 1 to 4 fatigue occurred in most of the treated patients, fifteen (62.5%), but only 3 (12.5%) had grade 3 or 4 fatigue.
Hair loss was almost universal due to chemotherapy, so no excess alopecia was detected during PCI.
Neurocognitive Function Impairment
Neurocognitive function testing is a non-invasive way to measure brain function; we evaluated it before PCI and then 6 and 12 months after PCI by means of the Mini-Mental State Exam (MMSE).
As is known, breast cancer patients who received chemotherapy and combined endocrine therapies have been reported to show lower scores on a working memory task than patients who received chemotherapy or endocrine therapy alone [24].
Furthermore, a longitudinal study reported that treatment-induced menopause was associated with cognitive decline after chemotherapy in patients with early-stage breast cancer [25].
Patients receiving brain irradiation treatments often experience radiation-induced headache and fatigue, as well as possible cognitive decline.Whole-brain radiation therapy has been shown to worsen fatigue in cancer patients [26].
The European Organization for Research and Treatment of Cancer (EORTC) reported that patients who received whole-brain irradiation had a measurable cognitive decline that they attributed to fatigue, as well as clinically meaningful higher fatigue scores.
We assessed neurocognitive function in both arms before and then 6 and 12 months after PCI by means of the Mini-Mental State Exam (MMSE). The following four cut-off levels were used to categorize the severity of cognitive impairment: no cognitive impairment, 24-30; mild cognitive impairment, 19-23; moderate cognitive impairment, 10-18; and severe cognitive impairment, ≤9 (Table 2). The baseline levels of neurocognitive function in both groups were identical, without statistical differences in MMSE scores between the two study arms, P = 0.137. In the PCI group, most MMSE scores declined at the 6-month assessment, with a significant difference at a p-value of 0.001, but returned to baseline at the one-year assessment, without statistical differences between the two groups, P = 0.679.
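These cut-offs amount to a simple banding rule; a minimal helper (our own illustration, not the authors' code) is:

```python
def mmse_severity(score: int) -> str:
    """Map an MMSE score (0-30) to the cognitive-impairment bands of Table 2."""
    if not 0 <= score <= 30:
        raise ValueError("MMSE scores range from 0 to 30")
    if score >= 24:
        return "no cognitive impairment"        # 24-30
    if score >= 19:
        return "mild cognitive impairment"      # 19-23
    if score >= 10:
        return "moderate cognitive impairment"  # 10-18
    return "severe cognitive impairment"        # <= 9

print(mmse_severity(26))  # 'no cognitive impairment'
```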
Quality of Life
The respondents' initial quality-of-life levels in both arms were identical, without statistical differences in the Functional Assessment of Cancer Therapy-Brain (FACT-Br) scores between the two study arms, P = 1.000. In the PCI group, most FACT-Br scores were lower at the 1-month assessment compared to the no-PCI group, with a significant difference at a p-value of 0.050, but returned to near baseline at the 3-month assessment, with no statistical difference between the two groups, P = 0.162.
Correlation Parameters
There is no correlation between BMFS in the PCI arm and age, menopausal status, KPS, abnormal BMI, LN status, tumor size, her2 status, or positive hormonal status, but there is a significant negative correlation between BMFS and disease grade (r = -0.615, p-value 0.001). Finally, there is a significant positive correlation between BMFS and quality of life at baseline (r = 0.587, p-value 0.003); there are also significant positive correlations at one month and at 3 months [(r = 0.483, p = 0.017) and (r = 0.401, p-value 0.0501), respectively]. There is no correlation between BMFS and the MMSE score. Identical findings were observed in the control arm, as detailed in the table below.
DISCUSSION
In our prospective study, her2-positive and triple-negative advanced breast cancer patients with extra-cranial metastases had a 40% risk of developing secondary brain metastases during the course of the disease. For this group of patients, a preventive medical intervention, such as prophylactic cranial irradiation or diagnostic screening, may be helpful.
The most promising preventive medical intervention to improve outcomes may be prophylactic cranial irradiation. Postmortem studies have revealed a high frequency of occult brain metastases in patients with metastatic breast cancer [27]. When brain metastases are diagnosed, survival is often short. Median survival reported for breast cancer patients with brain metastases generally ranges from 3 to 8 months [10].
Prophylactic cranial irradiation has been shown to effectively decrease the frequency of brain metastases and improve survival in lung cancer [28]. In a study of advanced-stage small cell lung cancer, prophylactic cranial irradiation reduced the frequency of brain metastasis from 40.4% to 14.6% (p < 0.001) and improved the one-year survival rate from 13.3% to 27.1% after stratification [29].
The total radiation dose required for effective prophylactic whole-brain irradiation is lower than that needed for therapeutic whole-brain irradiation of symptomatic secondary brain metastases, and the corresponding toxicity is tolerable [30].
In our study, PCI led to an approximate halving of the frequency of symptomatic secondary brain metastasis over 30 months of follow-up, compared with the control group without PCI. The median brain metastasis-free survival was significantly different between the PCI and no-PCI arms [22 months with 95% CI (18.37-25.62) versus 16 months with 95% CI (13.78-18.21), p=0.011], and was negatively correlated with tumor grade, p=0.001. The hazard ratio of brain metastasis was 0.398, 95% CI (0.187, 0.844), significantly reduced by 60% in the PCI arm compared to the no-PCI arm at any given time over the 30-month follow-up period, with p=0.016.
Prophylactic cranial irradiation revealed a negative impact on verbal memory, but showed no or slight impact on overall cognitive function or overall health status compared to no prophylactic cranial irradiation [31]. Data are available from two prospective randomized studies of PCI versus no PCI in small cell lung cancer, which gauged toxicity and quality of life [32]. Both showed no deterioration attributable to PCI during short-term follow-up, and patients who survived more than 2 years after treatment showed no evidence of functional cognitive impairment or deterioration in quality of life [33].
Since these trials involved a totally different patient population, their results are not entirely comparable with our study, in which there was a discernible effect on cognitive function at the 6-month follow-up after PCI. Although at the baseline evaluation the levels of NCF in both groups were identical, without statistical difference in MMSE scores between the two study arms (P=0.137), in the PCI arm most of the MMSE scores decreased at the 6-month assessment, with a significant p-value of 0.001, but returned to the original level at the 12-month assessment, without statistical difference between the two groups (P=0.679).
On the other hand, health-related quality of life was assessed initially and then 1 month and 3 months after PCI using the Functional Assessment of Cancer Therapy-Brain (FACT-Br), which includes 23 items, each scored on a scale from zero to four. Although the baseline QoL levels in both arms were identical, without statistical difference in FACT-Br between the two study groups (p-value 1.000), in the PCI arm most of the FACT-Br scores declined at the 1-month assessment compared to the no-PCI arm, with a significant p-value of 0.050, but returned to near the initial level at the 3-month assessment, without statistical difference between the two groups (P=0.162). Furthermore, three of the 24 PCI-treated patients experienced grade 3 and 4 toxicity (two grade 3 nausea and vomiting, one grade 4). Grade 1 to 4 fatigue occurred in most of the treated patients, fifteen (62.5%), but only 3 (12.5%) had grade 3 or 4. Hair loss was nearly universal due to chemotherapy, therefore no excess was seen during PCI.
Owing to the lack of supporting data, PCI currently has no established role in the treatment of breast cancer [20]. The time to development of brain metastases varies from patient to patient, and it cannot be excluded that in selected patients seeding of tumor cells into the brain may occur after prophylactic whole-brain irradiation. It is highly desirable that future randomized trials be conducted to assess the usefulness of prophylactic cranial irradiation in patients with high-risk breast cancer.
An additional promising prophylactic treatment for patients with her2-positive disease could be lapatinib, a dual tyrosine kinase inhibitor of EGFR and HER2. Fewer cases with secondary brain metastasis at first progression were observed after lapatinib treatment in a preliminary analysis of a randomized breast cancer trial (4 vs 13, total number of patients 399; P = 0.045) [34]. Lapatinib with Xeloda has also performed well as a first-line treatment of secondary brain metastases from her2-positive breast cancer in a phase II study [35].
The value of diagnostic screening for brain metastases in breast cancer patients is unclear. Patients with a single brain metastasis appear to have better survival than those with multiple metastases [36], and with surgery and stereotactic radiotherapy there are effective treatment choices for patients with a limited number (1-5) of brain metastases, so-called oligo-brain metastasis. However, early recognition of secondary brain metastases has not yet been shown to improve survival [37]. In this study, the only adverse prognostic factor for brain metastasis-free survival in the univariate analysis was tumor grade; the other factors were unrelated.
The limitations of our study are linked to the small sample size. The observed absence of a statistically significant impact of early age at diagnosis, advanced disease and other factors on secondary brain metastasis-free survival may possibly be explained by the insufficient statistical power of our study.
CONCLUSION
Patients with her2-positive and triple-negative advanced breast cancer with extra-cranial metastases had a nearly 40% risk of developing brain metastases during the course of the disease. Although there was an excess of tolerated toxicity in the PCI-treated patients compared to the non-PCI arm regarding cognitive function and quality of life, both returned to baseline after some time. All deaths in the PCI group were due to progressive breast cancer, with no treatment-related deaths. PCI resulted in a numerical halving of the frequency of symptomatic secondary brain metastases, which is statistically significant.
Future prospective trials are highly desirable to assess the effectiveness of a preventive medical intervention, such as prophylactic treatment or diagnostic screening, for these high-risk patients. ACKNOWLEDGMENTS: I would like to thank my colleague Assistant Professor Thaer Wally for his assistance with data collection and statistical calculations, thanks also to the other staff for their considerable aid and patient support, and finally thanks to our patients who accepted treatment and remained in close follow-up.
ABBREVIATIONS
3-DCRT: three-dimensional conformal radiotherapy; ESMO: The European Society for Medical Oncology; HER2: human epidermal growth factor receptor 2; Gy: gray; FACT-Br: Functional Assessment of Cancer Therapy-Brain; QoL: quality of life; LINAC: linear accelerator; MBC: metastatic breast cancer; MMSE: Mini-Mental State Exam; NCF: neuro-cognitive functions; PCI: prophylactic cranial irradiation; WHO-PS: World Health Organization Performance Status.
Table 1: Comparative patients' baseline characteristics
Table 4: Correlation between BMFS in the PCI arm with different variables | 2021-10-23T15:07:14.584Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "82ebe319ec991ddd7a68d9c704fa7e1c149d6a26",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125961543.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6e5f56457ae3ac8f928cc8dca27026ada3e16d8f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
119669163 | pes2o/s2orc | v3-fos-license | A short guide through integration theorems of generalized distributions
The generalization of Frobenius' theorem to foliations with singularities is usually attributed to Stefan and Sussmann, for their simultaneous discovery around 1973. However, their result is often referred to without much care for the precise statement, as some sort of magic spell. This may be explained by the fact that the literature is not consensual on a unique formulation of the theorem, and because the history of the research leading to this result has been flawed by many claims that turned out to be refuted some years later. This, together with the difficulty of proof-reading on this topic, brought much confusion about the precise statement of Stefan-Sussmann's theorem. This paper is dedicated to bringing some light on this subject, by investigating the different statements and arguments that were put forward in geometric control theory between 1962 and 1994 regarding the problem of integrability of generalized distributions. We will present the genealogy of the main ideas and show that many mathematicians who were involved in this field made some mistakes that were successfully refuted. Moreover, we want to address the prominent influence of Hermann on this topic, as well as the fact that some statements of Stefan and Sussmann turned out to be wrong. In this paper, we intend to provide the reader with a deeper understanding of the problem of integrability of generalized distributions, and to reduce the confusion surrounding these difficult questions.
Introduction
Foliation theory is the study of foliations on manifolds. A foliation on a manifold $M$ is a partition of $M$ into connected immersed submanifolds, called leaves. A foliation is called regular if the leaves have the same dimension, and singular otherwise. Over every point $x \in M$, the tangent space of the leaf $L_x$ through $x$ is a subspace of the tangent space of $M$. The data of a subspace $D_x$ of $T_xM$ at every point $x \in M$ define what is called a distribution $D = \bigcup_{x \in M} D_x$ on $M$. Notice that a distribution is not necessarily a sub-bundle of $TM$ because it may not have constant rank. For example, for a regular foliation, since the leaves have the same dimension, the induced distribution formed by the tangent spaces at every point has constant rank over $M$. In the singular case, however, the dimension of the tangent spaces to the leaves may vary from leaf to leaf. Since the tangent spaces to a given foliation form a distribution $D$ on $M$, and since the space of vector fields tangent to the leaves is closed under the Lie bracket, $D$ inherits the Lie bracket of vector fields. More precisely, we say that a distribution $D$ is involutive if, for every two sections $X, Y$ of $D$, the commutator $[X, Y]$ is a section of $D$ as well. On the other hand, a given distribution $D$ may not come from the tangent spaces of a foliation. We say that $D$ is integrable if there exists a foliation such that each leaf $L$ satisfies $T_xL = D_x$ for every $x \in L$. A legitimate question is thus: 'Given a distribution on $M$, what are the conditions under which it is integrable to a foliation?' This question is a modern formulation of a set of results and investigations that were related to, but not directly concerned with, the topic of integrating distributions into foliations. Originally, the problem emerged as finding the solutions of non-linear first-order partial differential equations, and was pioneered by Lagrange, who provided a method for systems involving up to two independent variables. It was then formalized for an arbitrary number of variables by Pfaff in his memoir at the University of Berlin in 1815, hence the name of Pfaffian systems [25]. He showed how one may transform a set of $n$ first-order nonlinear partial differential equations into a set of $2n$ ordinary linear differential equations. The simplification method presented by Pfaff could be seen as finding a submanifold of the space of variables on which some specific one-form vanishes. The problem was that Pfaff could not make precise the conditions under which one could use this simplification. This question, designated as the problem of Pfaff, led to multiple investigations that finally received an accurate answer from Frobenius in 1877.
Actually, the name 'Frobenius' theorem' comes from Cartan in 1922, because Frobenius' result had a tremendous influence on Cartan's calculus of differential forms. Frobenius' paper is actually archetypal of the production of the Berlin school of Mathematics at that time, which promoted the idea that a clear, rigorous and systematic presentation of the arguments was just as important as the discovery of new results by whatever means. Frobenius and his contemporaries in Berlin participated in a paradigm shift in modern mathematics by improving standards of rigor and presentation [25]. This is in part the reason why Frobenius is remembered for this theorem, whereas the work of his predecessors has been forgotten.
Indeed, it turns out that Frobenius' theorem is actually an algebraic reformulation of a result published in 1840 by Deahna [1], who then became a teacher in a secondary school (as was common at the time) before his premature death at age 28 in 1844. Deahna's work did not gain much interest, and it was later Clebsch, in 1861, editing a posthumous article of Jacobi, who improved Pfaff's argument [2]. Even if the problem of solving Pfaffian systems had been around for many years, it was the article of Clebsch that motivated Frobenius' interest in this question. Simultaneously, unaware of Clebsch's investigations, Natani proposed another approach to the question of solving Pfaffian systems, but the relationship with Clebsch's work was not realized until a few years later [25]. The modern formulation of Frobenius' theorem does not correspond to the one appearing in his original paper [3], because it has been modified to fit modern-day standards and conventions: a smooth distribution of constant rank is integrable if and only if it is involutive. Involutivity is a natural necessary condition because the set of vector fields on any leaf of a foliation is involutive, hence the corresponding distribution should be as well. Frobenius implicitly proved that it turns out to be a sufficient condition in the regular case; see [27] for a short proof.
Interestingly, the problem of integrating generalized distributions was approached in the same way as Frobenius' theorem, i.e. by solving a set of linear differential equations. Indeed, at the turn of the 1950s, numerous investigations arose in the field of control theory (the study of the solvability of first-order differential equations under the influence of one or more external parameters) and developed in the following years. Unfortunately, the picture in control theory involves external parameters that modify and generalize the structure of a Pfaffian system, so those parameters prevent the use of the integrability arguments of Pfaff, Deahna, Clebsch and Frobenius. Inspired by the work of Carathéodory on the geometrization of the calculus of variations and Pfaffian systems, many mathematicians aimed at solving some linear differential systems from a geometric perspective [26]. Chronologically, Hermann was the first to draw a bridge between control theory and differential geometry in 1963 [5]. In his view, the solutions of a differential system would correspond to the attainable set of points that are reachable from the initial data, following the flows of the vector fields associated with the differential equations. Hence, investigating integrability conditions of generalized distributions into singular foliations appeared as a necessity for control theorists. Actually, it was also Hermann who stated the first integrability conditions, both in the smooth and the analytic cases (without proof for the latter) [5]. Nagano proved in 1966 that analyticity together with involutivity are sufficient conditions for integrability [6]. Then, after a few years of small improvements [7,8], Stefan and Sussmann independently clarified in 1973 the conditions for a family of smooth vector fields $F$ to induce an integrable distribution [10,13]: the only assumption is that the induced distribution $D_F$ has to be invariant under the action of the flow of any element of $F$. Of course, both of them had supplementary material in their respective and quasi-simultaneous papers, but this is the main result that they had in common and that was thoroughly used in control theory. The $F$-invariance was not in fact a new idea, since it had been around since the first proposals of Hermann in 1963 [5], and since it was made explicit by Lobry in 1970 [8]. The breakthrough of Stefan and Sussmann was showing that one can drop Hermann's assumption that $F$ is a sub-Lie algebra of the space of globally defined vector fields. After this, Stefan himself deepened his research on the integrability problem and made interesting discoveries on this topic [15].
This paper is an investigation of the different statements and arguments that occurred in geometric control theory between 1962 and 1994 related to the problem of integrating generalized distributions. We will see that, even if some hard work was done, many results were forgotten or mistakenly attributed to other mathematicians, and above all, that many people involved in this story made mistakes that led to some confusion that persists today. In particular, we want to address the persistent claim that Stefan's and Sussmann's results were not totally correct when they were published, and that Balan corrected them. We will see to what extent this is true, and we will clarify Balan's statements. The goal of this article is to clarify who said what and what is proven regarding these subtle questions. In Section 2, we recall mathematical notions that are commonly used in the field. In Section 3, we present the main historical results that were proven in the 1960s, namely Nagano's and Hermann's theorems. In Section 4, we give an overview of the path that led from these pioneers to the well-known theorem of Stefan and Sussmann in 1973. Then, in Section 5, we discuss improvements and some results that followed this breakthrough. In Section 6 (the conclusion), we look back at the history of this long-standing question, we clarify present debates and we propose future research.
Mathematical background
There are two approaches to the problem of integrating a distribution into a foliation. The approach in control theory starts from a linear differential system, then defines a set of vector fields that carries all the information from the differential equations, and then looks for the solutions of these equations as the points that are reachable by the flows of these vector fields. On the contrary, the geometric approach is more focused on the concept of distribution as a given object, and questions the possibility that this distribution is the tangent space of a foliation. Thus, it is not surprising that the two communities refer to the same theorems, but under different names and formulations. Let us now recall some fundamental mathematical notions: A distribution D is smooth at a point x if any tangent vector X(x) ∈ D_x can be locally extended to a smooth vector field X on some open set U ⊂ M such that X(y) ∈ D_y for every y ∈ U. The space of smooth sections of D is the sub-sheaf Γ(D): U ↦ Γ_U(D) of the sheaf of vector fields X, consisting of the smooth (resp. analytic) vector fields that take values in D. As a side remark, let x ∈ M; then any family of sections of D that is linearly independent at x remains linearly independent in a neighborhood of x. This implies that the rank of the distribution in a neighborhood of x is greater than or equal to the dimension of D_x. All these definitions have similar counterparts in the real analytic category, i.e. when all objects are analytic.
In the 1960s, mathematicians used mostly globally defined vector fields, since they had in mind the link between geometry and control theory. In the 1970s, Stefan and Sussmann gave the first geometric results that involve locally defined vector fields. When it is not specified, vector fields can be either globally or locally defined. A set of (possibly locally defined) vector fields F induces a distribution D^F on M by the formula:

D^F_x = Span{ X(x) | X ∈ F defined at x }, for every x ∈ M.

Also, F induces a pseudogroup of (possibly local) diffeomorphisms of M [10,13,23]. First, any X ∈ F defines a flow t ↦ φ^X_t: for every t ∈ R, the map φ^X_t is a (local) diffeomorphism of M, with inverse φ^X_{−t}. The set of all (local) diffeomorphisms {φ^X_t}_{t∈R} is thus a group, called the group of diffeomorphisms generated by X and denoted by G_X. Second, the set of all flows {φ^X_t | X ∈ F, t ∈ R} generates a subgroup of the group of (local) diffeomorphisms of M, namely the smallest group containing the union of the G_X over X ∈ F. It is called the group of diffeomorphisms generated by F, and it is denoted by G_F. An element of G_F is a composition of flows of vector fields:

φ = φ^{X_1}_{t_1} ∘ ... ∘ φ^{X_n}_{t_n}, where X_i ∈ F and t_i ∈ R for every 1 ≤ i ≤ n.
The F-orbit of a point x ∈ M is the set O^F_x = { φ(x) | φ ∈ G_F } of all points reachable from x by such compositions of flows; in particular, φ^X_t(y) belongs to O^F_y for every y in the domain of X and t ∈ R.
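As a minimal illustration of these notions (our example, chosen for its simplicity), take M = R^2 and F = {X} with X the rotation vector field:

\[
X = -y\,\frac{\partial}{\partial x} + x\,\frac{\partial}{\partial y},
\qquad
\varphi^X_t(x, y) = \big(x\cos t - y\sin t,\; x\sin t + y\cos t\big).
\]

Here G_F is the group of rotations around the origin, the F-orbit of any point (x, y) ≠ (0, 0) is the circle centered at the origin passing through it, and the orbit of the origin is the origin itself; compare with Example 1 below, which is the analogue on R^3.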
The link between F-orbits and the integration of distributions is subtle. The point is that the distribution D^F generated by the family F may not be equal to the tangent space of the F-orbits. Indeed, by definition of the Lie bracket, the F-orbits of a given linear differential system contain the integral curves of the commutators of vector fields of F. Sussmann provides some precision in [10]: given some x ∈ M, if X, Y are vector fields tangent to the orbit O^F_x at x, then [X, Y] is tangent to O^F_x as well. However, the distribution D^F may not be closed under Lie bracket, because F may not be either. This implies that in general we have

D^F ⊂ D^{Lie(F)},

possibly with a strict inclusion, where the Lie closure Lie(F) is the smallest Lie algebra generated by the elements of F, i.e. the smallest space of vector fields containing F and satisfying [Lie(F), Lie(F)] ⊂ Lie(F). Hence, for control theorists, it is not very interesting to look at the distribution D^F, but rather at the distribution that also contains the directions spanned by the commutators of elements of F. The preceding argument implies that the F-orbits contain the Lie(F)-orbits; since the reverse inclusion is obvious (because F ⊂ Lie(F)), they coincide. The problem of finding the solutions of a linear differential system could then be reformulated as integrating the distribution induced by Lie(F). This is consistent with the idea that the space of vector fields tangent to the orbits is closed under Lie bracket. In general, if the F-orbits are submanifolds, one has:

D^F_x ⊂ D^{Lie(F)}_x ⊂ T_x O^F_x, for every x ∈ M.

The equality on the right-hand side is guaranteed when F satisfies some particular conditions. For example, Nagano showed that if F is analytic, the equality is automatically satisfied, whereas Hermann's condition is that F be locally finitely generated. Essentially, these are the two cases that most control theorists consider, see [22,23,26] and Section 3 for details. On the other hand, Stefan and Sussmann proved that the tangent spaces of the F-orbits coincide with the smallest F-invariant distribution containing D^F [10,13]. Example 1. On M = R^3, let F be the family of vector fields generated by the action of so(3) on R^3. It defines an integrable distribution: the leaves are the concentric spheres, together with the point at the origin. The tangent bundle of each sphere is indeed invariant under the action of so(3). Remark. If we restrict ourselves to the semi-group H_F generated by the flows of elements of F with positive times only, we may obtain a different set, called the attainable set of x. This corresponds to the situation where only forward-in-time motions are allowed, and this is essentially the set that control theorists are interested in. These are the conventions mostly used by Sussmann and control theorists [10,22,23,26]. Notice that Stefan's conventions are slightly different: in his fundamental paper [13], he designates the F-orbits as accessible sets. An equivalent formulation uses compositions of integral curves of elements of F: an integral path starting at x is a concatenation of finitely many integral curves of elements of F, each followed during some positive time, and the attainable set of x is precisely the set of endpoints of such integral paths. Obviously, if the family F is symmetric, i.e. if F = −F, then the F-orbit of x and the attainable set of x coincide.
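To illustrate the strict inclusion D^F ⊂ D^{Lie(F)} with a concrete example (a standard one, not taken from the papers under discussion), take M = R^2 and:

\[
F = \Big\{\, \frac{\partial}{\partial x},\; x\,\frac{\partial}{\partial y} \,\Big\},
\qquad
\Big[\, \frac{\partial}{\partial x},\; x\,\frac{\partial}{\partial y} \,\Big] = \frac{\partial}{\partial y}.
\]

At a point (0, y) of the vertical axis, D^F_{(0,y)} is the line spanned by ∂/∂x, whereas Lie(F) contains ∂/∂y, so that D^{Lie(F)}_{(0,y)} = T_{(0,y)}R^2. The F-orbit of (0, y) is all of R^2: one first flows along ∂/∂x to reach a point with x ≠ 0, then along x ∂/∂y to move vertically, then back along ∂/∂x. Hence D^F is strictly smaller than D^{Lie(F)} = T O^F on the vertical axis, while both families have the same (unique) orbit.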
It is now time to introduce more geometric tools, and to turn to the theory of integration of distributions. Given a point x ∈ M, we say that the distribution D is integrable at x if there is an integral manifold N of D that contains x (an integral manifold being a connected immersed submanifold N such that T_yN = D_y for every y ∈ N). An integral manifold through x is said to be maximal if it contains every integral manifold through x. A distribution D is integrable if for every x ∈ M there exists a maximal integral manifold through x. In particular, if D is integrable, M is the disjoint union of the maximal integral manifolds of D. Moreover, if an integrable distribution D = D^F is induced by some family of vector fields F, then the maximal integral manifolds are the F-orbits; this is precisely the content of Stefan-Sussmann's theorem, see Section 4.
Here, the word 'integrable' refers directly to the theory of foliations. Recall that one defines a (possibly singular) foliation as a partition of M into connected immersed submanifolds, which are called the leaves of the foliation. These definitions imply that if a distribution D is integrable, then the maximal integral manifolds of D form the leaves of a foliation. Stefan has even shown that there exist distinguished charts that are adapted to the foliation, see Section 6. Given a point x ∈ M, we write the maximal integral manifold of D through x as L_x and we call it the leaf through x. Since the map x ↦ dim(L_x), which associates to any point x the dimension of its leaf, is lower semi-continuous, the dimensions of the leaves in a neighborhood of x are necessarily greater than or equal to dim(L_x). This is consistent with the fact that the rank of a distribution is lower semi-continuous as well. A point x ∈ M is said to be a regular point if the dimension of the leaves is constant in some neighborhood of x, and a singular point (or a singularity) otherwise. A leaf L is said to be regular if every point of L is a regular point, and singular otherwise. The set of regular points is open and dense in M, and the leaves of highest dimension are necessarily regular.
Example 2. Let D be the smooth distribution on R^2 defined by:

D_{(x,y)} = Span(∂/∂x) for y > 0, and D_{(x,y)} = {0} for y ≤ 0.

The regular points are those that do not belong to the horizontal axis. This distribution is integrable into a foliation of R^2 that has horizontal leaves for y > 0 and points otherwise. Last but not least, every theorem presented in this article is constructive, which means that it provides a recipe to build the integral manifolds of a distribution. In most cases, the integral manifolds are the F-orbits, but these do not come with a natural topology and smooth structure. This is where a result by Chevalley is systematically referred to: the construction of a 'strong' topology on M that is adapted to such integral manifolds. This construction is presented in chapter 3, Section VIII, of [24]. More precisely, the idea is to define, for every F-orbit, a family of small patches that cover the orbit. In most cases, this is done by using the exponential map, because the integral curves of an element of F stay in the orbit. Hence each point of an F-orbit induces such a small patch in its vicinity, which is also an integral manifold through that point. The union of all these patches is taken as a basis for the new topology, which turns out to be finer than the original one. Mathematically, it is as if every open set of the old topology were now 'foliated' by the integral manifolds through each of its points. This topology enables one to rigorously define continuous maps, local homeomorphisms, and so on. Then, one can rely on this topology to provide each leaf with a smooth (or real analytic) manifold structure. Originally, the construction of Chevalley was designed for analytic regular foliations, but Hermann and Nagano could adapt it easily to their context [22]. They indeed only considered vector subspaces of X(M), hence the basis of the new topology could be obtained from the small exponential patches based at each point. On the contrary, Stefan and Sussmann considered families of vector fields that may not satisfy the vector space axioms, and they then had to adapt the construction of Chevalley to their own needs. It is precisely for this reason that their proofs are tedious to go through, and that their work has to be acknowledged.
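One minimal way to make this construction precise (our formalization of the idea just sketched, with N_y denoting the exponential patch based at y) is to take as a basis of the new topology the family

\[
\mathcal{B} \;=\; \big\{\, V \cap N_y \;\big|\; V \subset M \text{ open},\ y \in M \,\big\}.
\]

Since every point y belongs to its own patch N_y, each open set V of the original topology is the union of the sets V ∩ N_y for y ∈ V, so the generated topology is finer than the original one. Checking that this family is indeed a basis, i.e. that two patches meet along sets that are again unions of basis elements, is precisely the technical heart of the construction; each leaf, equipped with the induced topology, then inherits a smooth atlas from the exponential charts.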
Remark. As a final remark, a recent and important result [18] shows that any smooth distribution D is actually point-wise finitely generated, i.e. there is a finite family of vector fields F such that D = D^F. However, this fact does not imply that the sheaf of sections Γ(D) (or any other family of vector fields generating D) is finitely generated. For example, take the vector field X = χ(x) ∂/∂x defined on M = R, where χ is a smooth function that is flat at the origin, for instance:

χ(x) = e^{−1/x} for x > 0, and χ(x) = 0 for x ≤ 0.

The associated distribution D^X consists of the null vector space on R_− and the tangent space T_xR on the open half-line R*_+. This distribution is point-wise generated by X: D^X_x = Span(X(x)), and it is obviously integrable, its leaves being the points of R_− and the half-line R*_+, the latter being an integral curve of the vector field X. However, the sections of D^X are not finitely generated in any neighborhood of 0, see [18].
Nagano and Hermann
As was said in the introduction, Hermann played a prominent role in the search for a generalization of Frobenius' theorem to generalized distributions. In a paper in 1963 [5], he made explicit the relationship between control theory, Pfaffian systems and foliation theory. Hermann introduced the geometric setup for control theory: linear differential systems can equivalently be seen as families of vector fields. As such, he can be seen as the founder of geometric control theory. In the same paper, he gave not just one sufficient condition for the integrability problem, but three. Two of them relate to the smooth case and were proven by himself one year earlier [4]. The last one, in the analytic case, is a claim that he had not yet proven at that time, and that was proven by Nagano in 1966 [6].
Let us start with the analytic case. Item (c) in [5] states that any analytic family of vector fields that is involutive induces an integrable distribution. Working in the analytic category is simpler than in the smooth category, because one can rely on some properties of analytic geometry. Reproducing the statement of Hermann, Nagano considers the set of globally defined analytic vector fields X(M) as an infinite-dimensional Lie algebra, then picks a subspace F (i.e. a vector space of globally defined vector fields), and his claim is as follows:
Theorem 2. Nagano (1966) Let M be a real analytic manifold, and let F be a sub-Lie algebra of X(M). Then the induced analytic distribution D^F is integrable.
Proof. We only give here a sketch of the proof, and we refer to [6] for more details. The original proof of Nagano consists in showing that for any point x ∈ M: 1. the set N_x of integral manifolds through x is not empty, and that 2. any finite intersection of integral manifolds through x, restricted to some open neighborhood of x, is an embedded integral manifold. He proves the first item by splitting the distribution in a neighborhood U of x, in the sense that he selects a subspace F(x) ⊂ F of dimension dim D^F_x whose elements span D^F_x, together with a complement G(x); the image of a small ball in F(x) under the exponential map is then an embedded submanifold through x, and analyticity guarantees that G(x) vanishes on it, so that it is an integral manifold (this splitting is discussed again in Section 5, in connection with Balan's proof). This construction is reminiscent of the one of Chevalley [24]. Chevalley proves Frobenius' theorem for analytic regular distributions, and Nagano's theorem is a direct generalization of this theorem to analytic singular distributions.
The proof of Nagano uses the analyticity of the vector fields by summoning the property that any real analytic function whose successive derivatives all vanish at the origin is the zero function. This theorem generalizes Frobenius' theorem in a straightforward way to the analytic (and singular) case, because it doesn't assume anything other than involutivity. We see in the following example that in the smooth case, the involutivity condition is no longer sufficient: Example 3. Let D be the smooth distribution on R^2 defined by:

D_{(x,y)} = ∂/∂x for x ≤ 0, and D_{(x,y)} = T_{(x,y)}R^2 for x > 0,

where we understand ∂/∂x as the subspace of T_{(x,y)}R^2 spanned by the tangent vector ∂/∂x. Sections of this distribution consist of sums of horizontal vector fields and of vertical vector fields which vanish for x ≤ 0. The bracket preserves this property, and therefore the distribution is involutive.
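To spell out the involutivity claim (a short verification in our notation), write two sections of D as:

\[
X_i = f_i\,\frac{\partial}{\partial x} + g_i\,\frac{\partial}{\partial y} \quad (i = 1, 2),
\qquad g_1 \equiv g_2 \equiv 0 \text{ on } \{x \le 0\},
\]

\[
[X_1, X_2] = \big(X_1(f_2) - X_2(f_1)\big)\,\frac{\partial}{\partial x} + \big(X_1(g_2) - X_2(g_1)\big)\,\frac{\partial}{\partial y}.
\]

Since g_1 and g_2 are smooth and vanish on the closed half-plane {x ≤ 0}, all their partial derivatives vanish there as well, so the coefficient X_1(g_2) − X_2(g_1) vanishes on {x ≤ 0}: the bracket is again a section of D.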
We now show that, though this smooth distribution is involutive, it cannot be integrated into a singular foliation. On the right half-plane (for x > 0), the leaf associated to this distribution is all of the open half-plane. On the contrary, on the open left half-plane (for x < 0) the vertical vector fields vanish, hence the distribution admits integral manifolds that are horizontal lines (since at each point the vector field ∂/∂x generates the tangent space to the leaf). The maximal integral manifold passing through a point (x, y) with x < 0 is the line N_y = {(w, y) | w < 0}. On the vertical axis, the distribution is spanned by ∂/∂x, but for any given y ∈ R, the subset N_y ∪ {(0, y)} is not an immersed submanifold of R^2, because it is closed at its right end (it is a manifold with boundary). Hence the points on the vertical axis do not admit maximal integral manifolds, i.e. the distribution is not integrable.
The above example shows that in the smooth case, involutivity does not imply integrability. Hermann proposed two conditions to solve this issue. The condition for which Hermann is known is condition (b) in [5]; it corresponds to the condition found in the theorem now bearing his name, which was proven one year earlier in [4]. Because he was focused on the relationship with control theory, where equations may be defined everywhere, Hermann considered only globally defined vector fields. In other words, he relied on subspaces F ⊂ X(M) to describe a linear differential system of equations. He says that F ⊂ X(M) is locally finitely generated if every point of M admits an open neighborhood U ⊂ M for which there exist X_1, ..., X_p ∈ F such that the restriction of F to U is contained in the C^∞(U)-module generated by X_1, ..., X_p. In other words:

F|_U ⊂ C^∞(U)·Span(X_1|_U, ..., X_p|_U).

Then, Hermann's statement in [4] is:
Theorem 3. Hermann (1962) Let M be a smooth manifold, and let F be a locally finitely generated sub-Lie algebra of X(M). Then the induced smooth distribution D^F is integrable.
Proof. As before, this is but a sketch of the original proof, whose details can be found in [4]. The proof of Hermann relies on showing that the rank of the distribution is constant along the integral curve of any vector field X ∈ F, and hence on the F-orbits. Hermann shows this result by using the fact that F is involutive: since it is also locally finitely generated, the Lie bracket with X can be expressed in terms of local generators of the distribution. He uses this property to obtain a matrix differential equation in T_xM, and solving it shows that the rank of D^F is locally constant on the integral curve of X. By a compactness argument, he concludes that it is constant on the entire integral curve of X.
He then defines L_x as the set of all points of M that can be joined to x by an integral path of F. This set of points L_x coincides with the F-orbit of x, because the set of vector fields F is symmetric. Notice that the involutivity of F implies that D^F = D^{Lie(F)}. Since the rank of D^F is constant along the integral curve of any element X ∈ F, it is constant over L_x; the latter is then a good candidate to be the leaf of D^F through the point x.
The topology and the smooth atlas on L_x are induced by the construction of the leaf itself: for any point y ∈ L_x, one can find a subspace F(y) ⊂ F whose dimension is the dimension of L_x (hence the importance of showing that it is constant over L_x), and then the exponential map defines an embedding of a neighborhood of zero in F(y) into M such that 0 is mapped to y. By definition, the image N_y of this exponential map is entirely contained in L_x. Then Hermann uses these embedded submanifolds {N_y}_{y∈M} as a basis for the new topology on M, as discussed in the construction of Chevalley [24]. This topology is used afterwards to equip the F-orbits with a manifold structure. All this discussion is made possible because the families of vector fields that Hermann studies are subspaces of X(M); thus he can use the exponential map as a tool to generate charts on the F-orbits. This is definitely not allowed anymore in the generalization of this result by Stefan and Sussmann, who work with families of vector fields that are not necessarily vector spaces. Notice that an alternative choice of charts for L_x is made by Lobry in [8], who provides a set of curvilinear coordinates adapted to any choice of basis of D^F_y. The last integrability condition proposed by Hermann in his 1963 paper is in fact nothing but the second part of the proof of Theorem 3, see condition (a) in [5]. More precisely, Hermann's statement is that if F is a sub-Lie algebra of X(M), and if the rank of the distribution D^F is constant on the integral paths of F, then D^F is integrable. Notice that the converse of Hermann's statement is not true, even though it is claimed in Theorem 1.41 in [28]. It can indeed be refuted by a counter-example due to Balan in some unpublished notes: on M = R^2, one takes two vector fields X and Y whose coefficients involve a smooth function φ that vanishes at (0, 0), and lets F be the C^∞(M)-module generated by X and Y. The induced distribution D^F is obviously integrable. However, for any couple (x, y) ≠ (0, 0), the coefficients of the commutator [X, Y] involve the functions (x, y) ↦ 2x φ(x, y)/(x² + y²) and (x, y) ↦ 2y/(x² + y²). One can show that the first function is smooth at the origin, but that the second one does not admit a limit at (0, 0). Hence it is not a smooth function, and the commutator [X, Y] does not take values in F, that is: F is not involutive. This example can be used to show that Theorem 1.40 in [28] is wrong as well, since the finite set of vector fields consisting of X and Y is not in involution, even though the induced distribution is integrable.
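To balance these counter-examples, here is a minimal positive illustration of Hermann's theorem (our example): on M = R, let F be the set of all smooth vector fields vanishing at the origin, that is:

\[
F = \Big\{\, f(x)\, x\, \frac{\partial}{\partial x} \;\Big|\; f \in C^\infty(\mathbb{R}) \,\Big\},
\qquad
D^F_x =
\begin{cases}
T_x\mathbb{R} & \text{if } x \neq 0,\\
\{0\} & \text{if } x = 0.
\end{cases}
\]

A direct computation gives [f x ∂/∂x, g x ∂/∂x] = (fg′ − gf′) x² ∂/∂x, which lies in F, so F is a sub-Lie algebra of X(R), and it is (even globally) finitely generated by the single vector field x ∂/∂x. Hermann's theorem then applies, and D^F indeed integrates into the foliation with leaves (−∞, 0), {0} and (0, +∞).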
Very interestingly, Hermann did not receive much recognition for his third statement, even though it had deep consequences for integrability issues. Every mathematician who tried to prove a result on the integrability of smooth distributions systematically emphasized the importance of working with the integral curves of the family of vector fields F, in particular to show that the distribution is invariant along the integral paths. Having been the first to bring attention to this idea, Hermann deserves to have his work on the integration of generalized distributions counted among the most influential in the field.
As a final remark, notice that the fact that a smooth distribution is integrable does not necessarily imply that the sheaf of its sections is locally finitely generated (see the final remark in Section 2). In the years following the breakthrough of Hermann and Nagano, some attempts were made to find the minimal assumptions that are sufficient for a smooth distribution to be integrable. For example, after a careful analysis of the point in Nagano's proof that requires analyticity, Matsuda provided an adaptation of Nagano's theorem to the smooth case, at the price of requiring rather unnatural conditions [7].
Improvements and achievements
After Hermann, the first important contribution to the problem of integrating smooth distributions was made by Lobry in 1970 [8]. He tried to reproduce the proof of Hermann by weakening the two assumptions that F is involutive and locally finitely generated, and by mixing them into a unique condition. Thus, Lobry proposed the following: a set of vector fields F is locally of finite type if for every x ∈ M there exist X_1, ..., X_p ∈ F that span D^F_x, and such that for every X ∈ F, there exist an open neighborhood U of the point x and some functions (f_ij)_{1≤i,j≤p} ∈ C^∞(U) such that:

[X, X_i](y) = Σ_{j=1}^p f_ij(y) X_j(y), for every y ∈ U and every 1 ≤ i ≤ p.   (4.1)

Originally, Lobry did not require that the vector fields X_1, ..., X_p span the distribution at x; it was a condition that Sussmann thought was missing, so he added it in his paper when he referred to Lobry's conditions [10]. The main difference between Hermann's condition and Lobry's conditions (4.1) is that in the first case, since one picks a set of local generators of the family F, they span the distribution D^F in some neighborhood of x, whereas under Lobry's assumption, the set of vector fields X_1, ..., X_p that span D^F_x may not span D^F in any neighborhood of x. What can only be shown is that the distribution they induce has constant rank on the integral curve of any element X ∈ F, provided one stays sufficiently close to x. Hence, using the same kind of arguments as in the proof of Hermann is not sufficient to conclude that the distribution D^F has constant rank on the integral curves of elements of F. Thus, contrary to the claim of Lobry, the condition that a family of vector fields is locally of finite type is not a sufficient condition for integrability. This was first noted by Stefan, who proposed more subtle conditions in [15], see Sections 5 and 6.
On the other hand, it may happen that Lobry's conditions (4.1) are sufficient for integrability, when applied to the correct set of vector fields. In particular, a smooth distribution is integrable as soon as the sheaf of sections of D satisfies Lobry's conditions. This is a claim made by Stefan in 1974 [13], but he did not provide any proof before his 1980 paper, where it appears as a corollary of a more general proposition, see Theorem 4 in [15]. We will show in Section 5 that even though Theorem 4 is wrong as it is written in [15], it can be subtly modified to obtain a correct proof of Stefan's claim on Lobry's conditions. Sussmann himself, convinced of the validity of Lobry's conditions in broad generality and of the truthfulness of the proof of Lemma 1.2.1 in [8], provided a refinement of the condition that F is locally of finite type, by noticing that since one works on the integral curves of elements of F, one can get rid of the open neighborhood condition and only ask that the bracket [X, X_i] be controlled along the integral curve of X. In other words, Sussmann's integrability conditions are that for every x ∈ M there exist X_1, ..., X_p ∈ F that span D^F_x, and such that for every X ∈ F, there exist some ε > 0 and some functions (g_ij)_{1≤i,j≤p} ∈ C^∞(]−ε, ε[) such that:

[X, X_i](φ^X_t(x)) = Σ_{j=1}^p g_ij(t) X_j(φ^X_t(x)), for every t ∈ ]−ε, ε[ and every 1 ≤ i ≤ p.   (4.2)

Unfortunately, even if this last condition seems mathematically satisfying, because it appears as an optimized generalization of Hermann's condition for integrability, it is not sufficient. This was pointed out by Balan in [16]. Independently, Stefan, in his 1974 paper [13] (written and submitted in 1973, however), provided a resembling condition that is sufficient for integrability. He slightly modified the wording and added the conditions that the vector fields X_1, ..., X_p depend on the choice of the vector field X ∈ F, and that they span D^F on the integral curve of X. In other words, for every x ∈ M and X ∈ F, there exist a finite set of vector fields X_1, ..., X_p ∈ F, some ε > 0 and some functions (g_ij)_{1≤i,j≤p} ∈ C^∞(]−ε, ε[) such that the relations (4.2) hold and X_1, ..., X_p span D^F along the curve t ↦ φ^X_t(x). The important idea is that now the vector fields X_1, ..., X_p depend both on x and on X. The proof that a family of vector fields satisfying such conditions induces an integrable distribution follows exactly the same lines as Hermann's proof. However, since it is usually very cumbersome to check Stefan's integrability conditions, mathematicians do not use them, and they are today mostly forgotten.
The story does not stop here: the fact that Lobry proposed a wrong claim was systematically emphasized by Stefan [13,15]. However, he did not present any counter-example before his 1980 paper [15], nor did he ever publicly mention that Sussmann's conditions were not sufficient either, even though he may have been completely aware of it. This observation was made by Balan in 1994 [16]. He explained in detail that the implication (e) ⟹ (d) of Theorem 4.2 in [10] (which relies on Sussmann's conditions (4.2)) is false, but that the equivalences (a) ⟺ (b) ⟺ (c) ⟺ (d) ⟺ (f) are true (and these form the content of the so-called 'Stefan-Sussmann theorem'). The counter-example that was proposed by Stefan to refute Lobry's claim (and that also works to refute Sussmann's conditions (4.2)) is the following: Example 5. Let M = R^2 and let F be the family of vector fields containing all the vector fields of the form:

f ∂/∂x + g ∂/∂y,

for some functions f, g ∈ C^∞(R^2) such that g ≡ 0 in some neighborhood of (0, 0). The family F is actually a C^∞(R^2)-module that is locally of finite type [15,16]. However, the induced distribution D^F turns out to be:

D^F_{(x,y)} = T_{(x,y)}R^2 for (x, y) ≠ (0, 0), and D^F_{(0,0)} = Span(∂/∂x),

which is obviously not integrable at the origin. Stefan noticed, though, that the space of sections of D^F is not locally of finite type. That is why he conjectured in 1974 that if the space of sections of a distribution is locally of finite type, then the distribution is integrable [13].
Following Stefan's refutation, Lobry published a public erratum [14]. Even if he was wrong on this precise point, he nonetheless has to be acknowledged for the insight that led to the breakthrough of Stefan and Sussmann. Lobry was indeed the first to show that the flows of the vector fields are a crucial tool to prove integrability. More precisely, his Lemma 1.2.1 in [8], which he mistakenly attributed to Hermann and which was later refuted by Stefan [13], was implicitly providing the condition for the distribution D^F to be integrable: it has to be F-invariant. This is precisely the content of the theorems of Stefan and Sussmann in their subsequent papers [10,13], which were submitted independently in 1972 and 1973, respectively. Both Stefan and Sussmann brought the discussion to another level, because they no longer relied on Lie algebras of globally defined vector fields as in Hermann's and Nagano's papers, but allowed F to be a mere family of vector fields that may be locally defined. Their 'tour de force' was then to adapt Chevalley's construction of a refined topology to the integral manifolds of D^F. With different notations, Stefan and Sussmann presented similar results that could be reformulated as follows:

Theorem 4. Stefan-Sussmann (1973) Let M be a smooth manifold, and let F be a family of (possibly locally defined) smooth vector fields on M. Then the distribution D^F is integrable if and only if it is invariant under the flow of every element of F; in that case, the maximal integral manifolds of D^F are the F-orbits.

Historically, it was Sussmann who first published this result in a short note without proofs in January 1973 in the Bulletin of the American Mathematical Society [11]. During the same year, his seminal paper was published in June in the Transactions of the American Mathematical Society [10]. Both were submitted in June 1972. On the other hand, Stefan submitted his own article in July 1973 to the Proceedings of the London Mathematical Society [13], but it was only published in December 1974. To claim his result a little bit faster, he submitted a short note to the Bulletin of the American Mathematical Society in March 1974 [12], which was actually published in November 1974. These two papers are a condensate of the work he had done during his PhD, which he defended in December 1973 at the University of Warwick [9]. However, it seems that Stefan was not aware of Sussmann's work before June 1973, as he says explicitly in [12], and as he emphasizes in the introduction of his paper [13], where he notes that the draft was already written when he heard about Sussmann's papers. This, together with Stefan's deep understanding of the questions of integrability and the dissemblance of Stefan's and Sussmann's notations and formalism, excludes any suspicion of plagiarism.
Example 6. To illustrate this theorem, let us go back to Example 3, where a non-integrable distribution was presented. Indeed, let F be any family of vector fields that generates D, and let u = (0, y) be any point of the vertical axis. Then, by definition, we know that there is a vector field X ∈ F which is defined in a neighborhood U of u and such that X(u) = ∂/∂x. One can then push forward the distribution D_v = T_vR^2, for some v ∈ U ∩ {(x, y) | x > 0}, to the left half-plane, using the flow of −X. But on the left half-plane, the distribution is one-dimensional, hence D is indeed not F-invariant. As expected, it is not integrable either.
The formulation of Theorem 4 is rather satisfying, because it is a direct analogue of Frobenius' theorem, with the advantage of making the condition for integrability evident in terms of families of vector fields. In the case that the distribution D^F is not integrable, the F-orbits still exist and are submanifolds of M. Stefan and Sussmann characterized the tangent space of these orbits as the smallest distribution containing D^F that is F-invariant, see Theorem 4.1 in [10] and Theorem 1 in [15], where Stefan finally adopted Sussmann's notations. This is the content of the well-known Orbit Theorem in modern-day control theory [22,23,26]. In this field, the original statements of Stefan and Sussmann have been slightly modified to obtain a more convenient formulation adapted to control theorists' needs. The Orbit Theorem is usually attributed to Nagano and Hermann, or to Nagano and Sussmann, because control theorists often work in the analytic context. They then usually drop the F-invariance, which is no longer necessary in the analytic case, in favor of involutivity. This can be a bit confusing for someone outside the field. Another important remark is that the original articles of Stefan and Sussmann are very difficult to go through, either because the notations are unusual (in Stefan [13]) or because the proof is very tedious (in Sussmann [10]). For all these reasons, the result of Stefan and Sussmann has not been fully acknowledged, adding more confusion for those who are not specialists in the field.
Further developments
Independently of this discussion, Balan proposed an alternative formulation of Stefan's local subintegrability conditions that is valid for any C^∞(M)-module of vector fields [16]. In the following, the family F will hence be considered as carrying a C^∞(M)-module structure. Balan understood that the flaw of the condition that a family F ⊂ X(M) be locally of finite type is that, given a vector field X ∈ F, the open set U on which the bracket with X is controlled depends on X. The same argument applies to Sussmann's conditions (4.2), where ε depends on X. This is precisely the reason why Example 5 works so well.
That is why Balan proposed to modify Sussmann's conditions into stronger ones, where ε does not depend on the choice of the vector field X ∈ F. Balan encoded this condition not directly with a parameter ε, but indirectly, with the help of some open subset. More precisely, Balan's integrability conditions can be stated as follows: for any x ∈ M, there exist X_1, ..., X_p ∈ F and some open set U ⊂ M containing x, such that X_1(x), ..., X_p(x) is a basis of D^F_x and, for every X ∈ F, there exist smooth functions (g_ij)_{1≤i,j≤p} ∈ C^∞(]−µ_X, µ_X[) such that:

[X, X_i](φ^X_t(x)) = Σ_{j=1}^p g_ij(t) X_j(φ^X_t(x)), for every t ∈ ]−µ_X, µ_X[,   (5.3)

where µ_X is the largest time for which the integral curve t ↦ φ^X_t(x) stays in U. With these conditions, Balan's result can be stated as follows:

Theorem 5. Balan (1994) Let M be a smooth manifold, and let F be a C^∞(M)-module of vector fields. Then the induced distribution D^F is integrable if and only if F satisfies conditions (5.3).

Proof. The proof is essentially an adaptation of Nagano's proof of integrability [6], using the same splitting F(x) ⊕ G(x) of the family F at the point x. Let B_ε be some ball of radius ε centered at the origin in F(x); then the exponential map exp: B_ε → M defines an embedded submanifold N_x of M. The core of the proof is to show that N_x is an integral manifold, which is done by showing that G(x) vanishes on N_x; in Nagano's setting, this follows from the analyticity of the objects. In Balan's paper, one has to use a different argument, since the objects are not analytic anymore. To do this, Balan singles out the conditions of integrability in Lemma 3.4 in [16]. In Nagano's proof, item 2. is a consequence of analyticity, and item 1. is a consequence of item 2. In the smooth case, Balan uses his conditions to show item 1., which is then necessary to prove item 2. But this last item is not necessary to prove that N_x is an integral manifold, since Equation (1.5) in Nagano's proof only requires the validity of item 1. To show the first item, Balan assumes without loss of generality that there exists a set of vector fields that satisfies his conditions and also induces a splitting in the sense of Nagano. He then uses this particular basis to show first the first item and then the second item of Lemma 3.4 in [16]. The notations used to describe the differential equation are not very clear, and one can find a clearer presentation in Lemma 6.1 of Sussmann's paper [10]. Finally, the second item of Lemma 3.4 in [16] is necessary to prove the 'only if' part of the theorem, that is: if D^F is integrable, then Balan's conditions (5.3) are satisfied.
Conclusion
We have shown in the preceding three sections that the road to a definitive answer to the problem of integrating distributions turns out to be incredibly flourishing and winding from the 1970s on. Many results turned out to be wrong, and many claims have inconclusive proofs. To this day, the main theorems that have been proven are: Nagano's theorem and Hermann's theorem in Section 3, Stefan-Sussmann's theorem in Section 4, and Stefan's theorem and Balan's theorem in Section 5. It is interesting that they involve different objects, such as sub-Lie algebras of X(M), spaces of sections of distributions, C^∞(M)-modules of vector fields, and even mere families of vector fields in Stefan-Sussmann's theorem.
Mathematicians can now choose the theorem that is best adapted to their needs. In particular, in control theory and in geometry, the most popular theorems are Nagano's and Hermann's. Hermann's influence on the field has to be emphasized: he was the founder of geometric control theory, and the first one to give results in the smooth and analytic cases, with a powerful argument, namely that the integral paths are the objects of interest when one attempts to integrate a distribution. In the same way, Stefan has to be acknowledged for his insights and his understanding of the topic, which enabled him to produce many new results on the question of integrability, most notably the local characterization of integrable smooth distributions. This is all the more important since he had a very short career, before his tragic death while climbing Mount Tryfan in 1978.
Another achievement of Stefan is the characterization of the leaves of a singular foliation with respect to the smooth structure on M [13], which is based on the usual definition of regular foliations [27]. Any regular foliation is characterized by a foliated atlas, which means that any of its charts is a saturated set: it is the union of disjoint connected submanifolds of a specific form that we call plaques. In the singular case, the definition is slightly modified, because transition functions cannot be defined in the same way as in the regular case. More precisely, given a smooth manifold M and a foliation on it, we say that M is equipped with a distinguished atlas if for any point x ∈ M, there exist an open neighborhood U and a diffeomorphism ϕ: U → V × W such that each leaf L meets U along a union of plaques, i.e. ϕ(L ∩ U) = V × ℓ_L for some subset ℓ_L ⊂ W. As an illustration of the limits of such descriptions, several different families of vector fields on R^2 (for instance those induced by various Lie algebra actions) give rise to one and the same foliation with two leaves: the point at the origin, and the punctured plane. It is shown in [17] that the holonomy groupoids corresponding to these various actions are drastically different.
This example shows that a family of vector fields contains more information than the distribution that it induces. The focus on the family of vector fields rather than on the distribution draws a link with the original motivation of geometric control theory: solving linear differential systems using tools from geometry, while considering that the vector fields are the main objects of interest. This is not a new idea: Nagano himself, for example, defines a linear differential system as the C^∞(M)-module generated by the sub-Lie algebra F ⊂ X(M) [6].
There has also been a shift in the kind of objects that are manipulated. Today, some geometers are more accustomed to manipulating modules or sheaves of vector fields than mere families of vector fields, as was typical of Stefan's and Sussmann's work. In the field of Poisson geometry, for example, there are various notions and definitions, but they all rely on this module property. A sub-module of compactly supported vector fields that is locally finitely generated and involutive is called a singular foliation in [17], or a Stefan-Sussmann foliation in [19]. A different formulation appears in [20]: a Hermann foliation is a sub-sheaf F: U ↦ F(U) of the sheaf of vector fields X that is locally finitely generated and closed under Lie bracket. Here, we say that a sheaf F is locally finitely generated if for any x ∈ M, there exists an open set U containing x such that F(U) is finitely generated as a C^∞(U)-module.
It has been shown that these two different notions are in one-to-one correspondence [21]. Thus, it would be useful to find a common denomination for these objects, which are equivalent but bear different names. In any case, Hermann's theorem implies that the distributions induced by either 'Stefan-Sussmann foliations' or 'Hermann foliations' are integrable; there is no need to use Stefan-Sussmann's theorem to show this result. As a historical note, in Hermann's original paper [4], the sub-Lie algebras F ⊂ X(M) that he studies are called foliations with singularities. This justifies using the term singular foliations for the equivalent notions of [17,19,20], as this term was originally used by Hermann precisely to designate those families of vector fields that are locally finitely generated and involutive.
"year": 2017,
"sha1": "c4fb0646e5a9af2ef9ccf3b02f5e50ccd03e1b84",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1710.01627",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c4fb0646e5a9af2ef9ccf3b02f5e50ccd03e1b84",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Acute Thoracic Syndrome in Children: Epidemiological, Diagnostic and Evolutionary Aspects at the Albert Royer National Children's Hospital in Dakar, Senegal
Acute chest syndrome (ACS) is a serious pulmonary complication of sickle cell disease. It is estimated to be responsible for a quarter of deaths in the pediatric sickle cell population. In Senegal, there are not enough pediatric studies in this area. The objective of our study was to determine the epidemiological, diagnostic and evolutionary characteristics of ACS at the Albert Royer National Children's Hospital (CHNEAR) in Dakar. This was a retrospective study of patients hospitalized at CHNEAR for ACS from January 1, 2021 to March 31, 2022. We included patients hospitalized and diagnosed with ACS. We collected 102 patients, i.e. a hospital incidence of 2.96%. The average age of the children was 9 years; the sex ratio was 1.04. The main symptoms on admission were hypoxemia (97.06%), chest pain (77.45%), dyspnea (77.45%) and fever (65.69%); 52.94% of patients had an associated vaso-occlusive crisis (VOC). The chest X-ray was abnormal in 92 patients, a rate of 90.20%, and showed images of pneumonia (71%), bronchitis (17.65%) and pleurisy (0.98%). None of the children benefited from a pulmonary ultrasound. The treatment combined analgesics (100%), broad-spectrum antibiotics (100%
Introduction
Sickle cell disease is a hereditary disease with autosomal transmission, clinically recessive and biologically codominant, characterized by the presence in the red blood cells of an abnormal hemoglobin called hemoglobin S. The latter is responsible for the sickling of the red blood cells in a situation of hypoxia [1]. It is one of the most common genetic diseases in France [2]. Acute chest syndrome (ACS) is one of the major complications of sickle cell disease. It is defined by the occurrence, in a sickle cell patient, of an acute respiratory attack, febrile or not, painful or not, associated with new pulmonary infiltrates on the chest X-ray [3]. Hypoxemia is not part of the definition, but is a predictor of unfavorable outcome [4]. This pathology is more common in the pediatric population, with a frequency that decreases with age, the peak incidence being between the ages of 2 and 4 years [5]. In Senegal, the absence of previous studies on acute chest syndrome and the fact that it constitutes a frequent reason for hospitalization in a pediatric setting motivated this work in the pediatric pulmonology and continuing care department of the Albert Royer children's hospital in Dakar, with the general objective of describing the epidemiological, diagnostic and evolutionary aspects of ACS at the Albert Royer children's hospital in Senegal. The specific objectives were to determine the incidence of ACS at the CHNEAR in Dakar, to identify the major signs of ACS and to specify the methods of management of ACS.
Methodology
The study was conducted at the Albert Royer National Children's Hospital (CHNEAR) in Dakar, Senegal. This was a retrospective study from January 1, 2021 to March 31, 2022, i.e. a duration of 15 months. It was descriptive and analytical, covering patients who were hospitalized for acute chest syndrome. We included hospitalized patients in whom the diagnosis of acute chest syndrome (ACS) was made, whether or not they were known to have sickle cell disease on admission, and whose file was available and usable. Any incomplete file was excluded from the study. Sociodemographic, clinical, paraclinical and evolutionary data were collected using a pre-established survey form filled out from patient files. The data collected were entered into the Epi Info V 7.2 software. The analysis was performed with Excel 2010 and SPSS version 22 software.
During the analysis, the qualitative variables were described by frequency tables and bar charts. Quantitative variables were described by their measures of position (mean, median, mode) and dispersion.
Hypoxemia was defined by a pulse oxygen saturation of less than 95% in ambient air.
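A minimal sketch of these descriptive computations is given below in Python, purely as an illustration: the actual analysis used Epi Info, Excel and SPSS, and the file name acs_cases.csv and the columns spo2 and age_months are hypothetical.

import pandas as pd

# Hypothetical extract of the survey forms, one row per included patient.
df = pd.read_csv("acs_cases.csv")

# Hypoxemia as defined above: pulse oxygen saturation < 95% in ambient air.
df["hypoxemia"] = df["spo2"] < 95

# Hospital incidence of ACS: cases over all hospitalizations of the period.
total_hospitalizations = 3451
incidence = 100 * len(df) / total_hospitalizations  # 102/3451 gives about 2.96%
print(f"Hospital incidence: {incidence:.2f}%")

# Position and dispersion parameters of a quantitative variable (age in months).
print(df["age_months"].agg(["mean", "median", "std", "min", "max"]))

# Frequency table of a qualitative variable, as a percentage.
print(df["hypoxemia"].value_counts(normalize=True) * 100)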
Epidemiological Aspects
A total of 3451 children were hospitalized during the study period; among them, 102 patients were hospitalized for ACS, i.e. a hospital incidence of 2.96%. The average age of the patients was 110 ± 54.6 months, with extremes of 12 and 204 months.
Clinical Aspects
93 patients were known to have sickle cell disease and were followed regularly. The baseline hemoglobin level was 7.59 ± 0.87 g/dl.
Sickle cell disease was diagnosed during hospitalization in 2 patients, i.e. 1.96%.
A history of corticosteroid therapy was found in 11.76% of patients (n = 12). Eighty-eight children (86.27%) had a family history of sickle cell disease.
Chest pain was present in 79 patients and dyspnea in 79 patients. Hypoxemia was found in 99 patients, a rate of 97.06% (Table 1). Pulmonary condensation was found in 72 patients (70.59%) during the pleuropulmonary examination, bronchial syndrome in 17 patients (16.67%) and pleural effusion in 1 patient (0.98%).
Abdominal pain was the main sign found during the abdominal examination, in 49 patients (48.04%), and splenomegaly was found in 14 patients (13.73%).
A vaso-occlusive crisis (VOC) was found in 52.94% (n = 54) of the patients during the osteo-articular examination.
Paraclinical Aspects
The average hemoglobin level was 6.97 ± 1.44 g/dl, with extremes of 2.2 and 10 g/dl.
The mean leukocyte count was 27,469 ± 21,885 elements/mm³. The mode and the median were respectively 24,000 elements/mm³ and 23,950 elements/mm³.
Ninety-two patients (90.20%) had a positive C-reactive protein (CRP), with an average level of 113 ± 92.73 mg/l and extremes of 5.2 and 350.2 mg/l. Blood culture was performed in 12 patients (11.76%), of which 3 came back positive for Staphylococcus aureus. The chest X-ray, performed in all patients, was normal in 10 patients (9.80%) and abnormal in 92 (90.20%), showing images of pneumonia (71%), bronchitis (17.65%) and pleurisy (0.98%).
A chest CT scan was performed in one patient (0.98%) and showed lesions in favor of SARS-CoV-2 infection.
None of the children benefited from a pulmonary ultrasound during their hospitalization.
Therapeutic Aspects
Analgesic treatment was prescribed in all patients, as well as antibiotic therapy and oxygen therapy. Seventy-five patients (73.53%) were transfused, of whom 97.33% (n = 73) received a simple transfusion and 2.67% (n = 2) an exchange transfusion (Figure 1).
Cefotaxime was used in 88 patients (87.13%) and macrolides were administered in 81 patients (80.20%) (Figure 2). Incentive spirometry and preventive heparin therapy were not performed in patients.
Evolutionary Aspects
The evolution was favorable in 97 patients (95.10%). Five children had an unfavorable outcome, including 4 deaths (3.92%) and one complication, a stroke (0.98%).
Causes of death:
- Two of the deaths were due to severe acute respiratory distress syndrome followed by cardiopulmonary arrest.
- One death was caused by septic shock with disseminated intravascular coagulation following sepsis with pulmonary localization due to SARS-CoV-2 (COVID-19).
- One death resulted from a stroke-type complication followed by a deep coma and then brain death; the patient was disconnected from the respirator.
The average length of hospitalization was 8 days, with extremes of 3 and 31 days.
Epidemiological Aspects
In our sample, 102 patients had ACS out of 3451 hospitalizations during the study period, an incidence of 2.96%. This rate is lower than the 10% to 20% of admissions reported by Miller and Gladwin in their study [6]. ACS is a frequent emergency in children but is underdiagnosed and is sometimes confused with a pulmonary infection or a thoracic vaso-occlusive crisis, which are its main causes [7]. Hemoglobin electrophoresis is not carried out systematically in Senegal, most parents do not know their status or that of their children, and some patients may have escaped enrolment. Owing to financial constraints, we would also have missed all those whose chest X-ray was normal at the beginning but who subsequently developed an ACS, since the radiological findings can change over time [5]. While it is true that the presence of new pulmonary infiltrates on the chest X-ray is the key element in defining the disease [8], some have asserted that a single normal chest X-ray does not exclude the diagnosis [5].
In our cohort, the age groups 9 - 10 years and 13 - 14 years were the most represented, each with a percentage of 10.78% (n = 11). The median age was 9 years.
A study conducted in Brussels by Bertholdt et al. [7] reports similar results, with a median of 8 years. The 0 - 5 age group represented 24.4% of the patients and the over-5 group 75.6%. Innocent et al. [9] in Nigeria report a relatively higher rate, with 53.3% of their patients under 5 years old.
The American cooperative study found an incidence which decreases with age, the peak incidence being between the ages of 2 and 4 years (25 per 100 person-years in this age group, falling to 8.8 per 100 person-years in adults) [10]. This is not consistent with what we found in our study. This difference can be explained by the fact that the diagnosis of sickle cell disease is often made late in our context and that ACS, when occurring in the population under 5 years old, is often confused with another pulmonary pathology, especially an infectious one, given the absence of early detection and the frequency of infectious pathologies in sub-Saharan Africa. Among the children, 6.86% (n = 7) were poorly followed. In Yaoundé, Nansseu et al. [11] reported that only 1.1% of their patients were poorly followed.
In fact, patients who are not regularly monitored may have episodes of ACS without coming to consult at the Albert Royer hospital, which would lead to an underestimate of the number of cases during enrolment.
1.96% of patients (n = 2) did not know their status at the time of admission, and the diagnosis of sickle cell disease was made during hospitalization. ACS can sometimes be the circumstance of discovery of sickle cell disease and, in this case, can quickly become life-threatening for the child [12].
Clinical Aspects
Hypoxemia was found in 99 patients, i.e. 97.06%. Innocent et al. in Nigeria [9] found that 100% of patients had a pulse oxygen saturation below 95%. Hypoxemia is not part of the definition, but is a predictor of unfavorable outcome [4].
This high frequency of hypoxemia in our children hospitalized for ACS is explained by the late presentation to care. Hypoxemia is a criterion of severity of acute chest syndrome, and its rapid and adequate management can promote a better clinical course.
Fever was present in 67 patients on admission, a rate of 65.69%. Nansseu et al. [11] in Cameroon found a much higher rate in their patients, i.e. 90.5%. However, Lebouc et al. [13] noted a lower rate of 49.3%. This frequency of fever may be linked to the extreme susceptibility of sickle cell patients to infections, especially to encapsulated germs. Chest pain was observed in 79 patients, or 77.45%. Bertholdt et al. [7] found a rate of 67%. On the other hand, Cissé et al. [14] and Nansseu et al. [11] found lower rates of 24% and 28.6%, respectively. Chest pain is a key element of the diagnosis and should prompt appropriate treatment to avoid serious complications.
Dyspnea was found in 79 patients in our series, i.e. 77.45%. Cissé et al. [14] observed broadly similar results, with a rate of 71.11%. Lebouc et al. [13] found dyspnea in 35.7% of patients in their study.
Respiratory distress and chest pain are part of the clinical picture of ACS, which explains the high frequency of dyspnea in our study.
The evaluation of respiratory distress must be rigorous in order to classify it and allow its rapid management. The latter consists of early oxygen therapy and an improvement in hemoglobin levels through transfusion, which improves oxygen transport to the tissues.
Pulmonary auscultation was abnormal in 90 patients, a rate of 88.24%. Lebouc et al. [13] in the West Indies found abnormal pulmonary auscultation in 79.4% of their patients.
Pulmonary condensation syndrome was found in 72 patients on admission, a rate of 70.59%. This was the main sign found during the pleuropulmonary examination.
Cissé et al. [14] found a rate of 64.44% of pulmonary consolidation. Lebouc et al. [13] in the West Indies found a much higher rate of 82.4%.
A bronchial obstruction syndrome was noted in 17 patients, or 16.67%. Tine et al. [15] in Senegal observed a rate of 13%. Indeed, bronchospasm is sometimes present (13%) during ACS [16].
Only one patient (0.98%) presented with a pleural effusion syndrome. Indeed, it has been described that parenchymal involvement is sometimes associated with pleural involvement in severe forms [17].
Paraclinical Aspects
The average hemoglobin level was 6.97 ± 1.44 g/dl, with extremes of 2.2 and 10 g/dl.
The mode and median were 6.2 and 7 g/dl, respectively. Our results are comparable to those of Douamba et al. [16], who, in their study in Burkina Faso, found an average hemoglobin level of 6.7 g/dl with extremes ranging from 2.5 g/dl to 10 g/dl. Nansseu et al. [11] had similar results, with an average of 6.48 g/dl, but lower than those of Bertholdt et al. [7], in whom the average level was 8.3 g/dl. Indeed, anemia is constant in sickle cell patients due to chronic hemolysis, which can worsen in acute situations. The management of anemia in case of ACS may be necessary for a good improvement in oxygen transport.
Hyperleukocytosis was noted in 98 patients, i.e. 96.08%. The mean leukocyte count was 27,469 ± 21,885 elements/mm³, with extremes of 6,630 and 196,200 elements/mm³. The mode and the median were respectively 24,000 elements/mm³ and 23,950 elements/mm³. Cissé et al. [14] in Mali found similar results, with hyperleukocytosis in 97.77% of patients and an average of 31,856 leukocytes/mm³ with extremes ranging from 11,500 to 78,300/mm³. Nansseu et al. [11] noted an average of 32,479.4/mm³ with extremes ranging from 10,600 to 73,900/mm³. This may suggest that infection can initiate or precipitate the development of ACS in our patients, thus validating previous reports [7] [10].
Ninety-two patients (90.20%) had a positive CRP, with an average level of 113 ± 92.73 mg/l and extremes of 4.10 and 350.2 mg/l. Nansseu et al. [11] found values higher than ours, with an average CRP of 228.4 mg/l and extremes ranging from 4.5 to 432 mg/l, whereas Lebouc et al. [13] found a lower average CRP of 88 mg/l. The increase in CRP is therefore almost constant during ACS and is not synonymous with bacterial infection [17].
Blood cultures were performed in 12 patients (11.76%) and only 3 came back positive, all for Staphylococcus aureus. Lebouc et al. [13] found similar results, with positive blood cultures in 29.9% of cases. However, Nansseu et al. [11] in Cameroon performed blood cultures in 47.6% of patients but no germ was identified.
The blood cultures carried out isolated Staphylococcus aureus on 3 occasions; it is indeed the germ most frequently found in nosocomial infections, on catheter material and in osteomyelitis in patients with sickle cell disease [18] [19]. Lebouc et al. [13] confirm this by reporting in their study that the most common pathogen in their patients was coagulase-negative staphylococcus.
The chest X-ray was abnormal in 92 patients, a rate of 90.20%. Lebouc et al. [13] noted a higher rate, highlighting abnormal chest X-rays in 94.1% of cases. Douamba et al. [16] in Burkina Faso, on the other hand, observed a lower rate, with 76.5% of anomalies.
This difference could be explained by the lag of the radiological signs behind the clinical picture. Repetition of chest X-rays may be necessary, but is sometimes hampered by the lack of financial resources. A normal chest X-ray does not exclude the diagnosis of ACS [5].
No child benefited from a pulmonary ultrasound, although the latter is very useful in the early diagnosis of acute chest syndrome. It can be performed at the patient's bedside, is less irradiating than the chest X-ray and allows early signs to be sought, such as consolidation predominating at the lung bases with air bronchogram; pleural effusion can also be associated [20].
Therapeutic Aspects
In accordance with the literature, our management of ACS included careful hydration, respecting daily fluid needs, analgesics, broad-spectrum antibiotic therapy, oxygen supplementation and transfusion.
Antibiotic therapy was used in all patients (100%). Third-generation cephalosporins and macrolides were the most frequently used antibiotics, administered in 88 patients (87.13%) and 81 patients (80.20%), respectively; the two were combined in 80.20% of patients.
In fact, in the case of ACS, broad-spectrum antibiotic therapy active against intracellular germs and pneumococcus (macrolides and cefotaxime) must be adopted [3]. In Senegal, the absence of full medico-social coverage for children with sickle cell disease explains why several of the children followed do not have good vaccination coverage against encapsulated germs; Senegal's expanded immunization program does not cover non-compulsory vaccines.
Oxygen therapy was used in all patients, 13 of whom received oxygen via nasal cannula. This reflects the fact that children often arrive late in the emergency room, at a stage of frequently profound hypoxemia, which is linked to the low socioeconomic level of most children followed for sickle cell disease. None of our patients benefited from incentive spirometry. It should have been performed in all patients over 5 years of age hospitalized for vaso-occlusive crisis or for early ACS, in order to limit hypoventilation and the worsening of severe forms. The absence of physiotherapists and the non-availability of this device explain why it was not prescribed.
Seventy-five patients (n = 75) were transfused, a rate of 73.53%; 97.33% of them (n = 73) received a simple transfusion and 2.67% (n = 2) an exchange transfusion. This is explained by the fact that the majority of children arrive with a hemoglobin level that has dropped by at least 2 points at the time of diagnosis. Transfusion can improve oxygen transport and is very beneficial in children hospitalized for ACS [21].
Almost all of the patients, 95.10% (n = 97), had a favorable evolution. However, five children (4.90%) had an unfavorable outcome, including 1 case of cerebrovascular accident (0.98%) and 4 deaths (3.92%), which is consistent with the 4% mortality obtained by Bertholdt et al. [10] in Brussels. Nansseu et al. [11] in Cameroon noted similar results, with a mortality rate of 4.8%.
Two patients died in a context of severe acute respiratory distress followed by cardiopulmonary arrest. The first death was caused by septic shock with disseminated intravascular coagulation (DIC) following SARS-CoV-2 sepsis with pulmonary localization. The second death followed a stroke-type complication leading to deep coma and brain death.
The duration of hospitalization averaged 8 days, with extremes ranging from 3 to 31 days, which is comparable to the 7 days reported by Bertholdt S et al. [7] but slightly longer than the 5.4 days reported by Vychinski et al. [22]. Hunald et al. [23] reported a longer average hospital stay of 10 days.
The introduction of other supportive care into our practice, such as incentive spirometry as used elsewhere [6], as well as the systematic supply of oxygen and early blood transfusion, could considerably reduce the hospital stay.
Conclusion
Acute chest syndrome is the second cause of hospitalization and the first cause of death among sickle cell patients in Senegal. Generalized neonatal screening must be implemented in order to allow early diagnosis and treatment of children with sickle cell disease. For better management adapted to our contexts, other multicenter studies would be necessary in order to clearly describe the etiological factors associated with ACS in our populations.
Figure 2. Distribution of patients by type of antibiotic (N = 102).
Table 1. Distribution of patients by general examination results (N = 102). | 2023-10-11T15:03:02.660Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "f8c7320cb509fb24b7017d496ddbcf092c7c9280",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=128194",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7248ecbcbe2078ad5764c508d1b4564968c5a519",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
1895748 | pes2o/s2orc | v3-fos-license | Using Selectively Applied Accelerated Molecular Dynamics to Enhance Free Energy Calculations
Accelerated molecular dynamics (aMD) has been shown to enhance conformational space sampling relative to classical molecular dynamics; however, the exponential reweighting of aMD trajectories, which is necessary for the calculation of free energies relating to the classical system, is oftentimes problematic, especially for systems larger than small polypeptides. Here, we propose a method of accelerating only the degrees of freedom most pertinent to sampling, thereby reducing the total acceleration added to the system and improving the convergence of calculated ensemble averages, which we term selective aMD. Its application is highlighted in two biomolecular cases. First, the model system alanine dipeptide is simulated with classical MD, all-dihedral aMD, and selective aMD, and these results are compared to the infinite sampling limit as calculated with metadynamics. We show that both forms of aMD enhance the convergence of the underlying free energy landscape by 5-fold relative to classical MD; however, selective aMD can produce improved statistics over all-dihedral aMD due to the improved reweighting. Then we focus on the pharmaceutically relevant case of computing the free energy of the decoupling of oseltamivir in the active site of neuraminidase. Results show that selective aMD greatly reduces the cost of this alchemical free energy transformation, whereas all-dihedral aMD produces unreliable free energy estimates.
Introduction
Molecular dynamics (MD) simulations have become a crucial theoretical tool in advancing our understanding of the function of biological macromolecules. 1 Advances in algorithms 2 and computing power [3][4][5] continue to allow for simulations of increasingly larger systems on longer and longer time scales, permitting the direct observation of all-atom protein folding, 6,7 the observation of ion permeation through a transmembrane channel, 8 and the simulation of a complete virus. 9 Despite the remarkable progress that has been made in the field, simulation times still often fall far short of the microsecond to millisecond time scales inherent in many biological processes. There have been several methodological advances which have aimed at simulating longer time scales within current computational power, such as implicit solvation models, 10 multiple time stepping algorithms, 11 and improved treatment of long-range electrostatics. 12 Sampling of phase space may also be enhanced through the deformation of the underlying potential energy surface, 13 as has been done in hyperdynamics, 14 puddle jumping, 15 conformational flooding, 16 and the local boost method 17 (to name only a few). Our group recently developed a method to enhance the crossing of barriers and the sampling of phase space termed accelerated molecular dynamics (aMD), 18 which has been shown to enhance sampling of biomolecular systems as in the conformational switching of Ras, 19 improve the agreement between experimental and calculated chemical shifts for IκBα, 20 and accelerate the calculation of pKa values in lysozyme. 21 An area of particular interest concerns using aMD simulations to calculate ensemble averages for physically relevant nonaccelerated systems. For systems in which the energy added to accelerate the system is low, trajectories may easily be reweighted; however, as the system size increases the boost energy required for significant acceleration increases, causing the exponential reweighting factor to produce ensemble averages that are dominated by only a few configurations with high weight, thereby decreasing the precision of thermodynamic quantities such as free energy changes. 22 Here, we propose limiting the acceleration to the degrees of freedom most responsible for conformational changes, thereby reducing the energy added to a system to enhance sampling and resulting in improved reweighting statistics. This work is an extension of a previous study in which aMD was limited to the dihedrals in the backbone of a peptide substrate bound to cyclophilin; 23 however, here we demonstrate that accelerated dihedrals may contain atoms which are also contained in nonaccelerated torsions (that is, individual molecules may contain both accelerated and nonaccelerated torsions). Two examples are highlighted. First, we show that even in the case of the model system alanine dipeptide, selectively targeting dihedrals in the molecule's backbone results in similar acceleration levels while reducing the amount of energy added to the system. Then we turn our attention to the larger problem of calculating the binding energy of the clinically approved drug oseltamivir (marketed as Tamiflu by Roche Pharmaceuticals, Basel, Switzerland 24 ) to the N1 flu protein neuraminidase. 25,26
Using free energy perturbation with a modification to the Bennett acceptance ratio (to account for reweighting), 27 we show that the computational cost required to accurately calculate the binding energy may be reduced by as much as 70% while maintaining a similar level of precision.
Theory
To enhance phase space sampling, the original aMD applies an additional potential only when the potential energy, V(r), is below a specified criterion E, to produce dynamics on the artificial landscape V*(r) such that

$$V^{*}(\mathbf{r}) = \begin{cases} V(\mathbf{r}) & V(\mathbf{r}) \ge E \\ V(\mathbf{r}) + \Delta V(\mathbf{r}) & V(\mathbf{r}) < E \end{cases}$$

The form of the "boost" potential ΔV(r) is defined as

$$\Delta V(\mathbf{r}) = \frac{\left(E - V(\mathbf{r})\right)^{2}}{\alpha + E - V(\mathbf{r})}$$

The formalism of this boost potential has several practical advantages: it produces a potential energy surface with a smooth first derivative, it does not require the definition of a "reaction coordinate" along which to enhance sampling, it reflects the shape of the original potential, and it is relatively simple, with only two adjustable parameters (E and α).
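As a concrete illustration of these two equations, the short Python sketch below evaluates the boost potential, the modified landscape, and the per-frame reweighting factors. The function names, the example energies, and the kT value are illustrative choices, not part of the published method.

```python
import numpy as np

KT = 0.593  # kcal/mol near 300 K (illustrative choice)

def boost_potential(V, E, alpha):
    """dV(r) = (E - V)^2 / (alpha + E - V), applied only where V < E."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)

def modified_potential(V, E, alpha):
    """Landscape actually sampled by aMD: V*(r) = V(r) + dV(r)."""
    return np.asarray(V, dtype=float) + boost_potential(V, E, alpha)

def reweighting_factors(V, E, alpha, kT=KT):
    """Per-frame weights exp(dV / kT) used to recover canonical averages."""
    return np.exp(boost_potential(V, E, alpha) / kT)

# Example: dihedral energies (kcal/mol) with E = 8 and alpha = 4, the
# all-dihedral parameter values optimized later in the paper.
V_traj = np.array([2.0, 5.5, 7.9, 9.3])
print(boost_potential(V_traj, E=8.0, alpha=4.0))     # zero where V >= E
print(reweighting_factors(V_traj, E=8.0, alpha=4.0))
```

Because ΔV and its first derivative both vanish as V(r) approaches E from below, the forces derived from V*(r) remain continuous, which is the first of the practical advantages listed above.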
Simulation results generated with aMD may be reweighted by the exponential of the boost potential, exp(βΔV(r)), to recover theoretically exact thermodynamic properties for the physically relevant unaccelerated system. In practice, however, the exponential dependence of the reweighting hinders convergence, as trajectory averages become dominated by a smaller subset of their snapshots as the range of boost potentials increases. While this does not severely affect small systems such as alanine dipeptide, it does prevent the use of aMD in accurate free energy calculations in larger biomolecular systems. To improve the reweighting statistics and enhance sampling, several variants of aMD have been developed, including "barrier lowering" aMD, 28 replica-exchange aMD, 29 and adaptive aMD (personal communication, P. Markwick). In this paper we discuss an extension to aMD which may be incorporated into other aMD implementations: to selectively accelerate a user-defined subset of dihedrals most pertinent to sampling the relevant degrees of freedom, which we refer to as selective aMD. Selective aMD has the advantage that, by only accelerating the degrees of freedom most important to sampling, lower overall boosts may be utilized to achieve a similar acceleration level, thus resulting in improved reweighting statistics. The idea of enhancing sampling along a user-defined manifold has previously been shown to improve the calculation of time-correlation functions for the kinetics of multidimensional systems. 30 2.1. Weighted Bennett Acceptance Ratio. Free energy perturbation (FEP) is a well-established technique used in free-energy calculations, specifically in the case of ligand binding and computational alchemy. 31,13 In FEP, a nonphysical energy pathway is constructed between two physical end states, for example, a ligand bound in the active site of an enzyme (which we denote as λ = 0) and an active site without the ligand (λ = 1). The path between these two states is divided into a series of "windows" in which the Hamiltonian is transformed from state 0 to 1. Traditionally, free energy differences between successive windows are estimated by exponentially averaging the instantaneous work of going between the states, and the overall free energy is a sum of the free energy differences between windows. 32 Shirts et al. showed that the Bennett acceptance ratio (BAR) is superior to exponential averaging in producing asymptotically unbiased free energy estimates between two states and can improve precision by an order of magnitude. 27,33 For a series of work functions between two states in which the individual works do not all carry the same weight (as in aMD), the derivation of a weighted BAR follows that in Shirts et al., with the exception that their eqs 5 and 6, the probability of a single measurement of the work W_i for the forward and reverse work functions, are modified to carry the configurational weights w_i = exp(βΔV(r_i)), with the constant M redefined as

$$M = kT \ln\!\left(\frac{\sum_{i \in F} w_i}{\sum_{j \in R} w_j}\right)$$

Therefore, the value of ΔF that solves the resulting weighted acceptance-ratio equation is the optimal free energy estimate between adjacent windows. It has recently been shown that reweighting of states in BAR to account for non-Boltzmann sampling may have practical advantages outside of aMD simulations. 34
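To sketch how the weighted acceptance-ratio condition can be solved numerically, the snippet below inserts per-sample weights into the standard maximum-likelihood BAR equation. Because the exact equations of the original derivation are not reproduced in the text above, this should be read as one plausible implementation under that assumption; the sign conventions, the bracketing interval, and the synthetic work values are ours.

```python
import numpy as np
from scipy.optimize import brentq

def fermi(x):
    """f(x) = 1/(1 + exp(x)); argument clipped for numerical safety."""
    return 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))

def weighted_bar(W_f, w_f, W_r, w_r, kT=0.593):
    """Weighted BAR estimate of dF between two adjacent lambda windows.

    W_f : forward works (window 0 -> 1), sampled in window 0
    W_r : reverse works (window 1 -> 0), sampled in window 1
    w_f, w_r : per-sample aMD weights exp(dV/kT); all ones gives plain BAR
    """
    beta = 1.0 / kT
    # Weighted analogue of M = kT ln(n_F / n_R): sample counts are replaced
    # by summed weights (kept dimensionless inside the Fermi argument).
    M = np.log(np.sum(w_f) / np.sum(w_r))

    def residual(dF):  # monotonically increasing in dF
        lhs = np.sum(w_f * fermi(M + beta * (W_f - dF)))
        rhs = np.sum(w_r * fermi(-M + beta * (W_r + dF)))
        return lhs - rhs

    span = 50.0 * kT
    lo = min(W_f.min(), -W_r.max()) - span
    hi = max(W_f.max(), -W_r.min()) + span
    return brentq(residual, lo, hi)

# Synthetic example; with uniform weights this reduces to standard BAR.
rng = np.random.default_rng(0)
W_f, W_r = rng.normal(2.0, 1.0, 500), rng.normal(-1.0, 1.0, 500)
print(weighted_bar(W_f, np.ones(500), W_r, np.ones(500)))
```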
Computational Details
Molecular Dynamics Details. MD simulations were performed with the MD package Desmond (version 2.2) developed by D. E. Shaw Research. 35 Both systems were built, solvated, and ionized with Schrödinger's Maestro modeling suite such that there was a minimum of 12 Å of TIP3P 36 water buffer between the macromolecule and the periodic boundary and an ionic concentration of ∼150 mM NaCl was present. The CHARMM22 force field with the CMAP correction was utilized (except where noted below in the neuraminidase calculations). 37 Following 10 000 steps of minimization, systems were continuously heated to 300 K over 1.5 ps. All simulations used the Martyna-Tobias-Klein constant pressure and temperature algorithms (a combination of Nosé-Hoover constant temperature and piston constant pressure algorithms) 38,39 with a reference temperature and pressure of 300 K and 1.01425 bar, respectively. Short-range nonbonded interactions were truncated at 12 Å, while long-range electrostatics were calculated with a particle-mesh Ewald algorithm using a sixth-order B-spline for interpolation and a grid spacing of <1 Å in each dimension. 40 A time step of 2 fs was employed, and the M-SHAKE algorithm was used for constraining all hydrogen-containing bonds. 41 A plugin was written for Desmond to perform aMD calculations on specified dihedrals.
Alanine Dipeptide. An alanine dipeptide molecule based on a model compound was solvated in a (27 Å)³ box using Maestro and equilibrated for 5 ns following heating. Two sets of 50 ns aMD trajectories were run, one in which all dihedrals were accelerated with aMD and one in which only the two dihedrals defined as φ and ψ were accelerated (selective aMD, see Figure 1). For each setup, 16 simulations spanning the parameter space of E and α were run to optimize these parameters. For the all-dihedral simulations, parameters of E = 8 and α = 4 had an optimal fit to our metric (as discussed below), whereas for the selective aMD E = 1 and α = 0.75 were optimal. Additionally, a 250 ns classical MD simulation was performed.
To determine the accuracy of the aMD results, well-tempered metadynamics was performed to calculate the underlying two-dimensional free energy landscape in φ/ψ space. 42 In well-tempered metadynamics, the height of a Gaussian centered at position x is proportional to the Boltzmann weight of the metadynamics potential already present at x; that is, the added Gaussian has a maximum value of ω₀ · exp(−V_t(x)/kΔT), with ω₀ being the initial Gaussian height, V_t(x) the metadynamics potential at x, and kΔT a user-defined energy which limits the explored energy range. We performed a 50 ns simulation in which Gaussians were added every 0.2 ps with a width of 0.1 radians and a height determined by ω₀ = 0.02 kcal/mol and kΔT = 2.4 kcal/mol. To quantitate convergence we define a metric, denoted here as Γ, as follows:

$$\Gamma = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{M} \sum_{U} \min\!\left( \frac{P_{i,U}^{\mathrm{aMD}}}{P_{i,U}^{\mathrm{exact}}},\; \frac{P_{i,U}^{\mathrm{exact}}}{P_{i,U}^{\mathrm{aMD}}} \right)$$

For a given well (defined as i in Figure 2f) this metric calculates the ratio of the population of states below the energy U (P_{i,U}^aMD) to that expected from the metadynamics results (P_{i,U}^exact) and averages this over M energy values of U from 0.6 to 3 kcal/mol (in increments of 0.1 kcal/mol) for each of the N = 3 wells. If a well has a population greater than that expected from the metadynamics result, the inverse of the ratio is taken to equally account for over- and undersampling of the well. This metric has several advantageous features: by averaging over multiple energy levels it selects for smooth population densities (as are observed in cMD and metadynamics but may not result when the trajectory is reweighted with aMD), it treats over- and undersampling a well as equally poor, and it equally weights all three of the main energy wells. A value of Γ = 1 is considered an ideal reproduction of the exact population.

N1 Neuraminidase. A monomer of neuraminidase bound to the inhibitor oseltamivir taken from the 2HU0 crystal structure was solvated in an approximately (70 Å)³ box. Following the minimization and heating protocol outlined above, the system was equilibrated for 5 ns with the AMBER99SB force field 43 before protein and water parameters were changed to the CHARMM22 force field and further equilibrated for 1 ns. Parameters for oseltamivir were previously developed for use with the AMBER99SB force field 44 and maintained throughout the CHARMM simulations. The switch to the CHARMM force field was performed after testing of aMD free energy calculations revealed that increased acceleration levels in the AMBER force field tended to disturb the electrostatic components of the free energy calculation; therefore, this hybrid AMBER (for the ligand) and CHARMM (for the remainder of the system) force field was utilized. While the authors concede this may produce incorrect absolute binding energies, the goal of this paper is to study the convergence of accelerated free energy calculations to results obtained from unaccelerated (and longer) calculations.
[Figure 1 caption] Conformation of the model system alanine dipeptide can be expressed by the angles of the two torsions φ and ψ.

Alchemical free energy calculations for the decoupling of the ligand in the protein's active site were performed using 21 windows, in which the electrostatics were decoupled over 10 windows followed by the Lennard-Jones interactions decoupled over 10 windows with a softcore potential using α = 0.5 45 (as in a previous study 46 ). Free energies were calculated using BAR, with the modified BAR formulas described above used in the aMD calculations. A positional restraint of 0.8 kcal/mol was placed on a central carbon atom in oseltamivir to prevent the ligand from sampling non-active-site portions of the simulation box, 47 and calculations were performed for cMD, all-dihedral aMD (with E = 2600 and α = 400), and selective aMD (with E = 13 and α = 2). For cMD calculations, three sets of windows were run (as has been shown to improve calculated free energies 48 ), each with the same initial coordinates but different velocities, for 5 ns per window, whereas for the selective aMD the same three sets of windows were run with 200 ps of all-dihedral aMD (to quickly equilibrate the whole protein) followed by 1.5 ns of selective aMD per window. Note that the times indicated in the text include the simulation time spent in all-dihedral aMD; thus, a time of 500 ps represents 200 ps of all-dihedral and 300 ps of selective aMD. One set of FEP calculations was performed for the all-dihedral case for 1.75 ns per window to illustrate the futility of using standard aMD in large-scale biomolecular FEP calculations. The choice of dihedrals to accelerate was based on previous work in which the tetramer was simulated for 100 ns. 46 Acceleration was applied to those dihedrals which contained only heavy atoms, were in residues that had a heavy atom within 5 Å of the oseltamivir in the crystal structure, and had a multimodal distribution, for a total of 29 dihedrals.
[Table 1, footnote a] For each of the three wells (Figure 2f) the minimum energy relative to the global minimum is calculated, as is the population of states within 1.8 kcal/mol of that minimum.

[Figure 3 caption] Results show that both aMD forms converge on the order of five times as fast as the cMD simulations.

Work functions were decorrelated based on the statistical inefficiency using code provided by Shirts and Chodera. 49 For each BAR calculation, a bootstrap analysis was performed (with 50 independent calculations) to obtain an error σ_B, which was combined with the variance of the three means (σ_V) to calculate an overall error estimate for the free energy as

$$\sigma = \sqrt{\sigma_B^{2} + \sigma_V^{2}}$$
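A compact sketch of this error estimate follows. The `bar_estimator` callable stands in for any BAR solver (for instance, a weighted one as sketched earlier); averaging the per-run bootstrap errors before combining them in quadrature with the run-to-run variance is our assumption about how the two terms enter.

```python
import numpy as np

def bootstrap_sigma(bar_estimator, W_f, w_f, W_r, w_r, n_boot=50, seed=0):
    """sigma_B: standard deviation of dF over bootstrap resamples of the
    (decorrelated) forward and reverse work sets."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        i = rng.integers(0, len(W_f), size=len(W_f))
        j = rng.integers(0, len(W_r), size=len(W_r))
        estimates.append(bar_estimator(W_f[i], w_f[i], W_r[j], w_r[j]))
    return float(np.std(estimates, ddof=1))

def combined_sigma(sigma_B_per_run, dF_per_run):
    """sigma = sqrt(sigma_B^2 + sigma_V^2): mean bootstrap error combined in
    quadrature with the variance of the independent run means."""
    sigma_B = float(np.mean(sigma_B_per_run))
    sigma_V = float(np.std(dF_per_run, ddof=1))
    return float(np.sqrt(sigma_B ** 2 + sigma_V ** 2))
```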
Results
Alanine Dipeptide. The free energy landscape of alanine dipeptide can be described by the rotation of two torsional angles, φ and ψ, making it an ideal, extensively studied model system for methodological development (Figure 1). The metadynamics, or "infinite sampling limit", results (Figure 2a) show three distinct energy wells, which we label for further discussion in Figure 2f. The energy barrier between wells 1 and 2 is relatively low, and classical MD (cMD) simulations sample both sets of configurations on the 10 ns time scale (Figure 2b). Well 3 is substantially oversampled, which we attribute to the system becoming trapped in this state due to the higher energy barrier, thereby discouraging transitions to and from well 3. With 50 ns of simulation the sampling of all three wells is improved (Figure 2c). Further detail is shown in Table 1, which compares the minimum energy of each well and the probability of all states within 1.8 kcal/mol of that minimum to the theoretically exact answer derived from metadynamics. The 10 ns cMD shows good agreement for well 2 (it is identified as the global minimum, and the population at 3kT is nearly identical to the metadynamics results); however, well 1 is undersampled by 39% whereas well 3 is oversampled by 292%, and the minimum energies are incorrect by 0.4 and 0.7 kcal/mol, respectively. With 50 ns of simulation time the sampling improves such that errors in the populations range from 11% to 20%, and by 250 ns of sampling the statistics agree much better with the metadynamics results for all three wells, although the populations of wells 1 and 3 are still off by >10%.
For comparison, the all-dihedral and selective aMD simulations sampled all three energy wells on the 10 ns time scale (Figure 2d and 2e). Free energy statistics indicate a maximum error in the minimum well energy estimate of 0.24 kcal/mol in the all-dihedral case and 0.06 kcal/mol in the selective aMD simulation, whereas the greatest disagreement in well populations was an undersampling of well 1 by 17% in the all-dihedral aMD and an undersampling of well 3 by 16% in the selective aMD. Extension of the simulations to 50 ns results in further improved statistics of the well populations, with well 1 only being undersampled by 8% in the all-dihedral simulation and by 6% in the selective aMD.
To further examine the convergence of the free energy statistics, we computed the parameter Γ to quantitate the difference in the two-dimensional energy profiles (as discussed in the Methods), which has the property that a value of 1 represents ideal sampling of the wells as compared to the metadynamics results. In Figure 3 we compare the time course of this parameter between the cMD and the aMD simulations (note the different time scales for the two sets of simulations). The Γ scores for both aMD variants are similar to those for cMD simulations of five times the length, with cMD simulations requiring 200 ns before consistently having values above 0.9, whereas the aMD simulations pass this value at 44 and 35 ns for all-dihedral and selective aMD, respectively.
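As an illustration of how Γ can be evaluated once the reweighted well populations have been binned, a minimal sketch follows. The array shapes mirror the definition in the Methods (N = 3 wells, M = 25 energy thresholds from 0.6 to 3.0 kcal/mol); the example populations are synthetic.

```python
import numpy as np

def gamma_metric(P_amd, P_exact):
    """Convergence metric Gamma: inputs are (N_wells, M_levels) arrays of
    well populations below each energy threshold U. Ratios above 1 are
    inverted so over- and undersampling are penalized equally."""
    ratio = np.asarray(P_amd, float) / np.asarray(P_exact, float)
    ratio = np.minimum(ratio, 1.0 / ratio)
    return float(ratio.mean())

# Synthetic example: populations within ~25% of the metadynamics reference.
rng = np.random.default_rng(0)
P_exact = rng.uniform(0.05, 0.40, size=(3, 25))
P_amd = P_exact * rng.uniform(0.80, 1.25, size=(3, 25))
print(gamma_metric(P_amd, P_exact))  # approaches 1 as sampling converges
```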
A comparison of the boosts applied throughout the aMD simulations (Figure 4a) shows that higher boost potentials are applied throughout the all-dihedral simulation than in the selective aMD. For the all-dihedral case, ΔV varies from 0 to 4.2 kcal/mol, with an average value of 0.45 kcal/mol, whereas the selective aMD has a range of 0 to 0.54 kcal/mol with an average of 0.11 kcal/mol. This decrease in the applied boosts has a significant impact on reweighting, as the maximum weight relative to an unaccelerated state is reduced from 1097 to 2.46. In Figure 4b the amount of total weight recovered from a simulation is plotted against the percentage of frames that contribute to that weight. In the case of all-dihedral aMD, very few frames contribute a substantial portion of the reweighting: 50% of the total weight comes from 4.8% of the trajectory, whereas 90% of the weight comes from 53.7% of the trajectory. This results in almost one-half of the sampling (46.3%) contributing very little (less than 10%) to the calculated ensemble averages. In the selective aMD case the lower boosts result in more uniform weights: 50% of the weight comes from 37.9% of the trajectory and 90% from 87.6% of the trajectory (for comparison, in cMD the configurations in the trajectory are uniformly weighted, so 50% of the weight comes from 50% of the trajectory). This increased reweighting efficiency improves the recovered statistics, as high-boost configurations tend to dominate the ensemble average. For example, wells 1 and 2 appear significantly smoother in the energy landscape of the selective aMD relative to the all-dihedral aMD (Figure 2e and 2d), and Γ is consistently higher in the selective case, both of which can be attributed to smoother statistics in the reweighted energy profiles.

N1 Neuraminidase. The binding of oseltamivir to neuraminidase is an example of how highly accurate free energy calculations may be employed in the study and development of novel pharmaceutical compounds. In order to validate our aMD simulations of the decoupling of oseltamivir in the N1 active site, we performed extensive sampling at each step of the alchemical transformation with 5 ns of cMD simulation for each of the 21 λ values, which was repeated three times (with different initial velocities). An equilibration period is typically discarded from the BAR calculations, the length of which is determined by several factors, including molecular rigidity, the slow motions of loops which are relevant to free energy differences, and the amount of computational time available. 50 In Figure 5a we show the time evolution of the mean free energies for three equilibration times, 1000, 2000, and 3000 ps, along with the associated errors. The 3000 ps equilibration curve is the only one which does not have a mean that changes appreciably with increased sampling, suggesting that for this initial configuration of this complex a full 3 ns per window is required for each of the λ windows to equilibrate to their new Hamiltonians. The free energy of 66.9 ± 1.2 kcal/mol is calculated by using 3 ns for equilibration and the remaining 2 ns for sampling, as shown in Table 2. Note that this is not the free energy of binding in solution; rather, it is only one leg of the thermodynamic cycle required for that calculation. Examination of individual BAR runs shows that increased sampling decreases both the variance between the three runs and the bootstrap errors associated with each (Figure S1, Supporting Information).
BAR results from simulations using selective aMD (with the first 200 ps utilizing all-dihedral aMD) are presented in Figure 5b. As in the cMD calculations, we have chosen three equilibration times, 250, 500, and 750 ps, and the mean value of the free energy does not remain constant with increased sampling unless the longest of these equilibration times is discarded. With 750 ps of equilibration and 750 ps of sampling we obtain a free energy value of 67.3 ± 1.5 kcal/mol, identical (within error) to that from the longer cMD calculations. Results from individual aMD runs are shown in Figure S2, Supporting Information. A comparison of the time evolutions of the BAR results shows similar behavior for the aMD and cMD free energies (with the aMD on shorter time scales) for the short, medium, and long equilibration times (Table 2 and Figure S3, Supporting Information). For short equilibration periods (cMD, 1000 ps; aMD, 250 ps) the free energy is initially overestimated, and while it approaches the values obtained for longer equilibration, the bias introduced in this nonequilibrated period results in free energies ∼1.5 kcal/mol too high. Medium-length equilibration periods (cMD, 2000 ps; aMD, 500 ps) suffer from this effect as well but are not quite as biased, whereas for long equilibration times (cMD, 3000 ps; aMD, 750 ps) the calculated free energies remain stable (within error) with increased sampling.
As a comparison, we also performed BAR calculations on a single set of windows run for 1750 ps with all-dihedral aMD (Figure S4, Supporting Information). The much larger range of weights resulted in very few configurations contributing to the BAR results and poor free energy estimates.
Concluding Discussions
The method of accelerated molecular dynamics has been well established as a means of enhancing phase space sampling with minimal computational cost; however, the exponential reweighting required for the recovery of ensemble averages for the unaccelerated case introduces excessive noise, such that it is often difficult, if not impossible, to recover accurate ensemble averages. Even in the case of the well-studied system alanine dipeptide this becomes evident. For example, in Figure 2d the free energy profile of well 1 appears discontinuous in the region of (φ, ψ) = (−50, 150), due in large part to the fact that several of the conformations visited have weights 1-3 orders of magnitude below that of the maximum-weighted conformation. In contrast, by limiting the acceleration to only those dihedrals which are most pertinent to phase space sampling (in this case, φ and ψ), the maximum weight was reduced from 1097 to 2.46, which not only increased the smoothness of the recovered free energy profile (Figure 2e) but also moderately improved the agreement with the infinite-sampling-limit (as calculated with metadynamics) results in Table 1 and Figure 3. However, both aMD formalisms showed approximately a 5-fold increase in efficiency relative to cMD for our order parameter Γ. Extension of this idea to the pharmaceutically relevant case of oseltamivir binding to neuraminidase shows that the expensive free energy calculation of the decoupling of the ligand in the protein's active site may be reduced by up to 70% (from 5 to 1.5 ns/window) without a loss in precision. While this may not be crucial in the case of studying the binding of only a single ligand to a protein, one could imagine that in the lead optimization stage of a drug-design effort, when highly accurate binding energies are necessary, the ability to examine three times the number of possible compounds at little extra computational cost may be highly desirable. In the case presented here, the accelerated dihedrals were chosen based upon extensive prior MD simulations; however, if these data were not available one could choose the accelerated dihedrals by residue type (accelerating dihedrals in residues with highly mobile side chains such as arginine and not accelerating dihedrals in aromatic rings), atom type (non-hydrogen-containing), and proximity to the ligand. Additionally, in some cases where the ligand is bulky and has multiple torsions with high-energy barriers between local minima, one could accelerate dihedrals in the ligand molecules themselves, as was done in the case of cyclophilin. 23 Selective aMD may easily be incorporated into other free energy algorithms. For example, the methods of one-step perturbation and envelope distribution sampling provide techniques for effectively calculating the binding of several ligands to a protein with a single extended MD simulation. 51,52 They have, however, not been extensively utilized due to the fact that, depending on the system being studied, they may require simulations on the microsecond time scale. 53 Therefore, a reduction of the computational cost of 3- to 5-fold (as observed in this study) could reduce the necessary simulation length from the highly expensive 1-2 µs range into the manageable 200 ns range.
These methods highlight that sampling of the partition function is, in general, a slow process requiring extensive calculations; therefore, methods such as selective aMD, which can enhance sampling of the relevant portions of phase space while not introducing excessive noise into the calculations, may prove useful in future applications. | 2016-05-17T04:32:52.429Z | 2010-10-13T00:00:00.000 | {
"year": 2010,
"sha1": "a870de4077dc5a2af3ace212b957a80f080162c5",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://doi.org/10.1021/ct100322t",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "86d2d17c8140156144dc40116990a2986cdc3042",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
251281295 | pes2o/s2orc | v3-fos-license | Association between sarcopenia and osteoarthritis: A protocol for meta-analysis
Background Sarcopenia, a relatively new syndrome referring to the age-related decline of muscle strength and the degenerative loss of skeletal muscle mass and function, often results in frailty, disability, and mortality. Osteoarthritis, a prevalent degenerative joint disease, affects over 250 million patients worldwide and is the fifth leading cause of disability. Despite the high prevalence of osteoarthritis, there is still a lack of effective treatment options in the clinic, partially due to the heterogeneity and complexity of osteoarthritis pathology. Previous studies revealed an association between sarcopenia and osteoarthritis, but the conclusions remain controversial, and the prevalence of sarcopenia among osteoarthritis patients still needs to be elucidated. To identify the current evidence on the prevalence of sarcopenia and its association with osteoarthritis across studies, we will perform this systematic review and meta-analysis, which will help to further confirm the association between these two diseases. Methods and analysis Electronic sources including PubMed, Embase, and Web of Science will be searched systematically following appropriate strategies to identify relevant studies from inception up to 28 February 2022 with no language restriction. Two investigators will independently evaluate the preselected studies for inclusion, data extraction, and quality assessment using a standardized protocol. Meta-analysis will be performed to pool the estimated effect using studies assessing an association between sarcopenia and osteoarthritis. Subgroup analyses will also be performed when data are sufficient. Heterogeneity and publication bias of the included studies will be investigated. PROSPERO registration number CRD42020155694.
Introduction
With global population aging and increased longevity, aging and age-related diseases have become a substantial burden and an inevitable challenge worldwide. Sarcopenia, defined as an age-related decline in muscle mass and loss of muscle strength, results in reduced mobility, function, and quality of life, and thus greatly increases healthcare expenditures [1]. Although sarcopenia is a relatively new syndrome, first described in the 1980s [2], it has become a common condition, with an estimated prevalence from 12.9% to 40.4% depending on the diagnostic criteria used [3,4]. Sarcopenia is not only recognized as an age-related syndrome but has also been found to correlate with an increased risk of falls/fractures [5,6], functional decline [7], multiple chronic diseases [8][9][10], loss of independence [11][12][13], frailty, and mortality [14]. Sarcopenia is becoming a critical public health burden compounded by an expanding elderly population: the direct medical cost attributable to sarcopenia in the United States was reported to be around $18.5 billion (i.e., 1.5% of total health care spending) for the year 2000 [15], and since then, the economic burden of this progressive and generalized skeletal muscle disorder has grown substantially [16].
Osteoarthritis, the most common degenerative joint disease, is a leading contributor to physical disability [17,18]. Since osteoarthritis has a severe impact on both individuals and society as a whole, a comprehensive understanding of its underlying mechanisms and potential risk factors is of significant importance [19]. Multiple types of risk factors have been found to correlate with the pathogenesis of osteoarthritis [20], among which muscle weakness is considered one of the major ones [21,22]. Among the various recommended interventions for osteoarthritis, functional exercise and muscle strengthening exercise have drawn growing attention. Previous studies have suggested a bidirectional relationship between muscle weakness and osteoarthritis: muscle weakness might be a contributor to osteoarthritis progression and vice versa. On the one hand, as atrophy or weakness of the periarticular muscles can promote the development, progression, and severity of osteoarthritis, patients with osteoarthritis tend to adopt a sedentary, inactive lifestyle to avoid joint pain and stiffness [23][24][25][26]. Sedentariness and physical inactivity in turn reduce energy expenditure and result in muscle wasting, which lowers the joint-protective ability of the muscles [27]. On the other hand, the pain and stiffness of osteoarthritic joints cause physical inactivity, which leads to adipose tissue gains and the development of overweight in these patients. The pressure of the increased load further exacerbates the progression of osteoarthritis, and it is the combination of these factors that is considered to create and perpetuate a vicious cycle between muscle weakness and osteoarthritis [28,29].
Yet, few studies have considered muscle weakness or atrophy as a disease (i.e., sarcopenia), and the relationship between sarcopenia and osteoarthritis has remained ambiguous, with no strong consensus reached [30]. Some studies suggested that sarcopenia is likely to correlate positively with osteoarthritis [31][32][33][34], while other studies did not support this observation [35,36]. One plausible reason could be that the definition of sarcopenia has been evolving for decades, and full agreement on the involved variables and cutoff points has not yet been reached [3], which may lead to different prevalence rates. Furthermore, osteoarthritis at different anatomical locations may exhibit different associations with sarcopenia. One study found that sarcopenia was associated with osteoarthritis at the hip and lower limbs [34], while another study reported that sarcopenia was independently associated with knee osteoarthritis and inversely associated with lumbar spine osteoarthritis [33]. One approach to synthesizing existing knowledge is to identify consistencies across studies through a meta-analysis, but to our knowledge, no study has systematically reviewed the current evidence on the association between sarcopenia and osteoarthritis. Therefore, this meta-analysis aims to identify the association between sarcopenia and osteoarthritis more comprehensively. The results will further our knowledge on whether sarcopenia and osteoarthritis are associated at different targeted joints, thereby enabling the development of preventive and therapeutic strategies for both sarcopenia and osteoarthritis.
Study design
This meta-analysis protocol has been registered with the international prospective register of systematic reviews (PROSPERO) network (registration number: CRD42020155694). The content of this protocol was developed based on the Preferred Reporting Items for Systematic Review and Meta-Analyses Protocols (PRISMA-P) 2015 Statement Guidelines (S1 Appendix) [37].
Eligibility criteria
The initially retrieved studies will be evaluated for inclusion according to the following criteria: (1) observational studies, including cohort, cross-sectional, or case-control studies, that focus on the prevalence of sarcopenia in patients with and without osteoarthritis; (2) diagnosis of sarcopenia using any definition criteria (e.g., low appendicular muscle mass criteria, or the European Working Group on Sarcopenia in Older People [EWGSOP] criteria, including low handgrip strength and/or low walking speed in combination with low muscle mass); and (3) included subjects aged ≥60 years. Studies will be excluded if they: (1) lack reporting on the study outcomes or (2) are duplicate publications.
Information sources
Three electronic databases (i.e., PubMed, Web of Science, and Embase) will be searched with appropriate search strategies from inception up to February 2022. In addition, the reference lists of included literature and of relevant systematic reviews will also be browsed to identify eligible studies.
Search strategy
The search will be carried out by combining keyword terms and Medical Subject Heading (MeSH) terms for eligible studies in the databases mentioned above. The same search terms will be adapted to the specific syntax rules of each database. The electronic search strategy is listed in Tables 1-3.
Study selection
Two investigators will independently screen the title and abstract of each retrieved study to identify eligible studies after removing duplicates. The full text will be reviewed according to the inclusion and exclusion criteria if the eligibility of a study is uncertain. Any disagreements between the two investigators will be resolved by discussion with a third investigator. Studies will not be restricted by language or publication date. Study selection will be documented and summarized in a PRISMA flow diagram.
Data extraction
After the systematic literature search is carried out, two investigators will independently screen the included studies and extract the following data in a standardized format: name of the author(s), year of publication, study design, study setting, data sources, study period, sample size, age range of the participants, sex distribution, and prevalence of sarcopenia in patients with and without osteoarthritis. Effect sizes (i.e., odds ratio [OR], relative risk [RR], or hazard ratio [HR]) will be directly extracted or, where possible, calculated from the relevant data in the original study. If any data of interest are not available, we will contact the author(s) of the study concerned to obtain the supplemental data. Any disagreements in data extraction will be resolved by consulting a third investigator to reach a consensus.
Quality assessment
Two investigators will independently evaluate the quality of the included studies according to the Newcastle-Ottawa Quality Scale (NOS) [38]. The NOS is a validated scale for non-randomized studies in meta-analyses that evaluates the risk of bias from three broad perspectives: (1) the selection of the study groups; (2) the comparability of the groups; and (3) the ascertainment of either the exposure or the outcome of interest for case-control or prospective/retrospective cohort studies, respectively [39]. For cross-sectional studies, an adapted form of the NOS will be used to evaluate the risk of bias [40,41]. Studies with more than five stars will be considered of high methodological quality. In case of any discrepancies, a consensus will be reached through a discussion, with the assistance of a third reviewer when necessary. Studies with a high risk of bias (e.g., small-sample or low-quality studies) will be excluded and the reasons for their exclusion will be noted.

[Table 2, excerpt: Web of Science search strategy] 2 TS = (sarcopen* OR "muscle weakness" OR "muscle atrophy" OR "muscle mass" OR "muscle volume" OR "muscle quality" OR "muscle size" OR "lean mass" OR "muscle strength" OR "grip strength" OR "gripping strength" OR "hand strength" OR "holding power" OR "grip dynamometer" OR handgrip OR "muscular atrophy" OR "muscular dystrophy" OR "muscle dystrophy" OR "physical function" OR "muscle weakness")
Data analysis
All data will be statistically analyzed using Review Manager 5.3.
The study characteristics will be summarized in narrative text and baseline tables. Specifically, effect sizes (the pooled OR, RR, or HR) and corresponding 95% CIs will be calculated. Statistical heterogeneity between the studies will be evaluated with I² values; for highly heterogeneous studies (I² > 50%) a random-effects model will be used, whereas a fixed-effects model will be applied to pool the data when the level of heterogeneity is not significant. Statistical significance is set at p < 0.05. When data are sufficient, this study will also perform subgroup analyses stratified by obesity (obesity and non-obesity) and by joint (hip, knee, and hand).
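Although the pooling described above will be performed in Review Manager, the underlying logic can be sketched in a few lines. In this illustration the random-effects variance is estimated with the DerSimonian-Laird method, which is our assumption about the model applied; the study effect sizes and standard errors are hypothetical.

```python
import numpy as np

def pool_log_odds_ratios(log_or, se):
    """Inverse-variance pooling of study log(OR)s, switching to a
    DerSimonian-Laird random-effects model when I^2 exceeds 50%."""
    log_or, se = np.asarray(log_or, float), np.asarray(se, float)
    w = 1.0 / se ** 2                                 # fixed-effect weights
    pooled_fe = np.sum(w * log_or) / np.sum(w)
    Q = np.sum(w * (log_or - pooled_fe) ** 2)         # Cochran's Q
    df = len(log_or) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    if I2 > 50.0:                                     # random-effects weights
        tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w = 1.0 / (se ** 2 + tau2)
    pooled = np.sum(w * log_or) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return np.exp(pooled), (np.exp(ci[0]), np.exp(ci[1])), I2

# Hypothetical studies: ORs converted to the log scale beforehand.
log_or = np.log([1.8, 1.3, 2.1, 0.9])
se = np.array([0.25, 0.30, 0.40, 0.35])
print(pool_log_odds_ratios(log_or, se))  # (pooled OR, 95% CI, I^2 in %)
```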
Assessment of publication bias
Publication bias across studies will be assessed using visual examination of a funnel plot and Egger's test if ten or more studies are available. An asymmetric funnel plot may imply possible publication bias, small-study effects, or other factors. If asymmetry is caused by small-study effects, we will conduct a sensitivity analysis excluding these studies to explore how this affects the results and conclusions of the meta-analysis.
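As a sketch of the planned asymmetry assessment, the classical Egger regression can be written as follows: the standardized effect is regressed on precision, and a non-zero intercept suggests funnel-plot asymmetry. The effect sizes and standard errors below are hypothetical, and, as stated above, the test would only be run with ten or more studies.

```python
import numpy as np
from scipy import stats

def eggers_test(effect, se):
    """Egger's regression test: regress effect/se on 1/se and perform a
    two-sided t-test on the intercept (n - 2 degrees of freedom)."""
    effect, se = np.asarray(effect, float), np.asarray(se, float)
    res = stats.linregress(1.0 / se, effect / se)
    t = res.intercept / res.intercept_stderr
    p = 2.0 * stats.t.sf(abs(t), df=len(effect) - 2)
    return res.intercept, p

# Hypothetical log(OR)s and standard errors from ten studies.
log_or = np.array([0.60, 0.30, 0.80, 0.10, 0.50,
                   0.40, 0.70, 0.20, 0.90, 0.35])
se = np.array([0.20, 0.25, 0.35, 0.15, 0.30,
               0.22, 0.40, 0.18, 0.45, 0.28])
intercept, p_value = eggers_test(log_or, se)
print(intercept, p_value)
```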
Sensitivity analysis
Sensitivity analysis will be performed to test the robustness of the pooled results with respect to study characteristics and methodological quality by removing some of the small-sample or low-quality studies. If heterogeneity exists, the sensitivity analysis will be re-run while removing poor-quality data in a stepwise manner.
Discussion
As sarcopenia is a relatively new disorder with high incidence and prevalence in the elderly population, it seriously affects the health of the elderly throughout the world. It has been postulated that sarcopenia and osteoarthritis may be co-existing conditions [42], but the pathophysiological mechanisms linking sarcopenia and osteoarthritis are unclear. Plausible factors might include ageing, disuse, and inflammation; yet the relevance of these findings has not been established. To explore the relationship between these two prevailing diseases, it is of great significance to conduct a meta-analysis to determine the impact of sarcopenia on osteoarthritis. So far, there have been several studies on the correlation between sarcopenia and osteoarthritis. Of these, four studies suggested that sarcopenia was likely to positively correlate with osteoarthritis [31][32][33][34], and two studies showed that obesity and sarcopenic obesity, but not sarcopenia, were associated with osteoarthritis [35,36]. In addition, one study found that sarcopenia was associated with osteoarthritis at the hip and lower limbs [34], while another reported that sarcopenia was independently associated with knee osteoarthritis and inversely associated with lumbar spine osteoarthritis [33]. However, due to the variation in diagnostic criteria and classification of sarcopenia, the association between sarcopenia and osteoarthritis is still inconclusive [32,35]. Previous studies analyzed sarcopenia using different diagnostic standards and in relation to osteoarthritis of different weight-bearing joints, which could be a possible reason why the literature findings were inconsistent. An earlier cross-sectional study discussed the associations between low skeletal muscle mass and radiographic osteoarthritis of the hip, lumbar, and knee joints, and the results showed that skeletal muscle mass exhibited different associations with different joints [33]. Sarcopenia, as a disease affecting the whole body, may influence not only the knee joints but also other joints. According to the diagnostic criteria given by the EWGSOP [1], sarcopenia can be diagnosed by tests of muscle strength (usually based on grip strength), muscle mass (usually based on dual-energy X-ray absorptiometry or bioelectrical impedance analysis), and muscle function (usually based on gait speed, the short physical performance battery, or timed-up-and-go tests), which involve multiple joints of the body, including the hand, hip, and knee.
Nevertheless, sarcopenia, as well as sarcopenic obesity, have both been recognized as leading contributors to increased disability and mortality [14,43]. Given the effect of obesity on osteoarthritis reported in previous studies, we will further perform subgroup and sensitivity analyses focusing on obesity and sarcopenic obesity as well. Previously, obesity, often represented by an increased body mass index (BMI) or body weight, was generally considered a major risk factor for osteoarthritis [44]. However, given that the ratio of muscle mass to fat mass changes constantly with the aging process [45][46][47], conventional anthropometric indicators, such as BMI and weight, may not fully represent adiposity [48]. Recently, sarcopenia and sarcopenic obesity have been reported to be associated with a number of diseases including osteoarthritis [10,29,49]. Thus, subgroup and sensitivity analyses of sarcopenia, sarcopenic obesity, and obesity will be conducted to better illustrate the relationships among these disorders.
The outcome of this meta-analysis may clarify an association between sarcopenia and osteoarthritis that is of pivotal importance to understanding the underlying mechanisms. The results from this study are also likely to inform better treatment decision-making in healthcare and to maximize the benefits of preventing and controlling osteoarthritis progression while limiting sarcopenia risk. | 2022-08-04T06:17:08.030Z | 2022-08-03T00:00:00.000 | {
"year": 2022,
"sha1": "29c843f443ea0b9bc57a958a931cf894ebd9222f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2ab58fffc092a375f36d99348a12f1a4a392da78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15975146 | pes2o/s2orc | v3-fos-license | Modulation of Replicative Lifespan in Cryptococcus neoformans: Implications for Virulence
The fungal pathogen, Cryptococcus neoformans, has been shown to undergo replicative aging. Old cells are characterized by advanced generational age and phenotypic changes that appear to mediate enhanced resistance to host- and antifungal-based killing. As a consequence of this age-associated resilience, old cells accumulate during chronic infection. Based on these findings, we hypothesized that shifting the generational age of a pathogenic yeast population would alter its vulnerability to the host and affect its virulence. SIR2 is a well-conserved histone deacetylase and a pivotal target for the development of anti-aging drugs. We tested its effect on C. neoformans' replicative lifespan (RLS). First, a mutant C. neoformans strain (sir2Δ) was generated, and the predicted shortening of RLS in sir2Δ cells, consistent with SIR2's known role in aging, was confirmed. Next, RLS analysis showed that treatment of C. neoformans with Sir2p agonists resulted in a significantly prolonged RLS, whereas treatment with a Sir2p antagonist shortened RLS. The RLS-modulating effects were dependent on SIR2 and not observed in sir2Δ cells. Because SIR2 loss resulted in slightly impaired fitness, the effects of genetic RLS modulation on virulence could not be compared with wild-type cells. Instead we chose to modulate RLS chemically, and investigated the effect of Sir2p-modulating drugs on C. neoformans cells in a Galleria mellonella infection model. Consistent with our hypothesis that shifts in the generational age of the infecting yeast population alter its vulnerability to host cells, we observed decreased virulence of C. neoformans in the Galleria host when RLS was prolonged by treatment with Sir2p agonists. In contrast, treatment with a Sir2p antagonist, which shortens RLS, enhanced virulence in Galleria. In addition, combination of Sir2p agonists with antifungal therapy enhanced the antifungal's effect. Importantly, no difference in virulence was observed with drug treatment when sir2Δ cells were used for infection, which confirmed target specificity and ruled out non-specific effects of the drugs on the Galleria host. Thus, this study suggests that RLS-modulating drugs, such as Sir2p agonists, shift the lifespan and vulnerability of the fungal population, and should be further investigated as a potential class of novel antifungal drug targets that can enhance antifungal efficacy.
INTRODUCTION
Cryptococcus neoformans is a formidable fungal pathogen that causes disease in immunocompromised individuals, especially AIDS patients and organ transplant recipients (Perfect and Casadevall, 2002). This haploid fungus grows by asexual reproduction during the course of infection (Alanio et al., 2011). During asexual reproduction, it undergoes asymmetric mitotic divisions, and the sum of these divisions determines its replicative lifespan (RLS) (Steinkraus et al., 2008). In the course of these divisions, the aging mother cell increasingly manifests phenotypic changes, including increased cell body size, analogous to changes described in Saccharomyces cerevisiae, Candida albicans, and Schizosaccharomyces pombe (Fu et al., 2008; Roux et al., 2010; Yang et al., 2011). Also analogous to these yeasts, old C. neoformans cells cease to divide at the completion of their RLS (Bouklas et al., 2013). RLS is different from chronological lifespan (CLS), as it involves active growth of the yeast population, whereas CLS is defined as the number of days non-replicating cells remain viable in a medium with no nutrition (Fabrizio and Longo, 2007; Cordero et al., 2011). It is noteworthy that both RLS and CLS affect longevity in S. cerevisiae, but the two do not always correlate (Qin and Lu, 2006; Fabrizio and Longo, 2007; Barea and Bonatto, 2009; Murakami and Kaeberlein, 2009).
Recent investigations from our laboratory demonstrated that the RLSs of individual C. neoformans strains vary and constitute a stable and reproducible, albeit strain-specific, trait (Jain et al., 2009a; Bouklas et al., 2013, 2015). It was also shown that C. neoformans undergoes replicative aging during chronic infection in the human host (Alanio et al., 2011). Most importantly, our data from human as well as rat infection indicated that older C. neoformans cells accumulate in chronic infection because they are selected. Specifically, old cells were found to be more resistant to hydrogen peroxide stress, macrophage-mediated killing, and amphotericin B-mediated killing (Bouklas et al., 2013). This finding is important because patients with cryptococcosis primarily die from chronic meningoencephalitis (Perfect and Casadevall, 2002), and treatment is commonly initiated after weeks or even months of symptoms. The pathogen's ability to evade the host immune response, combined with its ability to replicate and persist in vivo, poses a challenge to effective clearance. Consequently, despite the introduction of combination antiretroviral therapy and antifungal therapy, treatment failure, persistent disease, and death remain common (Perfect and Casadevall, 2002; Park et al., 2009).
Based on our published data on the selection and acquired resilience of older C. neoformans cells (Bouklas et al., 2013), we hypothesized that the emergence, selection, and ultimately persistence of older C. neoformans cells may constitute an unanticipated virulence trait that could potentially be modulated with drug treatment. Consequently, the intriguing question that has transpired is: could manipulation of RLS in C. neoformans have an effect on the resilience of the yeast population in the host environment, and therefore indirectly also on virulence? We investigated this question by using drugs known to manipulate RLS. Sirtuins are a large family of NAD⁺-dependent histone deacetylases that are well conserved across many species (Greiss and Gartner, 2009), including C. neoformans. SIR2 has been implicated in aging in many model organisms (Landry et al., 2000), including S. cerevisiae (Kaeberlein et al., 1999).
We demonstrate that chemical agonists and antagonists to Sir2p can result in its activation or inhibition, respectively, and consequently affect the lifespan and resilience of the pathogen population in the host.
Ethics Statement
All animal experiments were carried out with the approval of the Albert Einstein College of Medicine Institute for Animal Studies. The protocol number 20091015 was approved by the Institutional Animal Care and Use Committee at Einstein. The study was in strict accordance with federal, state, local and institutional guidelines that include "The Guide for the Care and Use of Laboratory Animals," "The Animal Welfare Act," and "Public Health Service Policy on Humane Care and Use of Laboratory Animals." All surgery was performed under ketamine and xylazine anesthesia, and every effort was made to minimize suffering.
Disruption and Complementation of SIR2
The complete ORF sequence of SIR2 (CNAG_04886.2) obtained from the Broad Institute was replaced with a neomycin cassette in H99 cells by homologous recombination using biolistic transformation in a PDS-1000/He hepta system (Biorad). For transformation, 5 µg of a purified linear DNA construct was used containing neomycin under H99 ACT1 promoter control and a TRP1 terminator in addition to 1,000 bp of up- and downstream regions of the target ORF. These regions were amplified from the H99 genomic template using the respective primers (Supplementary Table 2). The neomycin resistance gene was amplified from plasmid pJAF1 using primers Neo-F and Neo-R, and the ampicillin (Amp) resistance gene was amplified from the pUC19 plasmid using primers pUC19-F and pUC19-R. All primers contained a Van91I restriction site to permit one-step directional cloning. Amplified products were restricted with Van91I and ligated using Quick ligase enzyme (New England Biolabs, USA), transformed into XL10 Gold cells (Agilent), and clones were selected on Amp-LB agar plates and confirmed by single digestion with Van91I. Clones with the correct construct were amplified using the primers H99SIR2-Lfor and H99SIR2-Rrev. Transformants were screened on YPD plates containing 100 µg/ml G418 (neomycin) and further confirmed by PCR.
The wt SIR2 gene was amplified with its native promoter from the H99 genome template with primers H99SIR2R-For and H99SIR2R-Rev containing EcoRV and XhoI restriction sites (Supplementary Table 2). The gene was cloned into plasmid pJAF13, then linearized using ApaI and randomly inserted into sir2 cells by biolistic transformation. sir2 +SIR2 positive clones were selected on YPD plates containing 100 µg/ml nourseothricin (NAT) (Werner Bioagents, Germany). Gene complementation was confirmed by PCR.
Lifespan Measurement
Replicative lifespan was measured by microdissection as published for S. cerevisiae (Park et al., 2002), with some modifications. Briefly, 20-60 C. neoformans cells of each strain were arrayed on an agar plate maintained at 37 °C. The first bud of each cell was identified as the virgin mother cell, which then grew in size with every budding event and could be easily distinguished. New buds from the mother cell were separated at the end of each division (1-2 h) using a 50 µm fiber optic needle (Cora Styles) on a tetrad dissection Axioscope A1 microscope (Zeiss) at 100× magnification. The plate was returned to the incubator after each separation, or to 4 °C overnight to prevent excessive budding. The study was terminated when cells had failed to divide for 24 h, and plates were then incubated for an additional week to ensure that the failure to divide was due to death, not cell cycle arrest. The RLS of each cell was the sum of the total buds until cessation of divisions.
Chronological lifespan was determined by adaptation of a S. cerevisiae protocol (Burtner et al., 2009). Briefly, 2 × 10⁶ cells/ml of the respective strain were grown in YPD medium for 3 days at 37 °C and 150 rpm until they reached stationary phase. They were then transferred to sterile dH₂O, and the number of viable cells was measured by plating appropriate dilutions every 2 days on YPD agar plates. Colony forming units on the plates were quantitated at 72 h, and the study was terminated when 99.9% of the cells were dead.
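For illustration, the survival bookkeeping implied by this protocol can be sketched in a few lines of Python; the CFU values below are hypothetical placeholders, not data from this study.

```python
# Hypothetical CFU counts per ml, measured every 2 days after transfer to
# water, following the CLS protocol described above.
cfu = {0: 2.0e6, 2: 1.5e6, 4: 9.0e5, 6: 2.0e5, 8: 1.0e4, 10: 8.0e2, 12: 1.0e1}

baseline = cfu[0]
for day in sorted(cfu):
    survival = cfu[day] / baseline          # fraction of cells still viable
    print(f"day {day:2d}: {survival:.4%} viable")
    if survival < 1e-3:                     # 99.9% of the cells are dead
        print(f"terminate study at day {day}")
        break
```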
Phenotypic Characterization
For cell and capsule size measurements, C. neoformans cells were suspended in India ink. All slides were imaged at 1000X magnification on an Olympus AX70 microscope, pictures were taken with a Qimaging Retiga 1300 digital camera using the Qcapture Suite V2.46 software (Qimaging, Surrey, BC, Canada), and size was measured in Adobe Photoshop CS5 for Macintosh. At least 100 cells were imaged per group. Yeast cells were also stained with mAb 18B7 to the capsular polysaccharide glucuronoxylomannan and visualized with fluorescein isothiocyanate-labeled goat anti-mouse immunoglobulin G (IgG) (Casadevall et al., 1998). Switching frequencies, doubling times, capsule induction, melanization, mating, macrophage-mediated phagocytosis and killing assays, and MICs of amphotericin B were determined as previously described (Jain et al., 2009b).
Isolation of Old Cells
Wt or mutant C. neoformans cells were grown in YPD medium and isolated at 0-2 or 10 generations of age as described previously (Bouklas et al., 2013). Briefly, newly budded C. neoformans cells were isolated by elutriation (Beckman JE-5.0 rotor in a Beckman J-6B centrifuge; Beckman Instruments, Inc.) and labeled with Sulfo-NHS-LC-Biotin (Thermo Scientific). The newly budded and labeled cells were grown for several generations (10 generations), and collected by first binding them to streptavidin-conjugated magnetic microbeads (Miltenyi Biotec), then isolating them on a magnetic column (Miltenyi Biotec). Unbound young yeast cells (0-2 generations old) that washed off the column and had been exposed to similar manipulations were used as controls. Purity of old cells was confirmed by fluorescein isothiocyanate (FITC) staining of the streptavidin-labeled cells.
For infection of mice, 5 × 10⁴ C. neoformans cells were used to infect 6-8-week-old female BALB/c mice (n = 10) (National Cancer Institute, Bethesda, MD, USA) either i.v. or i.t. (Huffnagle et al., 1991; Mukherjee et al., 1994). The fungal burden was determined either at 4 h or on day 10 by sacrificing mice and plating dilutions of homogenized organ suspensions onto YPD plates.
RNA Sequencing and Analysis
Three biological replicates of wt or sir2 cells were grown in YPD or in YEP with 0.05% glucose broth overnight at 37 °C, and approximately 10⁸ cells were collected and suspended in 0.5 mm zirconia beads and RLT buffer (Qiagen), then disrupted mechanically using a mini bead beater (Biospec) for 2 min for a total of four cycles with 1 min intervals on ice. Following lysis, total RNA was isolated using the RNeasy mini kit (Qiagen) according to the manufacturer's instructions. RNA hybridization, data acquisition and analysis were performed by the Genome Technology Access Centre, Washington University in St. Louis (GTAC-WUSTL). Briefly, total RNA was first reverse-transcribed with polyA selection and then sequenced on an Illumina HiSeq 2000. The raw sequence reads were then converted to basecalls, demultiplexed, and aligned to a reference sequence with Tophat v2.0.9 and Bowtie2 v2.1.0. Gene abundances were derived with HTSeq. Differential expression was estimated by pair-wise negative binomial tests with EdgeR and DEXSeq. Gene ontology (GO) enrichment was performed by GTAC-WUSTL. Each gene was assigned a GO category per the Broad Institute's PFAM annotations using the provided map. Any genes with a p < 0.05 by a hypergeometric test and an FDR q < 0.25 were considered significant. A heatmap of transcriptome data was generated using R software.
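For illustration, the category-level enrichment test described above reduces to a hypergeometric test on the overlap between the differentially expressed gene set and each GO category, followed by FDR correction. The sketch below uses Python with hypothetical gene counts (all numbers are placeholders, not values from this study):

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

# Hypothetical counts for a single GO category.
N_background = 6000   # annotated genes in the background set
K_category = 120      # background genes assigned to this GO category
n_de = 312            # differentially expressed (DE) genes
k_overlap = 18        # DE genes that fall inside the category

# Probability of an overlap of at least k under random sampling
# without replacement.
p = hypergeom.sf(k_overlap - 1, N_background, K_category, n_de)
print(f"hypergeometric p = {p:.3g}")

# One p-value per category is then FDR-corrected; the study used q < 0.25.
pvals = [p, 0.004, 0.21, 0.63]   # placeholder p-values for other categories
reject, qvals, _, _ = multipletests(pvals, alpha=0.25, method="fdr_bh")
print(qvals)
```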
Real-Time PCR
Real-time PCR was performed on RNA isolated from wt or mutant cells grown in variable media (2% YPD, or 2% YPD with 2.5 mM isonicotinamide, 1 nM resveratrol, 1 nM SirAct, 1 pM SRT1460, 1-10 pM SRT1720, or 1 nM sirtinol). The RNA was cleaned of DNA contamination using the MessageClean kit (GenHunter, Corp.), and cDNA was synthesized using the First-strand Superscript II RT kit (Life Technologies) according to the manufacturer's instructions. Relative expression of genes was measured by real-time PCR using SYBR green (Applied Biosystems) in an ABI 96 system using primers listed in Supplementary Table 2. Expression measurements were performed in quadruplicate and normalized against the wt grown in YPD without drug, and relative transcript levels were determined using the delta-delta CT method. cDNA integrity was verified by measuring expression levels of β-actin, and DNA contamination was ruled out by using cDNA made with dH₂O.
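For reference, the delta-delta CT calculation reduces to the following Python sketch, which assumes perfect two-fold amplification per cycle; the Ct values are hypothetical placeholders, not measurements from this study.

```python
# Hypothetical mean Ct values from quadruplicate wells.
ct_sir2_drug, ct_actin_drug = 24.1, 18.0   # e.g. H99 grown with INAM
ct_sir2_ref, ct_actin_ref = 25.6, 18.1     # calibrator: wt in YPD, no drug

d_ct_drug = ct_sir2_drug - ct_actin_drug   # normalize target to beta-actin
d_ct_ref = ct_sir2_ref - ct_actin_ref
dd_ct = d_ct_drug - d_ct_ref               # compare against the calibrator

fold_change = 2 ** (-dd_ct)                # assumes 2-fold gain per cycle
print(f"relative SIR2 expression = {fold_change:.2f}-fold")
```

A fold change above 1 indicates upregulation relative to the untreated wt calibrator.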
Statistics
Standard statistical analyses, including Student's t-test and the non-parametric log-rank and Wilcoxon rank-sum tests, were performed using Prism version 6 (Graphpad) or Microsoft Excel 2011 for Macintosh. Differences were considered significant if p < 0.05.
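For illustration, the rank-based comparison of two RLS distributions can be sketched as follows. The per-cell bud counts are simulated placeholders chosen only to mimic medians near those reported below for wt and sir2 cells; they are not actual microdissection data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Simulated per-cell RLS values (total buds before divisions ceased).
rls_wt = rng.poisson(33, size=50)
rls_sir2 = rng.poisson(22, size=50)

stat, p = ranksums(rls_wt, rls_sir2)        # Wilcoxon rank-sum test
print(f"median wt = {np.median(rls_wt):.0f}, "
      f"median sir2 = {np.median(rls_sir2):.0f}, p = {p:.2e}")
```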
Loss of SIR2 Shortens the Replicative Lifespan of C. neoformans
Sirtuins impact RLS in many diverse organisms, which explains why they have been chosen as targets for the development of RLS-modulating drugs. Protein sequences encoded by SIR2 are conserved among fungi (Supplementary Table 1). To test the effect of Sir2p-modulating drugs on pathogenesis, a sir2 strain first had to be generated. SIR2 was deleted by homologous recombination (Supplementary Figure 1) in a C. neoformans serotype A VNI strain, H99, by standard techniques (sir2), and a complemented strain was also generated (sir2 +SIR2). As expected for a lifespan-modulating gene, loss of SIR2 resulted in measurably impaired fitness. Specifically, mildly attenuated in vitro growth in standard rich media (YPD) was observed (Table 1; Supplementary Figure 2A). However, fitness of sir2 was less affected in calorie-restricted (CR) low-glucose growth conditions. Of note, low-glucose growth conditions are encountered in the host, especially in the brain environment, and therefore the mutant is still virulent in vivo. A smaller capsule size was noted in the mutant; however, both the wild type (wt) and the mutant capsules induced successfully (Table 1). Similar to S. cerevisiae, the sir2 mutant was unable to mate with its isogenic mating partner (Table 1).
Replicative lifespan analysis by microdissection confirmed that SIR2 controls lifespan in C. neoformans (Figure 1A). Specifically, lifespan analysis determined that the median RLS of sir2 cells was shortened by 33% relative to the RLS of H99 cells (33 to 22 generations, p < 0.0001). The shortened RLS was reconstituted to 34 generations in the complemented strain. RLS was also determined under CR with 0.05% glucose, whereby 0.05% corresponds to the glucose concentration encountered in human cerebrospinal fluid (CSF). Under CR conditions, the median RLS of H99 cells was extended by 48% from 33 to 49 generations (p < 0.0001) (Figure 1B). This was dependent on SIR2; accordingly, RLS analysis of sir2 cells under CR demonstrated no effect.
FIGURE 1 | Loss of SIR2 shortened the RLS of strain H99, but had no effect on its virulence. (A) The effect of loss of SIR2 on the RLS of a serotype A VNI strain, H99, was determined by microdissection of sir2 cells (dashed line) and found to significantly shorten its median RLS by 33% compared to the wt (straight line). p < 0.0001. The shortened RLS was reconstituted in the complement (dotted line). (B) 0.05% glucose calorie restriction (CR) significantly extended the median RLS of wt cells by 48%. p < 0.0001. CR had no effect on the RLS of sir2 cells. (C) A slightly attenuated virulence was observed in Galleria mellonella infected with 2 × 10³ mutant cells compared to wt cells. Also (D) no significant virulence difference was observed in BALB/c mice injected i.v. with 5 × 10⁴ mutant or wt cells.
(E) sir2 cells crossed the blood-brain barrier equally well compared to the wt strain, as suggested by comparable brain CFUs 4 h after i.v. infection. (F) 10-generation-old sir2 cells significantly resisted killing by murine macrophages compared to 0-2-generation-old sir2 cells and 0-10-generation-old wt cells. RLS experiments with the respective medium (n = 40-60 cells) were performed at the same time. p-values were calculated by Wilcoxon Rank Sum Test. **p < 0.01.
CLS, measured by viability without nutrition, was not affected by CR, nor was it different in sir2 cells compared to the wt (Table 1; Supplementary Figure 2B). In summary, similar to S. cerevisiae, SIR2 function has a major impact on the RLS of C. neoformans, but its loss also affects fitness, which has to be taken into consideration when the association between lifespan and virulence is examined.
Loss of SIR2 Impairs Fitness and Decreases Virulence
Given the impaired fitness, which is seen with most mutants of RLS-modulating genes, we compared the virulence of sir2 and wt cells in different infection models, including G. mellonella (waxworm) and two murine models of infection. In Galleria, survival was found to be slightly decreased at an inoculum dose of 2 × 10³ CFU (Figure 1C), and significantly decreased at higher inocula (Figure 3). Consistent with the observed growth attenuation of sir2 cells in rich medium, sir2 cells also exhibited hypovirulence in the murine pulmonary infection model (Supplementary Figure 3), where a lower organ fungal burden (data not shown) was noted in sir2-infected mice. Interestingly, comparable survival was observed in the intravenous (i.v.) model, where growth differences would be expected to be less pronounced because the brain is a low-glucose growth environment (Figure 1D). Comparable brain CFUs obtained 4 h after i.v. infection also suggested that both sir2 and wt cells crossed the blood-brain barrier equally well (Figure 1E). Interestingly, despite the expected hypovirulence, resistance to macrophage-mediated killing at baseline was found to be comparable in the sir2 and wt cells. This suggests that hypovirulence is predominantly the result of slightly slower growth. More importantly, when killing was compared in cells aged to 10 generations, resistance was significantly higher in sir2 cells compared to wt cells of the same generational age (Figure 1F). Specifically, 10-generation-old sir2 cells manifested a higher resistance to killing by a murine macrophage cell line, J774.16, indicating that at 10 generations they were phenotypically older, consistent with their shortened RLS. In contrast, wt cells, which have a longer RLS, are still phenotypically younger at 10 generations, which is reflected in their decreased resilience. The reason that this enhanced resilience does not lead to hypervirulence is most likely because young sir2 cells grow slower than wt cells in the nutrient-rich environment of the lung. The sir2 cells therefore cannot expand fast enough to undergo selection for older generations. In summary, these data highlight limitations of knockout mutants and demonstrate that the association of RLS and virulence cannot be investigated by a genetic approach because growth, albeit slightly, is impaired by loss of SIR2. The data, however, indicate that the mutant is potentially valuable because it can still cause death of Galleria and mice under certain experimental conditions. Therefore, it is justified to use the mutant as a control for off-target effects of Sir2p-modulating drugs.
In vitro Effects of Sir2p Agonists and Antagonists on RLS
We sought to explore if we could identify drugs that modulate RLS in an individual C. neoformans strain. First, we explored RLS modulation in vitro. For these experiments, six drugs were chosen. These included isonicotinamide (INAM), a nicotinamide isostere, chosen because it extends RLS in S. cerevisiae by alleviating nicotinamide, the feedback inhibitor to Sir2p's deacetylation function (McClure et al., 2012). Additional Sir2p activators examined were resveratrol, SirAct, and two small molecules (SRT1460 and SRT1720) that were developed originally by Sirtris (now GlaxoSmithKline) to activate the human homolog, Sirt1p. Lastly, we also included a Sir2p inhibitor (sirtinol).
These studies found that all Sir2p agonists, except SRT1720, had a prolongevity effect on C. neoformans (Figure 2). Specifically, INAM extended the median RLS of H99 cells restricted for nicotinic acid (NA) by 46% from 28 to 41 generations (p < 0.0001) (Figure 2A). Resveratrol extended the median RLS of H99 cells by 32% from 31 to 41 generations (p = 0.006) (Figure 2B). SirAct extended the median RLS of H99 cells by 41% from 27 to 39 generations (p = 0.01) (Figure 2C). Finally, SRT1460 extended the median RLS of H99 cells by 39% from 27 to 37.5 generations (p = 0.04) (Figure 2D). SRT1720 was found to be toxic to cells at nM concentrations, and when titrated down to non-toxic concentrations of 1-10 pM, it was found to have no significant effect on the RLS of H99 cells (Figure 2E). As would be expected for Sir2p agonists, RLS prolongation required the presence of SIR2, and therefore no prolongevity effect was observed with sir2 cells for any of the RLS drugs tested (Figure 2). In addition, we documented the opposite effect on RLS with sirtinol, a Sir2p inhibitor. Exposure to this drug shortened the median RLS of H99 cells by 20% from 31 to 25 generations (p = 0.006) in a Sir2p-dependent manner (Figure 2F).
Effects of RLS Modifying Drugs on Virulence in Galleria
Next the effect of RLS-modifying drugs was tested in vivo. We chose the Galleria infection model because phagocytic cells constitute the predominant host response in this model (Aperis et al., 2007), and drug treatment is easily executed. Older C. neoformans cells are not truly more virulent (Bouklas et al., 2013); rather, they should be viewed as more resilient to clearance by host cells, and therefore persistence of older cells is the consequence of selective preferential killing of younger cells. We have previously shown that old cells are more resistant to phagocytosis and killing by macrophages (Bouklas et al., 2013). We explored if RLS-modulating drugs would affect clearance. Importantly, neither the RLS drugs nor PBS alone had an effect on non-infected waxworms (Supplementary Figure 4). When INAM was given on alternate days to waxworms infected with H99 at an inoculum of 2 × 10⁴ cells, we observed decreased virulence relative to sham-treated waxworms, measured as a significantly prolonged survival of the waxworm (Figure 3A). Similar decreased virulence of H99 was observed in waxworms with resveratrol treatment (Figure 3A). To control for off-target effects, we also tested the drugs in Galleria infected with sir2 cells, where no effect would be expected if the RLS-modulating drug works strictly through Sir2p. As expected because of slightly impaired fitness, the sir2-infected waxworms died later compared to wt H99-infected waxworms (Figure 3B). However, these experiments confirmed specificity because the effect of RLS-modulating drugs on waxworm survival was dependent on Sir2p, and neither INAM- nor resveratrol-treated waxworms infected with sir2 cells exhibited a survival difference when compared to untreated controls. Consistent with our hypothesis that prolongation of RLS lessens virulence, whereas shortening of RLS enhances virulence, we found that treatment with the Sir2p inhibitor, sirtinol, had the opposite effect and enhanced virulence, and therefore decreased waxworm survival was documented (Figure 3A). This drug effect was again SIR2-dependent (Figure 3B). For waxworms treated with SirAct and SRT1460, significantly increased waxworm survival was also observed (Figure 3C), whereas toxicity or no effect was seen with SRT1720 (data not shown). The latter was predicted by the in vitro RLS data (Figure 2). However, for SRT1460 and SirAct, we found that changes in virulence were not Sir2p-dependent because drug-treated waxworms infected with sir2 cells also lived significantly longer compared to sham-treated controls (Figure 3D; Supplementary Figure 4).
Effect of RLS Drugs in Combination with Antifungals
Next we tested the effect of RLS-modulating drugs in combination with antifungal therapy. Previous work demonstrated that sensitivity of C. neoformans cells to amphotericin B (AMB) is dependent on the generational age of the cell, and enhanced antifungal efficacy is observed in younger cells (Jain et al., 2009a; Bouklas et al., 2013). Therefore, we tested if concomitant treatment with Sir2p agonists would enhance antifungal efficacy in H99-infected waxworms that were treated with subtherapeutic levels of AMB. Both INAM (Figure 4A) and resveratrol (Figure 4C) resulted in significantly increased waxworm survival relative to treatment with the RLS drug or the antifungal alone. Again, no enhanced survival benefit was observed in the mutant-infected waxworms that received the RLS drug and AMB treatment in combination (Figures 4B,D); here, only AMB treatment prolonged survival. No significant increase in survival was observed with sirtinol-treated waxworms that received AMB (Figure 4E), and this effect was SIR2-dependent (Figure 4F).
Effect of RLS Drugs on SIR2 Expression
Lastly, quantitation of SIR2 expression by RT-PCR in strains grown with the Sir2p agonists or antagonist confirmed a drug-dependent SIR2 regulation in H99. It is noteworthy that RLS-modulating drugs do not affect doubling times significantly (Table 2). Specifically, compared to untreated H99 cells, significant upregulation of SIR2 expression was documented in response to INAM, resveratrol, and SirAct treatment in H99, and significant downregulation of SIR2 expression was documented in response to sirtinol (Figure 5). SIR2 expression was not significantly regulated in response to SRT1460 or SRT1720.
Sir2p is Involved in Several Biological Processes in C. neoformans Consistent with the Pleiotropic Effects Observed from SIR2 Loss
Lastly, the transcriptome of sir2 and wt H99 cells grown in rich media or under more physiologic CR media was compared by RNA sequencing (GEO accession #GSE74298). Grown in rich media (RM: YEP + 2% glucose), 25 genes were down- and 3 genes were upregulated in sir2 cells (fold change ≥ 1.5-fold, p < 0.05). As expected, more pronounced transcriptional regulation was found in CR media (CR: YEP + 0.05% glucose), where 232 genes were down- and 80 genes upregulated (FC ≥ 1.5-fold, p < 0.05). A heatmap was generated (Figure 6), and GO analysis was performed to assign the genes affected by SIR2 loss to GO categories. In rich media, enriched
GO categories included the biological processes of vesicle- and ER-to-Golgi vesicle-mediated transport, intracellular protein transport, and the cellular components of Golgi apparatus, cell division site, and cell tip. In CR growth conditions, GO analysis assigned genes affected by SIR2 loss to the following distinct biological processes: transmembrane transport, transcription and translation, molecular functions of transporter activity, and the cellular components of cell membrane, ribosome, nucleolus, and mitochondria. Although minimal overlap with respect to specific genes (2%) was noted for the transcriptomes under RM and CR, some common GO pathways overlapped (21%), including transmembrane and transport activity, translation, and the cellular components of mitochondria and ribosomes. Comparison with S. cerevisiae transcriptome data (Cherry et al., 2012) identified common GO categories, such as negative regulation of DNA recombination, regulation of DNA-templated transcription, NAD-dependent histone deacetylase activity, and the nucleolus (asterisk in Figure 6). Thus, a role for SIR2 in transport and in several intracellular components, particularly under nutrient-limiting conditions, is suggested, which explains the pleiotropic phenotype observed in the mutant.
DISCUSSION
Recent investigations on replicative aging in C. neoformans (Jain et al., 2009a; Bouklas et al., 2013) indicate that older C. neoformans cells of advanced replicative age are selected in vivo during chronic infection. Furthermore, data indicated that old cells are selected because their phenotype is more resilient in the setting of chronic disease. Accordingly, we demonstrated that 10-generation-old cells were more resistant to antifungal therapy and phagocytic killing. In this study, we present evidence that modulation of RLS in the C. neoformans strain, H99, through treatment with Sir2p agonists changes the vulnerability/resilience of the pathogen population, and therefore an impact on virulence, as well as sensitivity to antifungal therapy, is observed. The question of whether a shortened or extended RLS confers a benefit to a C. neoformans strain is challenging to investigate because lifespan is a dynamic trait that has to first emerge; in addition, lifespan (Jazwinski, 2004; Steinkraus et al., 2008; Bouklas et al., 2013, 2015) and virulence (Adler et al., 2011; Haynes et al., 2011; Kronstad et al., 2012; Zaragoza and Nielsen, 2013; Sabiiti et al., 2014) are regulated by multiple factors.
FIGURE 5 | SIR2 expression was differentially regulated by chemical drugs. H99 cells grown in the presence of Sir2p agonists (INAM, resveratrol, SirAct) showed a higher expression of SIR2 as measured by RT-PCR, compared to cells grown in the absence of agonists, or in the presence of the Sir2p antagonist, sirtinol. RT-PCR was performed in quadruplicate and normalized to β-actin. p-values were calculated by Student's t-test. *p < 0.05, **p < 0.01.
A straightforward reductionist approach, where RLS is modified through loss of a longevity-promoting gene, is therefore not feasible. In addition, a genetic approach is hampered by the fact that longevity-regulating genes in S. cerevisiae also regulate fitness. This is also true for known homologs in C. neoformans; for instance, the tor1 mutant is not viable (Cruz et al., 2001), the ras1 mutant grows slower (Waugh et al., 2002), and the sch9 mutant has an altered polysaccharide capsule (Wang et al., 2004); these would not constitute adequate targets to answer that question. Therefore, changes in virulence in these C. neoformans mutants cannot be related to changes in RLS.
SIR2 homologs are among the most intensely investigated lifespan-modulating genes. In S. cerevisiae, loss of SIR2 shortens RLS, while overexpression results in extension of RLS (Kaeberlein et al., 1999). SIR2 prolongevity effects have been reported in other eukaryotes (Tissenbaum and Guarente, 2001; Wood et al., 2004; Guarente, 2007), and drugs that alter Sir2p function are being actively pursued (Baur et al., 2012). However, Sir2p is a histone deacetylase (Guarente and Kenyon, 2000) that regulates over 100 genes in S. cerevisiae (Cherry et al., 2012). Pleiotropic effects in the sir2 mutant also include loss of fitness.
In this study, we first investigated the loss of Sir2p function in the standard serotype A VNI strain, H99, which is derived from a patient and used for experiments by the majority of laboratories that investigate C. neoformans. Consistent with RLS studies in S. cerevisiae (Lin et al., 2000; Kaeberlein et al., 2004), sir2 C. neoformans cells exhibited a significantly shortened median RLS that was regained with reconstitution.
FIGURE 6 | A loss of SIR2 in calorie-restricted media affects multiple biological pathways consistent with the observed pleiotropic phenotype. A heatmap of transcriptomes shows upregulation of ribosome biogenesis genes, transcription and translation, and NAD-regulating genes, as well as downregulation of mating genes. Common gene ontology (GO) categories were also found from similar transcriptome mining in Saccharomyces cerevisiae and are highlighted with an asterisk.
As predicted by S. cerevisiae and other model organisms of aging (Wood et al., 2004; Guarente and Picard, 2005; Fontana et al., 2010; Skinner and Lin, 2010), CR was found to prolong lifespan in C. neoformans as well. CR is modeled in yeast by reduction of glucose content from 2 to 0.05% (Lin et al., 2002; Kennedy et al., 2005). Extension of lifespan through CR was dependent on SIR2 in H99. Accordingly, SIR2 was upregulated in H99 cells under CR. It is noteworthy that so far, the majority of lifespan studies under CR have been conducted in primarily fermentative Crabtree-positive yeasts, such as S. cerevisiae and Schizosaccharomyces pombe, and only a few studies in the primarily respiratory Crabtree-negative yeasts, Candida albicans and Kluyveromyces lactis, are emerging (Skinner and Lin, 2010). C. neoformans is an obligate aerobic yeast, but can tolerate some hypoxic stress (Chun et al., 2007). CR in Crabtree-negative and in obligate aerobic yeasts does not activate a fermentation-to-respiration switch. Thus, C. neoformans provides a unique platform to study the respiration switch-independent mechanisms of CR. Sirtuins have been implicated in a wide range of cellular processes beyond aging. Our transcriptome data confirm this. We found that genes involved in many diverse biological processes are regulated. As expected, regulation is greatly enhanced under CR conditions. Important virulence-associated traits that were altered in H99 sir2 cells include a mating defect and impaired growth, which is physiologically more relevant in the host environment. Notably, the growth defect was not significant under CR growth conditions. Other virulence-associated properties, such as melanization, H₂O₂ resistance, phagocytosis, and killing in macrophages, were not affected by loss of SIR2 in young cells. The capsule difference was judged as minor, and capsule was inducible regardless of SIR2 loss. Impaired growth in the mutant cells underscores the aforementioned predicament, namely that RLS mutants cannot be used to determine if the length of RLS affects virulence. Fortunately, the sir2 mutant was virulent in the G. mellonella infection model despite mildly attenuated growth, which allows us to use this mutant as a valuable control. Even in rodents, the sir2 mutant could be used as a control in the CNS model because it is virulent and can cause death in mice.
Over the past two decades, genetic approaches using diverse organisms have identified hundreds of aging genes and highlighted evolutionary conservation among longevity pathways between disparate species (Managbanag et al., 2008). Although the major driving force of aging research is its application to novel therapies against chronic disease and direct extension of human lifespan, our intention was to test SIR2-modifying anti-aging drugs with respect to their ability to alter the median RLS of eukaryotic pathogen populations. Given that cells of advanced generational age exhibit enhanced resistance to phagocytic killing, and to antifungals (Jain et al., 2009a; Bouklas et al., 2013), it was reasonable to hypothesize that in a C. neoformans strain with a prolonged RLS, the resilient old-age phenotype would emerge later, and therefore would contribute to decreased resilience and virulence of the pathogen population (Figure 7).
Significantly increased longevity was achieved in H99 cells in vitro with four of the five tested Sir2p agonists. As expected, the opposite effect on RLS was observed with the Sir2p inhibitor, sirtinol. SRT1460 and SRT1720 are high-affinity small molecules that were designed to bind to the human Sir2p analog, Sirt1p (Milne et al., 2007), and therefore it was not surprising that SRT1720 did not prolong RLS in C. neoformans. SRT1460 had a statistically significant prolongevity effect. Neither induced fungal SIR2 in vitro. Resveratrol, a stilbenoid, is an established anti-aging drug that has a significant prolongevity effect on the RLS of S. cerevisiae (Howitz et al., 2003), and also on other model organisms (Baur et al., 2006; Bass et al., 2007). In C. neoformans, we documented a significant prolongevity effect on RLS as well. The prolongevity effect of resveratrol in S. cerevisiae is SIR2-independent (Kaeberlein and Kennedy, 2007; McClure et al., 2012), but this is strain-dependent. Our data, however, indicate that for the C. neoformans strain, H99, the prolongevity effect was dependent on SIR2. Future studies with the sir2 mutant in other C. neoformans strain backgrounds would have to be done to confirm consistent dependence on SIR2.
INAM is a nicotinamide isostere that extends RLS in S. cerevisiae by alleviating nicotinamide, an NAD+ precursor and feedback inhibitor of Sir2p's deacetylation function (McClure et al., 2012). This drug is thought to extend RLS only through the action of Sir2p in S. cerevisiae. In C. neoformans, this was confirmed. SirAct is a carboxamide, which was developed to treat aging-related diseases in humans (Nayagam et al., 2006). Our results demonstrate that this drug also has a significant Sir2p-dependent effect on C. neoformans RLS.
Based on our in vitro data, we sought to test the effect of RLS-modulating drugs in an in vivo virulence model in Galleria. Indeed, these data confirmed that RLS-modulating drugs could have an impact on virulence. We demonstrated that RLS-prolonging drugs increase survival and decrease virulence in Galleria, whereas RLS-shortening drugs decrease waxworm survival when infected with H99 wt cells. In addition, our experiments showed that RLS-modulating drugs could enhance the antifungal efficacy of amphotericin B in C. neoformans-infected Galleria. This effect was not seen with sirtinol, which shortens RLS, or in any of the cases where waxworms were infected with sir2 cells instead of the wt. We propose that prolongation of RLS alters vulnerability in vivo as it shifts the median RLS to a younger pathogen population, which has not yet acquired the old-age phenotype.
One concern is that the drugs could affect the pathogen's virulence independently of RLS. Resveratrol, for instance, inhibits laccase activity and melanization in C. neoformans cells (Fowler et al., 2011). However, the fact that sir2-infected Galleria do not exhibit the same changes in virulence suggests that the decreased virulence is dependent on fungal-specific Sir2p. It is also noteworthy that doubling times are not significantly affected by drug treatment. RNA transcriptome comparison of H99 in CR conditions demonstrates upregulation of SIR2 under CR. Most importantly, published transcriptome data from CNS-derived C. neoformans cells (Chen et al., 2014) also demonstrate that SIR2 is upregulated in yeast derived from the CSF. Hence, future studies in rodent models constitute a rational approach to further explore this expanding class of drugs (Zhai et al., 2012). An additional concern is that the drugs could have an independent effect on mammalian cells, which share the homologous Sirt1p. Recent successful designs of human-specific Sirt1p agonists suggest that with proper medicinal chemistry (Nayagam et al., 2006; Milne et al., 2007), it may be possible to produce fungal-specific Sir2p analog(s) that have minimal off-target effects. Our data with Galleria suggest that SRT1460 has off-target and Sir2p-independent effects on the host that may affect survival. This effect, however, is not observed for INAM or resveratrol.
Finally, our data further support a more complex understanding of pathogenesis, whereby the median RLS of a strain may not matter per se, but age-related resilience should be viewed as an emerging virulence trait of a pathogen population that may come as a trade-off for fitness. This naturally acquired old-age phenotype, once selected, could become dominant in the pathogen population, and indeed impact outcome and affect persistence (Figure 7). Our data strongly suggest that this process can be harnessed and targeted with drugs, which opens up a new class of antifungal drug targets. Importantly, this novel concept of generational phenotypes and their selection within a pathogen population may be relevant to other eukaryotic pathogen populations, many of which cause chronic diseases that are notoriously difficult to treat, and for which new drug targets are desperately required. Aging-related phenotypes are not present in overnight cultures and only become relevant in the host environment, because a highly selective host response has to be present in vivo to drive selection and permit the emergence of older cells in the host.
AUTHOR CONTRIBUTIONS
BF, TB, and NJ conceived and designed the work. TB performed the experiments and collected the data. BF, TB, and NJ performed data analysis and interpretation. BF and TB drafted and revised the article. BF, TB, and NJ gave final approval of the version to be published.
FUNDING
BF is supported by NIH award R01 AI059681. | 2017-05-04T00:01:28.275Z | 2017-01-30T00:00:00.000 | {
"year": 2017,
"sha1": "14da9e4d53dc6223a59be96676c7a7edfc7a3de0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.00098/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "14da9e4d53dc6223a59be96676c7a7edfc7a3de0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14443393 | pes2o/s2orc | v3-fos-license | Searches for the Higgs Boson and Supersymmetry at the Tevatron
The D0 and CDF experiments at the proton-antiproton collider Tevatron have extensively searched for the Higgs boson and signals of supersymmetry using a wide range of signatures. The status of these searches is reviewed with a focus on recent measurements.
Introduction
At the Tevatron collider, one of the main challenges is the search for the Higgs boson and for supersymmetric particles. The high integrated luminosities being collected by both the CDF and D0 experiments enable searches with unprecedented sensitivity. At the beginning of 2007, both experiments had recorded data sets of more than 2 fb⁻¹. Recent results obtained with up to 1.1 fb⁻¹ are presented in this note. All limits quoted are at 95% confidence level.
Searches for the standard model Higgs boson
In the standard model (SM) the Higgs mechanism is responsible for the electroweak symmetry breaking, thereby generating the masses of the Z and W bosons. As a consequence of this mechanism a single neutral scalar particle, namely the Higgs boson, remains after the symmetry breaking. Assuming the validity of the standard model, global fits to the electroweak data prefer a relatively low mass for the Higgs boson, mH = 85 +39/−28 GeV 1), while direct searches at the LEP collider set a lower bound on the mass of 114.4 GeV 2).
At low masses, mH ≲ 135 GeV, the SM Higgs boson dominantly decays via H → bb. For the main production channel, the gluon-gluon fusion process gg → H, this leads to a signature that cannot be separated from the irreducible background of QCD production of bb pairs. Therefore, at the Tevatron the highest sensitivity for low-mass Higgs bosons is obtained from the associated production of the Higgs boson with the weak bosons, i.e. WH and ZH. At high masses the SM Higgs boson predominantly decays into WW boson pairs, which has a manageable background for the gg → H production mode.
Low-mass Higgs boson, mH ≲ 135 GeV
Both the CDF and D0 collaborations searched for low-mass Higgs bosons using the WH → ℓνbb, ZH → ννbb, and ZH → ℓℓbb production and decay modes. Dominating backgrounds in these searches are the associated production of the weak bosons with bb pairs, Wbb and Zbb, as well as the associated production Wjj and Zjj with jets originating from light-flavor quarks, which are falsely identified as b-jets.
The CDF collaboration recently presented a search for the Higgs boson in WH → ℓνbb production based on an integrated luminosity of 1 fb⁻¹ 3). The event selection required a reconstructed electron or muon with a transverse momentum pT > 20 GeV, two jets with transverse energy ET > 15 GeV, and large missing transverse momentum E/T > 20 GeV. The jets were identified as originating from b quarks using secondary vertex (SV) and neural network (NN) tagging algorithms. A resonant peak in the dijet mass distribution, Mjj, indicative of H → bb was searched for. The Mjj distribution for events with two heavy-flavor jets identified using the SV tagger is shown in Fig. 1 together with the background prediction and the expected Higgs signal. Upper limits on the production cross sections, σ95, were derived as a function of the Higgs boson mass mH. For mH ∼ 115 GeV the cross-section limit from this measurement alone compared to the SM prediction, σSM, corresponds to a sensitivity of σ95/σSM ∼ 20. The search for ZH → ννbb production also has notable sensitivity to WH → ℓνbb, as the lepton might be undetected. Based on a data sample of 1 fb⁻¹, the CDF collaboration searched for the Higgs boson in events with large E/T and two jets, of which one was required to be tagged 4). In addition to Zjj, a large background contribution was found to be due to QCD multijet production. For mH ∼ 115 GeV a sensitivity of σ95/σSM ∼ 30 was separately obtained for ZH and WH production. Combining both production modes the sensitivity was σ95/σSM ∼ 16.
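The σ95/σSM figures quoted here come from full likelihood analyses including systematic uncertainties, but the underlying idea can be illustrated with a toy Bayesian counting experiment: given n observed events and an expected background b, the 95% CL upper limit on the signal yield follows from the Poisson posterior with a flat prior. All inputs below (event counts, background, efficiency, luminosity) are invented placeholders, not CDF or D0 numbers.

```python
import numpy as np
from scipy.stats import poisson
from scipy.integrate import quad
from scipy.optimize import brentq

def signal_upper_limit(n_obs, bkg, cl=0.95):
    """Bayesian upper limit on the signal yield s (flat prior, s >= 0)."""
    norm, _ = quad(lambda s: poisson.pmf(n_obs, s + bkg), 0, np.inf)
    cdf = lambda s_up: quad(lambda s: poisson.pmf(n_obs, s + bkg), 0, s_up)[0] / norm
    return brentq(lambda s_up: cdf(s_up) - cl, 0.0, 100.0 + 10.0 * n_obs)

# Toy inputs: 25 events observed over an expected background of 20.
s95 = signal_upper_limit(25, 20.0)
lumi_eff = 1000.0 * 0.01        # 1 fb^-1 (= 1000 pb^-1) times a 1% efficiency
sigma95 = s95 / lumi_eff        # cross-section limit in pb
print(f"s95 = {s95:.1f} events, sigma95 = {sigma95:.2f} pb")
```

Dividing such a σ95 by the predicted σSM at a given mH yields the quoted sensitivity ratio.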
The ZH → ℓℓbb channel is disfavored due to the low Z → ℓℓ branching fraction. Nevertheless, the clear event topology provides good background separation. The D0 collaboration recently presented a search in this channel based on an integrated luminosity of 0.9 fb⁻¹ 5). The analysis required a reconstructed ee or μμ pair with a dilepton mass consistent with the Z boson mass, and at least two jets which were required to be identified as b jets using the NN tagger. For central pseudorapidities, |η| < 1.5, a b-tagging efficiency of 72% at a light-jet fake rate of 4% was obtained. This search and a similar CDF measurement 6) were found to have sensitivities σ95/σSM ∼ 25-30 at mH ∼ 115 GeV.
High-mass Higgs boson, mH ≳ 135 GeV
The dominant decay mode for mH ≳ 135 GeV is H → WW(*). W decays into electrons or muons are used to suppress the QCD multijet background. As the Higgs boson has spin 0, the final-state leptons are predominantly produced with small azimuthal separation due to spin correlations between them. Therefore, the Higgs signal can be discriminated from the electroweak production of WW boson pairs.
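For illustration only, the azimuthal separation that serves as the spin-correlation discriminant can be computed as below; this is a generic sketch, not the experiments' analysis code.

```python
import numpy as np

def delta_phi(phi1, phi2):
    """Azimuthal separation of two leptons, wrapped into [0, pi]."""
    dphi = np.abs(phi1 - phi2) % (2.0 * np.pi)
    return np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)

# H -> WW candidates favour small delta_phi; the WW continuum is flatter.
print(delta_phi(0.3, -0.2))   # ~0.5 rad, signal-like
print(delta_phi(3.0, -3.0))   # ~0.28 rad after wrapping (phi is cyclic)
```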
The D0 collaboration performed a preliminary search based on 0.95 fb⁻¹ using the ee, eμ, and μμ final states 7). At mH ∼ 160 GeV, where this channel has optimal sensitivity, a cross-section ratio σ95/σSM ∼ 4 was obtained, which excludes models with four fermion families 8) for mH ∼ 150-185 GeV.
Combined limits on Higgs boson production
The CDF and D0 limits on SM Higgs production were combined for the first time in summer 2006 9). Fig. 1 shows the cross-section ratio σ95/σSM as a function of the assumed Higgs boson mass mH. The combination does not yet include all searches presented above. After the conference, a new combination with significantly improved cross-section limits was obtained, which also includes additional results obtained since then.
Searches for neutral supersymmetric Higgs bosons
Models with two Higgs doublets, such as the minimal supersymmetric extension of the standard model (MSSM), predict five physical Higgs bosons, of which three (h, H, A) have neutral electric charge. The phenomenology at large tan β (the ratio of the Higgs vacuum expectation values) is remarkable: the cross section for the gluon-gluon fusion process gg → H and the associated production bbH is largely enhanced, and the CP-odd A boson is nearly mass-degenerate with either the light or heavy CP-even state, h or H, respectively. The leading decay modes of the two mass-degenerate states, both denoted as φ, are φ → bb (∼ 90%) and φ → ττ (∼ 10%). Despite the smaller branching fraction, Higgs searches in the di-τ channel have the advantage of a much smaller background level from multi-jet production.
Supersymmetric Higgs in multi-jet events: bbφ → bbbb
The D0 collaboration searched for the supersymmetric Higgs boson in the channel bbφ → bbbb using the dijet mass distribution in events with three identified heavy-flavor jets. The published analysis 10) based on an integrated luminosity of 260 pb⁻¹ excludes a region at high tan β, e.g. for mA ∼ 120 GeV the constraint on tan β is tan β ≲ 50-60 (depending on the assumed mixing in the scalar top quark sector). The preliminary update based on 0.9 fb⁻¹ found exclusion limits improved by about a third 11).
Supersymmetric Higgs decaying to tau pairs: φ → ττ
Both the CDF and D0 collaborations searched for the MSSM Higgs boson decaying via φ → ττ using data samples of 1 fb⁻¹ each. Whereas the CDF collaboration analyzed τ decays leading to eμ, eτh, and μτh final states 12) (with τh denoting hadronically decaying τ's), the D0 selection 13) required one τ decaying into a muon. The CDF collaboration observed a small excess of events (< 2σ, only in the eτh and μτh channels) in the visible mass distribution, which approximates the mass of the hypothetical di-τ resonance. This non-significant excess was not confirmed by the D0 search. The exclusion regions in the plane given by mA and tan β are shown in Fig. 2. The exclusion regions depend only very mildly on assumptions on the sign of the Higgs mass term μ and the mixing in the scalar top quark sector.
Searches for supersymmetry
Supersymmetry (SUSY) is one of the most appealing extensions of the SM, as it solves the hierarchy problem and could provide a candidate for cold dark matter. Supersymmetric models predict the existence of scalar leptons and quarks and spin-1/2 gauginos as super-partners of the standard model leptons, quarks and gauge bosons. R-parity is introduced as a new multiplicative quantum number to differentiate between standard model (R = 1) and supersymmetric (R = −1) particles. As a consequence of the assumption of R-parity conservation, supersymmetric particles are produced in pairs and the lightest supersymmetric particle (LSP) needs to be stable. In supersymmetric models inspired by supergravity, the lightest neutralino, χ̃₁⁰, which is a mixture of the super-partners of the neutral electroweak gauge and Higgs bosons, is usually assumed to be the LSP and is a candidate for cold dark matter. In the following, only searches for supersymmetry inspired by minimal supergravity (mSUGRA) and with the assumption of R-parity conservation are presented. Both the CDF and D0 collaborations have also performed many searches within other supersymmetric models.
Gaugino pair production
The associated production of a chargino-neutralino pair, χ̃₁±χ̃₂⁰, can lead to event topologies with three leptons, which have a low SM background. The third lepton might be relatively soft, depending on the mSUGRA parameter space.
Both the CDF and D0 experiments have searched for the tri-lepton signature taking into account all three lepton flavors and using integrated luminosities up to 1.1 fb⁻¹ 14, 15). The sensitivity could be increased by not requiring explicit lepton identification for the third lepton and by including final states consisting of a same-sign di-lepton pair. Both experiments derived limits on the cross section times branching fraction, shown in Fig. 3, which are compared to different mSUGRA-inspired scenarios to obtain lower bounds on the chargino mass.
Scalar quark and gluino production
If sufficiently light, squarks and gluinos could be produced in pairs at the Tevatron. If M(q̃) < M(g̃), mostly pairs of squarks would be produced, which decay via q̃ → qχ̃₁⁰, resulting in an event signature of two acoplanar jets and E/T. If M(g̃) < M(q̃), gluinos would decay according to g̃ → qq̄χ̃₁⁰ and their pair production would give topologies with many jets and E/T. In the case of M(g̃) ≈ M(q̃) and q̃g̃ production the final state is expected to often consist of three jets and E/T. The D0 collaboration searched for the production of squarks and gluinos using three different event selections which were targeted at the scenarios described above 16). The exclusion region in the plane given by the squark and gluino masses is shown in Fig. 4. For the most conservative assumptions (and for tan β = 3, A0 = 0, μ < 0) squark and gluino mass limits of mq̃ > 375 GeV and mg̃ > 289 GeV, respectively, were derived. When interpreting the cross-section limits within mSUGRA, the constraints on the common scalar and gaugino masses at the unification scale, m0 and m1/2, could be improved with respect to limits from LEP.
Scalar top and bottom quark production
Due to a possible large mixing between the super-partners of the left- and right-handed top (bottom) quarks, the lighter eigenstate of the scalar top (bottom) quark might be significantly lighter than the super-partners of the other quarks. Both experiments searched for the pair production of scalar bottom and scalar top quarks 17,18,19). The scalar bottom quarks were assumed to decay via b̃ → bχ̃₁⁰ and the scalar top quarks via the loop-induced decay t̃ → cχ̃₁⁰. Exclusion regions in the plane given by the sbottom (stop) and neutralino masses were derived, reaching mb̃ ≈ 220 GeV and mt̃ ≈ 130 GeV, respectively.
Conclusions and Perspectives
The CDF and D0 experiments at the Tevatron collider have performed a multitude of searches for the standard model and supersymmetric Higgs boson as well as for signals of supersymmetry. At the time of the conference, the searches for the SM Higgs boson, which include luminosities up to 1 fb⁻¹, reached a sensitivity of a factor 10 (3) times the SM expectation at mH ≈ 115 GeV (mH ≈ 160 GeV). Imminent improvements of the limits are expected from the increased luminosity and refinements in the b-tagging and the event selection. The "hint" of an MSSM Higgs boson at mA ≈ 160 GeV obtained by CDF was not confirmed by D0. No signal for supersymmetry has yet been found at the Tevatron, and stringent limits, which are significantly improved compared to Run I, were set. At the beginning of 2007 both experiments had recorded integrated luminosities exceeding 2 fb⁻¹ and are expected to collect much larger data sets during the full period of Run II. Thus, the sensitivity to the production of the Higgs boson and supersymmetric particles will substantially improve in the following years.
"year": 2007,
"sha1": "f4bc7110f25a4d65cb0eb11ea96f8b9b300e7b10",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "da578923f77d94f179c6ae7bfbac123e9e8b00f1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
102350148 | pes2o/s2orc | v3-fos-license | An integrated transcriptomics and proteomics analysis reveals functional endocytic dysregulation caused by mutations in LRRK2
Background: Mutations in LRRK2 are the most common cause of autosomal dominant Parkinson's disease, and the relevance of LRRK2 to the sporadic form of the disease is becoming ever more apparent. It is therefore essential that studies are conducted to improve our understanding of the cellular role of this protein. Here we use multiple models and techniques to identify the pathways through which LRRK2 mutations may lead to the development of Parkinson's disease. Methods: A novel integrated transcriptomics and proteomics approach was used to identify pathways that were significantly altered in iPSC-derived dopaminergic neurons carrying the LRRK2-G2019S mutation. Western blotting, immunostaining and functional assays including FM1-43 analysis of synaptic vesicle endocytosis were performed to confirm these findings in iPSC-derived dopaminergic neuronal cultures carrying either the LRRK2-G2019S or the LRRK2-R1441C mutation, LRRK2 BAC transgenic rats, and post-mortem human brain tissue from LRRK2-G2019S patients. Results: Our integrated -omics analysis revealed highly significant dysregulation of the endocytic pathway in iPSC-derived dopaminergic neurons carrying the LRRK2-G2019S mutation. Western blot analysis confirmed that key endocytic proteins including endophilin I-III, dynamin-1, and various RAB proteins were downregulated in these cultures and in cultures carrying the LRRK2-R1441C mutation, compared with controls. We also found changes in expression of 25 RAB proteins. Changes in endocytic protein expression led to a functional impairment in clathrin-mediated synaptic vesicle endocytosis. Further to this, we found that the endocytic pathway was also perturbed in striatal tissue of aged LRRK2 BAC transgenic rats overexpressing either the LRRK2 wildtype, LRRK2-R1441C or LRRK2-G2019S transgenes. Finally, we found that clathrin heavy chain and endophilin I-III levels are increased in human post-mortem tissue from LRRK2-G2019S patients compared with controls. Conclusions: Our study demonstrates extensive alterations across the endocytic pathway associated with LRRK2 mutations in iPSC-derived dopaminergic neurons and BAC transgenic rats, as well as in post-mortem brain tissue from PD patients carrying a LRRK2 mutation. In particular, we find evidence of disrupted clathrin-mediated endocytosis and suggest that LRRK2-mediated PD pathogenesis may arise through dysregulation of this process.
Introduction
Parkinson's disease (PD) is a common neurodegenerative disorder in which dopaminergic neurons of the substantia nigra pars compacta (SNpc) are lost, leading to the classic motor symptoms of the disease. Mutations in LRRK2 are the most common cause of late-onset autosomal dominant forms of PD and are clinically indistinguishable from sporadic cases (Funayama et al., 2005; Zimprich et al., 2004). Single nucleotide polymorphisms at the LRRK2 locus have also been identified as PD risk factors through genome-wide association studies and have recently been shown to lead to increased kinase activity in the sporadic disease (Chang et al., 2017; Di Maio et al., 2018; Nalls et al., 2014). The LRRK2 protein contains both kinase and GTPase domains as well as regions involved in protein-protein interactions. Multiple LRRK2 mutations have been associated with PD; these include the G2019S mutation in the kinase domain of the protein, and the R1441C mutation in the GTPase domain.
Clathrin-mediated endocytosis (CME) is an important cellular function required for the recycling of synaptic vesicles and certain plasma membrane components (Saheki and De Camilli, 2012). Neurons can undergo periods of extensive synaptic transmission, which is highly dependent on the fast and efficient recycling of synaptic vesicles. Previous work has shown that LRRK2 fractionates with important synaptic proteins such as synapsin, synaptophysin, NSF, dynamin-1, and VAMP2 (Arranz et al., 2014; Belluzzi et al., 2016; Carrion et al., 2017; Piccoli et al., 2014). Further to this, LRRK2 has been shown to directly interact with some key endocytic components including endophilin, dynamin and auxilin (Matta et al., 2012; Nguyen and Krainc, 2018; Piccoli et al., 2014). However, the extent to which mutations in LRRK2 alter the process of CME in dopaminergic neurons remains poorly described.
Here we present data from an integrated proteomic and transcriptomic analysis of induced-pluripotent stem cell (iPSC)-derived dopaminergic cultures from PD patients carrying the LRRK2-G2019S mutation, which reveals dysregulation of endocytosis in these cells. We then demonstrate that levels of the endocytic proteins clathrin and endophilin are reduced in iPSC-derived dopaminergic cultures from LRRK2-G2019S and LRRK2-R1441C mutation carriers, as well as in 22-month-old LRRK2 BAC transgenic rats carrying these same mutations. Furthermore, we identify clear changes in the levels of key RAB proteins in both models and demonstrate the detrimental functional impact of LRRK2 mutations on CME in iPSC-derived dopaminergic neurons. Finally, we demonstrate that clathrin and endophilin are both dysregulated in post-mortem striatal brain tissue from PD patients carrying the LRRK2-G2019S mutation. Together, these findings demonstrate that LRRK2 mutations lead to perturbations in CME, and present a plausible mechanism for the development of PD pathogenesis.
Participant recruitment and LRRK2 mutation screening
Participants were recruited through the Oxford Parkinson's Disease Centre Discovery clinical cohort and gave signed informed consent to mutation screening and derivation of iPSC lines from skin biopsies (Ethics committee: National Health Service, Health Research Authority, NRES Committee South Central, Berkshire, UK, REC 10/H0505/71). All UK patients fulfilled the UK Brain Bank diagnostic criteria for clinically probable PD at presentation. Healthy controls were age-matched to within a decade where possible.
Culture and reprogramming of primary fibroblasts
Low passage fibroblast cultures were established from participant skin punch biopsies and reprogrammed using the CytoTune-iPSC Sendai Reprogramming kit (Invitrogen). JR207-3 was reprogrammed using an alternative Sendai system SeVdp(KOSM)mir301L, which contains a target for mir302, which is expressed in pluripotent cells, but not in the originating fibroblasts, ensuring effective removal of exogenous genetic material within a few passages. Retrovirally reprogrammed lines used in this study had been reprogrammed using Yamanaka reprogramming vectors, as described previously (Hartfield et al., 2012). Clones were transitioned from initial culture on mitotically inactivated CF1 Mouse Embryonic Feeders (Millipore) to feeder-free culture in mTeSR™ medium (StemCell Technologies), on hESC-qualified Matrigel-coated plates (BD).
Characterization of previously unpublished iPSC lines was performed on bulk frozen stocks (Fig. S2). Genome integrity and cell identity tracking used the Human CytoSNP-12v2.1 beadchip array or OmniExpress24 array (Illumina) on genomic DNA (All-Prep kit, Qiagen), analysed with GenomeStudio and Karyostudio software (Illumina).
Differentiation of iPSCs to dopaminergic neurons
Differentiation of iPSCs was carried out as previously described (Beevers et al., 2017), with the minor modification that, at day 20 of the protocol, cells were plated at 5 × 10⁵/cm². Cells were maintained in final medium as previously described until 35 or 56 DIV.
Sample preparation and LC-MS/MS analysis for proteomics analysis
Cells were lysed in denaturing urea lysis buffer (9 M urea, 0.1 M Tris-HCl pH 8.5, 2 mM EDTA, 1× PhosStop (Roche), and phosphatase inhibitor cocktails 2 and 3 (Sigma)), then sonicated. Lysates were processed and labelled with TMT-10plex reagents (ThermoFisher Scientific). Samples were then fractionated using a 4.6 mm × 250 mm Extend-C18 column (Agilent) on an Agilent 1200 Series HPLC. The fractions were pooled in a checkerboard manner, dried down, desalted on an Empore C18 stage tip, and re-suspended in 0.1% formic acid.
LC-MS/MS analysis
LC-MS/MS analysis was performed using a QExactive HF mass spectrometer (ThermoFisher Scientific) coupled to an EASY-nLC 1000 system (ThermoFisher Scientific). Peptides were separated on a 75 μm × 50 cm EASY-Spray analytical column (Thermo Fisher Scientific) at 50°C. The mass spectrometer was set to acquire in a data-dependent mode (Top10). Full scans were acquired at 60,000 resolution, with a target of 3 × 10⁶ ions and a maximum injection time of 20 ms. The most intense ions were fragmented by HCD (NCE 33%), and MS2 scans were acquired at 60,000 resolution, with a target of 1 × 10⁶ ions and a maximum injection time of 60 ms. MaxQuant version 1.5.3.30 was used to process MS data. The false discovery rate (FDR) was set at < 0.01, the enzyme was set to trypsin, and missed cleavages were set at < 2. The Human UniProt FASTA database (March 2015) was used for peptide identification, with cysteine carbamidomethylation as a fixed modification and N-acetylation and oxidation of methionine as variable modifications. TMT quantification was performed by MaxQuant (Max Planck Institute of Biochemistry), using correction factors supplied with TMT reagents, reporter mass tolerance set to 0.01 Da, and a parent ion fraction (PIF) filter at 0.75.
Bioinformatics analysis of proteomics data
Protein lists exported from MaxQuant software were used for statistical analysis. The data normalization, principal component analysis (PCA), and statistical analysis of proteomics data were performed using the R programming language (version 3.4.0, www.r-project.org). The raw data were normalised to the median intensity of all proteins within the TMT 10-plex set. The normalised intensity for each protein was then transformed to a relative ratio by dividing by the mean normalised intensity of the protein across the TMT 10-plex set. The LIMMA package (Ritchie et al., 2015) was used for statistical analysis of differential abundance. The p-values were adjusted for multiple comparisons using the Benjamini and Hochberg method (Benjamini et al., 1995); a protein was considered significantly different between groups if it had an FDR-adjusted p-value < 0.05.
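As an illustration of the pipeline just described, a minimal R sketch is given below; the input matrix name and the 5-vs-5 group layout are assumptions made for illustration, not objects from the study.

```r
# Minimal sketch, assuming a proteins x samples matrix of raw TMT reporter
# intensities named 'tmt_intensities' and a 5-vs-5 group layout (both assumed).
library(limma)

# 1) normalise each sample to the median intensity of all proteins in the set
med  <- apply(tmt_intensities, 2, median, na.rm = TRUE)
norm <- sweep(tmt_intensities, 2, med, "/")

# 2) relative ratio: divide each protein by its mean across the 10-plex set
ratio <- norm / rowMeans(norm, na.rm = TRUE)

# 3) LIMMA moderated statistics with Benjamini-Hochberg adjustment
group  <- factor(c(rep("control", 5), rep("G2019S", 5)))
design <- model.matrix(~ group)
fit    <- eBayes(lmFit(log2(ratio), design))
res    <- topTable(fit, coef = 2, adjust.method = "BH", number = Inf)

# significantly different proteins at the stated threshold
sig <- res[res$adj.P.Val < 0.05, ]
```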
RNA library preparation for RNA sequencing
RNA extraction was conducted using the RNeasy Micro kit (QIAgen). All RNA used for analysis had a RIN of 8.8 or higher. 500 ng of high-quality RNA was used as starting material for poly-A library construction. Automated poly-A library construction was completed using the TruSeq Stranded mRNA Sample Preparation kit (Illumina). Individual libraries were quality-controlled for size distribution and concentration using a LabChip GX and the KAPA library quantification kit (Kapa Biosystems). Pooled libraries were clustered on a cBot (Illumina) using the HiSeq PE Cluster kit V4 (Illumina) and sequenced on a HiSeq 2500 Sequencing system (Illumina) using the HiSeq SBS kit V4 (Illumina) with 50/50 paired-end sequencing at a read depth of approximately 30 million fragments in high output mode. Sequencing run quality control parameters were uploaded to BaseSpace (Illumina) and reviewed. Completed runs were considered high quality if > 85% of bases were above Q30 and the Cluster Pass Filter was > 85%.
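The run-level acceptance rule stated above can be written as a one-line check; a hedged R sketch follows, with hypothetical argument names and the metrics expressed as fractions assumed to be read from the run summary.

```r
# Sketch of the stated QC rule; argument names are hypothetical and the
# metrics are assumed to be taken from the sequencing run report.
run_passes_qc <- function(q30_fraction, cluster_pass_filter) {
  q30_fraction > 0.85 && cluster_pass_filter > 0.85
}
run_passes_qc(0.92, 0.88)  # TRUE: this run would be considered high quality
```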
Bioinformatics analysis of transcriptomics data
Sequencing reads were aligned to the reference genome (hg38) using the OmicSoft Sequence Aligner (OSA) (Hu et al., 2012). Quality control for the sequence alignment involved the analysis of sequence quality, GC content, and 5′-3′ gene body coverage. Aligned reads were then counted against gene model annotation (gencode v23) to obtain gene-level expression values using RSEM (Li and Dewey, 2011). DESeq2 (Love et al., 2014) was used for gene expression normalization. The regularised log transformation function in DESeq2 was used to transform the raw count data to the log2 scale, minimising differences between samples for rows with small counts (transcripts with low expression), and normalising to library size. These values were used to perform PCA analysis for biological QC and downstream differential analysis. The DESeq2 generalized linear model (GLM) was used for differential analysis. Transcripts with > 0.5 Fragments per Kilobase of transcript per Million mapped reads (FPKM) were considered to be robustly detected. The differentially expressed gene (DEG) signature was defined using the following criteria: FPKM > 0.5, false-discovery-rate (FDR)-adjusted p-value < 0.05 and absolute fold change (FC) > 1.5.
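A minimal R sketch of this DESeq2 workflow is given below; the count matrix, FPKM matrix and sample table are placeholder objects, and the column name 'genotype' is an assumption.

```r
# Sketch of the described DESeq2 analysis; 'counts', 'sample_info' and
# 'fpkm_matrix' are assumed placeholder objects, not the study's data.
library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = counts,       # RSEM gene counts
                              colData   = sample_info,  # has a 'genotype' column
                              design    = ~ genotype)
dds <- DESeq(dds)                  # GLM-based differential analysis

rld <- rlog(dds)                   # regularised log transform for QC
plotPCA(rld, intgroup = "genotype")

res <- results(dds)                # BH-adjusted p-values in res$padj

# DEG signature as defined in the text: FPKM > 0.5, adjusted p < 0.05, |FC| > 1.5
detected <- rowMeans(fpkm_matrix) > 0.5
deg <- which(detected &
             res$padj < 0.05 &
             abs(res$log2FoldChange) > log2(1.5))
```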
Combined bioinformatics analysis of proteomics and transcriptomics data for dual omics analysis
Proteomics data were multiplied by 1.4 to compensate for TMT-10plex labelling compression. Transcriptomics and proteomics datasets from 35 and 56 DIV for iPSC-derived dopaminergic neuronal cultures were combined to produce integrated FC and adjusted p-values. Briefly, if a protein was significantly differentially expressed at 35 DIV, then 35 DIV protein data was used; if not, and the protein was significantly differentially expressed at 56 DIV, then 56 DIV protein data was used. If the protein was not significantly differentially expressed at either time point, and the gene was significantly differentially expressed at 35 DIV, then 35 DIV gene data was used. If this also was not significant, but significantly different gene expression was seen at 56 DIV, then the 56 DIV gene data was used. Enrichment analyses of gene ontology (GO) terms and KEGG pathways were performed on this integrated dataset by applying Ensemble of Gene Set Enrichment Analyses (EGSEA) (Alhamdoosh et al., 2016) and utilizing all differentially expressed proteins/genes with p < 0.05 and a FC > 1.5 in either direction as input. Significant pathways with adjusted p-value < 0.05 were reported.
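The precedence rule described above can be sketched as a simple cascade; the per-feature inputs below (fold change and adjusted p-value for each level and timepoint) are hypothetical structures used only to make the logic explicit, and the fallback when nothing is significant is an assumption.

```r
# Sketch of the integration precedence: 35 DIV protein, then 56 DIV protein,
# then 35 DIV gene, then 56 DIV gene; inputs are assumed list(fc, padj) pairs.
pick_integrated <- function(prot35, prot56, gene35, gene56, alpha = 0.05) {
  for (x in list(prot35, prot56, gene35, gene56)) {
    if (!is.null(x) && !is.na(x$padj) && x$padj < alpha) return(x)
  }
  prot35  # nothing significant anywhere: default to protein data (assumption)
}

# Example: significant only at the 56 DIV protein level, so that entry is used
pick_integrated(list(fc = 0.90, padj = 0.40),
                list(fc = 0.68, padj = 0.008),
                list(fc = 1.10, padj = 0.30),
                list(fc = 1.20, padj = 0.20))
```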
Live imaging
iPSC-derived dopaminergic neurons were grown until 48 DIV and were then imaged with FM1-43 (Thermo Fisher Scientific). Cells were washed in HBSS−/− supplemented with 5 mM glucose and 10 mM HEPES. 2 μM FM1-43 was then applied for 1 min before uptake was induced with 75 mM NaCl and 10 mM KCl, and left for 1 min before being washed in HBSS−/− solution (Thermo Fisher Scientific). Images were acquired at 37°C in 5% CO₂ on the Opera Phenix (Perkin Elmer) at 63× magnification. Images were quantified in ImageJ by measuring the fluorescence intensity of puncta at each timepoint.
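A possible way to summarise such per-puncta traces after export from ImageJ is sketched below; the long-format column names are assumptions about the export, not the study's actual table.

```r
# Sketch: mean and SEM of FM1-43 puncta intensity per genotype and timepoint,
# assuming a long-format data frame with columns genotype, time_s, intensity.
library(dplyr)

summarise_fm143 <- function(df) {
  df %>%
    group_by(genotype, time_s) %>%
    summarise(mean_intensity = mean(intensity),
              sem            = sd(intensity) / sqrt(n()),
              .groups        = "drop")
}
```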
Neurite regrowth assay
iPSC-dopaminergic neuron cultures were grown until 33 DIV in 96-well plates (Greiner). A scratch was applied through the centre of each well, and the medium was replaced. Brightfield images were captured every 24 h for 7 days using the Opera Phenix (Perkin Elmer). Cell coverage of the well over time was measured using the Harmony software (Perkin Elmer).
Western blotting
Samples were homogenised in RIPA buffer containing complete protease inhibitor cocktail (Roche) and phosphatase inhibitors (Sigma Aldrich), and a BCA assay was used to determine protein concentration. Samples were further diluted, Laemmli buffer was added, and samples were boiled for 5 min. Samples were loaded and run on 4-15% Criterion-TGX gradient gels (BioRad) and transferred to PVDF membranes. Membranes were blocked in 4% milk for one hour at room temperature followed by primary antibody incubation at +4°C overnight. Membranes were then developed using Immobilon Western chemiluminescent HRP substrate (Millipore) and visualised on a ChemiDoc (BioRad).
Immunofluorescence
Rats were terminally anesthetised and transcardially perfused, and iPSC-derived neurons were fixed in 4% paraformaldehyde solution (Sigma). Brains were dehydrated through an ethanol gradient prior to being paraffin-embedded, sectioned to 8 μm thick sections and dewaxed, preceding citrate antigen retrieval and blocking. Following this, brain sections were washed and then incubated with the appropriate primary and secondary antibodies (see table above). Nuclei were stained with DAPI and sections were mounted with FluorSave (Millipore). All analysis was done blind. Clathrin and endophilin puncta analysis was carried out in ImageJ using a macro that first converted images to 8-bit prior to thresholding; particle analysis was set at 2-50 pixels.
Electron microscopy
Electron microscopy (EM) was performed as previously described (Janezic et al., 2013). Briefly, rats were terminally anesthetized and transcardially perfused using 4% PFA and 0.1% glutaraldehyde. Coronal free-floating sections were cut to 60 μm using a vibratome and then incubated with a tyrosine hydroxylase primary antibody (Chemicon). Dopaminergic terminals were revealed by silver-intensified immunogold-conjugated secondary antibodies (Nanoprobes), and samples were dehydrated and embedded in Durcupan ACM resin (Fluka). Serial sections of dorsal striatum (~50 nm) were cut and collected onto copper grids. Prior to examination under the electron microscope, grids were lead-stained and samples were blinded. Terminals containing five or more immunogold particles were identified and imaged at 12,000×. A total of 50 terminals across two grids were imaged per rat, achieving 150 terminals per genotype. EM image analysis was carried out using the PointDensity and PointDensitySyn plugins in ImageJ (Anwar et al., 2011;Larsson and Broman, 2005). TH-positive profiles were identified as containing five or more immunogold particles and the perimeter was delineated. The centre of each synaptic vesicle was labelled and counted, provided that 50% or more of the membrane was visible.
Animal procedures
All animal procedures were carried out under the United Kingdom Animals (Scientific Procedures) Act (1986). Previously described LRRK2-expressing BAC transgenic rats were housed with littermate controls (Sloan et al., 2016). Rats were housed in a 12-h light-dark cycle with ad-libitum access to food and water. Both sexes were used throughout this study.
Human tissue
Paraffin-embedded post-mortem tissue from LRRK2-G2019S Parkinson's cases and age-matched controls was supplied by the Queen Square Brain Bank as 5 μm sections.
Statistics
GraphPad 7 software was used for statistical analysis. All data were analysed for statistical significance using an unpaired t-test, one-way ANOVA or two-way ANOVA. All data are presented as means ± SEM.
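For illustration, the named tests map onto standard R calls as sketched below; the data frame and its columns are hypothetical.

```r
# Sketch of the statistical comparisons named above, on an assumed long-format
# data frame 'df' with columns value, genotype and timepoint.
t.test(value ~ genotype,
       data = subset(df, genotype %in% c("control", "G2019S")))  # unpaired t-test

fit1 <- aov(value ~ genotype, data = df)    # one-way ANOVA
TukeyHSD(fit1)                              # Tukey's post-hoc comparisons

fit2 <- aov(value ~ genotype * timepoint, data = df)  # two-way ANOVA
summary(fit2)
```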
Integrated transcriptomic and proteomic analysis of LRRK2-G2019S iPSC-derived dopaminergic cultures reveals dysregulation of endocytosis and axon guidance
To understand the effects of the LRRK2-G2019S mutation on the transcriptome and proteome of iPSC-derived dopaminergic cultures, control and LRRK2-G2019S iPSC-derived dopaminergic neurons were analysed using RNA-seq and mass spectrometry. To investigate any effect of neuronal maturity, cells were analysed at both 35 and 56 DIV.
Transcriptomic analysis revealed that a total of 16,893 genes were detected in samples harvested at 35 DIV and 18,146 genes were detected in samples harvested at 56 DIV. Principal component analysis (PCA) was conducted for these data and demonstrated clear separation of the two genotypes at each timepoint (Fig. S3A, B). Further investigation revealed that at 35 and 56 DIV, 2238 and 2572 genes, respectively, were differentially expressed between genotypes (false-discovery rate (FDR) adjusted p < 0.05; fold change (FC) > 1.5).
Proteomic analysis was conducted using LC-MS/MS. To enable multiplex analysis, samples were labelled using TMT-10plex tags to create TMT sets for each timepoint. A total of 10,502 proteins were detected in samples harvested at 35 DIV, and 10,501 proteins were detected in samples harvested at 56 DIV. Similar to the transcriptomic analysis, PCA of these data demonstrated a clear separation between genotypes at each timepoint (Fig. S3C, D). Comparison of data from the two genotypes revealed that at 35 and 56 DIV, 2231 and 1439 proteins, respectively, were differentially expressed (FDR adjusted p < 0.05; FC > 1.5).
Given that data from lines expressing the LRRK2-G2019S mutation clustered separately from the controls at the protein and gene levels at both timepoints, the data were combined into a single dataset (the integrated omics dataset) for further analysis. PCA of the integrated dataset demonstrated clear separation of the LRRK2-G2019S and control groups (Fig. 1E). Interestingly, we noted that, although both control and LRRK2-G2019S cells cluster separately by PCA analysis, the controls cluster more tightly in comparison to the LRRK2-G2019S cells, possibly suggesting some heterogeneity in the cellular effects of the LRRK2-G2019S mutation. Ensemble of Gene Set Enrichment Analyses (EGSEA) (Alhamdoosh et al., 2016) of the integrated dataset revealed endocytosis and axon guidance as the two most significantly perturbed pathways, both of which were predicted to be inhibited in the presence of the G2019S mutation (Fig. 1F).
The LRRK2-G2019S mutation has previously been demonstrated to disrupt axon guidance in iPSC-derived dopaminergic neurons (Borgs et al., 2016;Reinhardt et al., 2013;Sánchez-Danés et al., 2012;Su and Qi, 2013). We therefore conducted a neurite regrowth assay, in which a scratch was applied to the cultures and neurites would grow to fill the scratch area, to confirm this effect in our cultures, as well as in cultures from patients carrying the less common LRRK2-R1441C mutation (Fig. 1G, H). Owing to the wealth of literature surrounding the effect of LRRK2 mutations on neurite structure, we then focussed on understanding the impact of LRRK2 mutations on endocytosis.
LRRK2 mutations regulate expression of endocytic machinery in iPSC-derived dopaminergic cultures
Our integrated omics approach provided the power to identify a much more extensive array of altered endocytic gene and protein expression levels than previous studies have reported (Fig. S4). The magnitude and significance of the changes in key endocytic genes/proteins in the integrated omics dataset are highlighted in Fig. 1I (enlarged version Fig. S5). Of particular note, endophilin-III, which is essential for sensing the curvature in the region of membrane that is to be endocytosed, and dynamin-1, which acts to aid budding and scission of vesicles formed during CME, were significantly downregulated in the presence of the LRRK2-G2019S mutation (FC 0.54, p = 0.012; FC 0.68, p = 0.008; respectively). Additionally, clathrin light chain b, but not clathrin light chain a or heavy chains 1 or 2, was also downregulated in the LRRK2-G2019S lines (FC 0.67, p = 0.013).
We also detected significant changes in the expression levels of 25 RAB proteins, accounting for over a third of the 70 members of the RAB family (Fig. S4). These included RABs previously linked to LRRK2 biology: RAB5B (FC 0.78, p = 0.016), which localises to the early endosome, was significantly downregulated, whereas RAB7, which localises to the late endosome and is important for its fusion with the lysosome, showed only a non-significant trend towards downregulation (FC 0.86, p = 0.06).
Western blot analysis confirmed that the changes detected in the integrated omics dataset in LRRK2-G2019S neurons were robust (Fig. 2). In agreement with the integrated omics data, levels of the proteins endophilin I-III were found to be significantly decreased in LRRK2-G2019S iPSC-derived dopaminergic cultures compared with controls at both DIV 35 and 56 (Fig. 2A-C). Dynamin-1 demonstrated a trend towards downregulation at DIV35 and reached significance at 56 DIV in LRRK2-G2019S cultures (Fig. 2A, D, E, Fig. S6). RAB5B and RAB7 were also downregulated (Fig. 2A, F-I, Fig. S6), with RAB7 reaching significance at both DIV35 and 56 in LRRK2-G2019S cultures compared to controls. Finally, we investigated whether the same effects were seen in iPSC-derived dopaminergic cultures carrying the LRRK2-R1441C mutation, to probe the effect of GTPase domain mutations on the endocytic pathway (Fig. S6). At this point in the study only two LRRK2-R1441C iPSC lines were available, precluding statistical analysis, and the data are shown for indicative purposes only. However, in all cases the LRRK2-R1441C iPSC-derived dopaminergic cultures demonstrated a similar pattern of change for all four proteins, consistent with the findings obtained for the LRRK2-G2019S mutation in the dual omics analysis and by western blotting.
Clathrin-mediated endocytosis is impaired in iPSC-derived dopaminergic cultures carrying the LRRK2-G2019S or LRRK2-R1441C mutation
Given that three of the most critical proteins required for CME (clathrin, endophilin and dynamin) were downregulated in our LRRK2-PD iPSC-derived dopaminergic neurons, we predicted CME would be impaired in these cells. Following neuronal synaptic vesicle release, CME is initiated to recover and recycle synaptic vesicle components from the plasma membrane. Uptake of the lipophilic dye FM1-43 can be used to measure this process, as described in Materials and Methods. iPSC-derived dopaminergic cultures were exposed to FM1-43 for a minute before the addition of potassium chloride to induce synaptic vesicle release. iPSC-derived dopaminergic cultures from LRRK2-G2019S and LRRK2-R1441C patients had significantly reduced uptake of the FM1-43 dye compared with control neurons, demonstrating a reduction in CME (Fig. 3).
Key endocytic protein levels are perturbed in aged rats carrying LRRK2 mutations
To further understand the relevance of our findings in dopaminergic cultures, we employed an in vivo model of PD. Striatal tissue from 22-month-old BAC transgenic rats expressing the human LRRK2 wild-type (hWT), LRRK2-G2019S or LRRK2-R1441C transgene (Sloan et al., 2016) was studied for changes in endocytic protein levels.
Terminals of dopaminergic neurons originating in the SNpc project to the dorsal striatum and, due to their high synaptic demand and arborisation, are highly dependent on effective endocytic recycling (Bolam and Pissadaki, 2012;López-Murcia et al., 2014). In agreement with our findings in iPSC-derived dopaminergic neurons, western blot analysis revealed significantly reduced levels of clathrin heavy chain and endophilin I-III in rats expressing LRRK2 mutations compared with non-transgenic (nTG) controls; however, levels were not significantly different from rats expressing hWT-LRRK2 (Fig. 4A-C). Conversely, RAB5B, RAB7 and RAB10 were upregulated in LRRK2-G2019S and LRRK2-R1441C rats compared with those expressing hWT-LRRK2 (Fig. 4A, D-F). RAB3A and RAB11, which are involved in synaptic vesicle exocytosis and the recycling endosome, respectively, and which were downregulated by the LRRK2-G2019S mutation in the integrated omics analysis, were unaltered (Fig. S7). Dynamin-1 and Caveolin-1 were also unaltered in 22-month-old LRRK2 BAC rat striatal tissue (Fig. S7). This endocytic phenotype was only present in aged rats; no changes in levels of these endocytic proteins were seen in 12-month-old rats (Fig. S8).

Fig. 2 (caption, continued). Graphs show mean ± SEM from four controls and five LRRK2-G2019S lines, from three independent differentiations. Significance was assessed using a t-test between controls and G2019S lines; *p < 0.05, **p < 0.01.

Fig. 3. Endocytic function in LRRK2 iPSC-derived dopaminergic cultures is reduced. (A) Representative images of FM1-43 uptake across genotypes at DIV47, with quantification shown in (B). Images were taken over a period of 10 min and puncta were analysed from two separate wells per line, with 10 puncta analysed over time per well. Graphs show mean fluorescence intensity ± SEM; *p ≤ 0.05 effect of genotype, two-way ANOVA; n = 3-4 iPSC lines per genotype.
To further investigate changes in endophilin and clathrin protein levels, dorsal striatal sections of aged rats were immunostained for endophilin I-III and clathrin heavy chain, and the number of puncta was quantified for each. The total number of puncta for each protein was significantly reduced in hWT-LRRK2-expressing rats compared with nTG rats, with a clear reduction also seen in LRRK2-G2019S and LRRK2-R1441C rats; the number of endophilin puncta was also reduced with the LRRK2-G2019S and LRRK2-R1441C mutations. In each case, the average size of these puncta was unchanged (Fig. 5). Interestingly, these data demonstrate that overexpression of hWT-LRRK2 has an impact on the levels of these proteins, suggesting a fundamental role of LRRK2 in endocytosis.
Mutations in LRRK2 lead to changes in synaptic vesicle dispersal
Previous work with these LRRK2 BAC transgenic rat lines revealed dopaminergic signalling deficits, including an age-related reduction in dorsal striatal dopamine release despite no alterations in total dopamine levels, and a motor impairment which is corrected with L-DOPA treatment (Sloan et al., 2016). Due to the apparent deficits we observe here in the endocytic pathway, which is crucial for recycling of synaptic vesicles (SVs), and the previously reported dopaminergic signalling phenotypes, we analysed dopaminergic profiles of the dorsal striatum using immunogold electron microscopy in 22-month-old LRRK2 BAC transgenic rats (Fig. 6A). There were no major morphological changes as measured by profile area, perimeter or synapse length (Fig. S9). However, LRRK2-G2019S rats demonstrated both a significant reduction in the number of synaptic vesicles and a significant increase in synaptic vesicle diameter compared to controls (Fig. 6D-E). In both LRRK2-G2019S and LRRK2-R1441C rats there were significantly fewer synaptic vesicles close together and a greater number spaced further apart compared to controls, indicating an altered distribution of synaptic vesicles (Fig. 6B-C).
In an attempt to understand this change in synaptic vesicle dispersion, we next investigated the levels of synapsin I, a protein key to tethering synaptic vesicles to each other and to actin filaments (Cesca et al., 2010). Although the total levels of synapsin I were unaltered across genotypes, levels of phospho-S603 synapsin I, which determines the protein's binding status to synaptic vesicles, were severely reduced in aged, but not young, rats expressing either the LRRK2-R1441C or LRRK2-G2019S mutations (Fig. 6D-F).
Clathrin and endophilin levels are increased in LRRK2-G2019S PD patient post-mortem striatum
To determine whether key proteins involved in the CME pathway were also dysregulated in LRRK2 patients, we stained human post-mortem striatal tissue from PD patients carrying the LRRK2-G2019S mutation and age-matched controls. There was an increase in the number of clathrin puncta and a trend towards an increase in the number of endophilin puncta in the putamen, but not the globus pallidus, of patient samples compared with controls (Figs. 7, S10). In PD patients, the putamen is heavily affected by disease pathology whereas the globus pallidus generally escapes degeneration (Hardman and Halliday, 1999a;Hardman and Halliday, 1999b;Jellinger and Attems, 2006). In both regions, as seen for the aged rats, there was no change in puncta size (Fig. S10). These findings suggest that the endocytic pathway is dysregulated in late-stage disease.
Discussion
Our use of a novel integrated -omics analysis provided the power to demonstrate extensive dysregulation of the endocytic pathway by the LRRK2-G2019S mutation. This dysregulation is sufficient to lead to functional impairment of CME in LRRK2-G2019S and also in LRRK2-R1441C iPSC-derived dopaminergic cultures, as demonstrated by reduced uptake of FM1-43. Furthermore, similar perturbations of this pathway were demonstrated in aged LRRK2 BAC transgenic rats and, finally, in LRRK2 PD post-mortem tissue. The importance of the endocytic pathway in PD pathology has been highlighted by previous work with LRRK2, but also by other disease-causing mutations in genes such as VPS35, DNAJC6, SYNJ1, GAK and Rab7L1. The unbiased integrated omics approach revealed axonal guidance and endocytosis as the two most perturbed pathways. We confirmed the presence of neurite regrowth phenotypes in our mutant LRRK2 iPSC-derived dopaminergic neurons (Fig. 1), and then focussed on understanding the effect of LRRK2 mutations on endocytosis. It should be noted that the endocytic pathway is crucial to the process of axonal outgrowth, and so it is likely that impairment in endocytosis plays a role in the development of this phenotype (Tojima and Kamiguchi, 2015).
As neuronal activity results in the insertion of SVs into the membrane, these cells are highly reliant on efficient CME to retrieve SV proteins and maintain the plasma membrane. Therefore, changes in levels of key proteins related to this process are likely to have a detrimental impact on the cell. In both our iPSC-derived dopaminergic cultures and aged LRRK2 BAC transgenic rats, we demonstrated that the presence of LRRK2 mutations significantly reduces levels of both clathrin and endophilin, two critical CME proteins. These reductions would be expected to impair normal CME, and this was demonstrated through the use of FM1-43 to be the case in iPSC-derived dopaminergic cultures from LRRK2-R1441C and LRRK2-G2019S PD patients, compared with cells from healthy controls. Reductions in clathrin levels by as little as 20% are also able to reduce synaptic transmission through reduction in the size of the readily releasable SV pool (RRP) and through reduction in quantal size (López-Murcia et al., 2014;Moskowitz et al., 2005). We observed a reduction in the total number of SVs in our LRRK2-G2019S rats compared with controls; however, it should be noted that true vesicle pools are hard to define in dopaminergic neurons and that previous work demonstrating a reduction in the size of the RRP has concentrated on other cell types in a neuronal Lrrk2 knock-down model (Piccoli et al., 2011). We also demonstrated a change in the distribution of SVs in dopaminergic neurons in LRRK2-R1441C and LRRK2-G2019S rats compared to control rats, and evidence that tethering of vesicles is impaired in the presence of the LRRK2-G2019S and LRRK2-R1441C mutations. Furthermore, a previous study using these rat models demonstrated a reduction in dopamine release from the dorsal striatum using fast-scan cyclic voltammetry (FCV), despite no alterations in total dopamine concentrations being measured (Sloan et al., 2016). This may suggest a reduction in quantal size due to reduced levels of CME.
As previously discussed, endophilin levels were also significantly reduced in iPSC-derived dopaminergic neurons and in aged rats carrying LRRK2 mutations, compared to the respective controls. These results corroborate the recent findings of Nguyen & Krainc, who saw similar reductions in endophilin levels in iPSCs from patients carrying the LRRK2-R1441G mutation (Nguyen and Krainc, 2018). Previously, LRRK2 has been shown to phosphorylate endophilin, altering its membrane association, with both hyper- and hypophosphorylation leading to impairments in endocytosis (Matta et al., 2012). In our LRRK2 mutant iPSC-derived dopaminergic neurons, we also saw a reduction in dynamin-1 levels, which suggests that scission of newly formed clathrin-coated pits may be inhibited in these cells. Our integrated -omics analysis also revealed significant changes in numerous other endocytic proteins, including the clathrin adapter protein AP-2, early endosome markers and the caveolin proteins. Interestingly, all members of the caveolin family were upregulated in this analysis; these are involved in clathrin-independent endocytosis and may compensate for the apparent deficit in CME demonstrated here (Parton and del Pozo, 2013). It has been previously demonstrated that LRRK2 is able to localise to caveolae, where caveolin proteins are localized, suggesting that LRRK2 may also have a function in this part of the endocytic pathway (Alegre-Abarrategui et al., 2009). Other highly upregulated proteins in our analysis included CHMP4C and PSD4 (also known as EFA6), which are required for the ESCRT-III complex and for clathrin-independent membrane recycling and remodelling, respectively, again suggesting that in the absence of efficient clathrin-mediated endocytosis other mechanisms of membrane and receptor recycling are upregulated. CHMP4C is a crucial component of the ESCRT-III complex, which is required for efficient multivesicular body sorting and formation (Schmidt and Teis, 2012). The ESCRT-III complex has previously been implicated in PD for its role in the transport of α-synuclein for degradation, and it has been shown that disruption to the complex leads to increased α-synuclein exocytosis (Spencer et al., 2016). Previous work has demonstrated that PSD4 acts as an exchange factor for Arf6, which is an important mediator of endocytic processes (D'Souza-Schorey and Chavrier, 2006). Interestingly, PSD4 is known to be able to recruit endophilin to flat areas of the plasma membrane and is considered to regulate the recycling of certain membrane receptors (Boulakirba et al., 2014;Casanova, 2007).

Fig. 5. Transgenic LRRK2 rats have significantly reduced numbers of endophilin and clathrin puncta. (A-P) Representative images of 22-month-old rat striatal sections across genotypes of clathrin and endophilin I-III staining, with the addition of TH (green) in the merged images. Insets show zoomed-in regions showing puncta in more detail. White arrowheads identify puncta and puncta co-localising with TH in merged images; scale bar represents 100 μm. Quantification of clathrin puncta number (Q) and size (R) as well as endophilin puncta number (S) and size (T). Graphs show mean ± SEM, *p ≤ 0.05, **p ≤ 0.01, ANOVA, Tukey's post-hoc; n = 3 animals per genotype.
Despite decreases in endophilin and clathrin protein levels in the presence of LRRK2 mutations in our in vitro and in vivo models, increases in both of these proteins were seen in post-mortem tissue from PD patients carrying the LRRK2-G2019S mutation compared with controls. Post-mortem tissue is representative of the very late stages of disease, when most dopaminergic input to the striatum has been lost. Neither our in vitro nor our in vivo models recapitulate this loss. It is also possible that other neuronal cell types are compensating for the loss of dopaminergic innervation and so undergoing more synaptic activity. Nevertheless, our findings demonstrate a clear perturbation in the endocytic machinery in the brains of LRRK2-G2019S PD patients.
Strikingly, levels of 25 of the approximately 70-member family of RAB proteins were found to be significantly altered in the presence of the LRRK2-G2019S mutation in our integrated omics analysis. RABs are small proteins critical for intracellular vesicle trafficking and function within the endocytic pathway, and neuronal-specific RABs have been shown to have a predominant function at the synapse (Chan et al., 2011;Ridvan Kiral et al., 2018). We confirmed dysregulation of RAB5B, RAB7A and RAB10, which are localized to the early and maturing endosome, across our iPSC-derived dopaminergic cultures and aged rat models. Levels of these proteins were reduced in mutant iPSC-derived dopaminergic neurons and increased in striatal tissue of mutant LRRK2 BAC transgenic rats, versus the relevant controls. This difference may be explained by the difference in disease stage represented by these two models. The iPSC-derived dopaminergic neurons likely represent a very early stage of PD pathogenesis, whereas the aged BAC transgenic rats represent an intermediate phase. In this context, our findings suggest that LRRK2 has an important role in the regulation of RAB protein levels and that LRRK2 mutations may have a biphasic effect over the time-course of PD pathogenesis. Providing further support to this hypothesis, levels of total RAB5 and RAB7 have also been shown to be increased in post-mortem brains of sporadic Alzheimer's disease patients compared with controls (Cataldo et al., 2000;Ginsberg et al., 2011;Ginsberg et al., 2010). This may be due to an overactivation of the endocytic system or a blockage further up the endo-lysosomal pathway. A subset of Rab proteins has recently been identified as LRRK2 kinase substrates (Jeong et al., 2018;Liu et al., 2018;Steger et al., 2016), and previous work has identified interactions between LRRK2 and certain Rab proteins (Dodson et al., 2012;MacLeod et al., 2013;Shin et al., 2008). Interestingly, a recent study from Di Maio et al. has shown that WT LRRK2 kinase activity is upregulated in post-mortem nigral tissue from sporadic PD patients and that this is associated with increased RAB10 phosphorylation (Di Maio et al., 2018;Fan et al., 2018). Our results, taken together with recent literature in the field, highlight the importance of LRRK2 in the regulation of RAB proteins.
Given that alterations in the endocytic system can impact on the organisation and number of SVs, we investigated dopaminergic terminals of the dorsal striatum in 22-month-old transgenic rats. In the LRRK2-G2019S rats we observed significantly fewer synaptic vesicles, which were significantly larger. This suggests a compensation by those synaptic vesicles remaining, as previous work has demonstrated no loss of striatal dopamine in these rats (Sloan et al., 2016). We identified changes in the spatial dispersal of SVs in the presence of the LRRK2-R1441C or LRRK2-G2019S mutation compared with controls. This may be explained by the observed clear lack of synapsin phosphorylation at S603 in our LRRK2-G2019S and LRRK2-R1441C rats, which has also been previously demonstrated by others in Lrrk2-G2019S knock-in mice (Beccano-Kelly et al., 2015). Synapsin orchestrates the tethering of SVs to each other and to actin filaments and, when phosphorylated at S603, reduces its binding, thus priming vesicles for release (Cesca et al., 2010;Fornasiero et al., 2012). In the almost complete absence of synapsin phosphorylation in both aged LRRK2-R1441C and LRRK2-G2019S rats, SVs are likely less primed for release, which may explain the previously published reduced dopamine release measured by FCV in these models (Sloan et al., 2016).
Conclusions
We have used an integrated transcriptomics and proteomics approach to reveal the extensive dysregulation of the endocytic pathway caused by mutations in LRRK2. Together, our findings across PD LRRK2 models and post-mortem brain tissue from PD patients carrying LRRK2 mutations demonstrate that LRRK2 mutations lead to clear and substantial changes in the endocytic pathway. Our findings also suggest that wild-type LRRK2 has an important role in regulating normal endocytic function. Further studies will be required to interrogate the more intricate role of LRRK2 in this pathway.
Ethics approval
All animal procedures were carried out under the United Kingdom Animals (Scientific Procedures) Act (1986).
Human tissue was used in accordance with the local research ethics committee.
Consent for publication
Not applicable.
Availability of data and material
The datasets used during the current study are available from the corresponding author on reasonable request.
Competing interests
Authors declare no conflict of interest.
Funding
The work was supported by the Monument Trust Discovery Award from Parkinson's UK. HB was supported by an MRC Industrial CASE studentship. Samples and associated clinical data were supplied by the Oxford Parkinson's Disease Centre study, funded by the Monument Trust Discovery Award from Parkinson's UK, a charity registered in England and Wales (2581970). All authors read and approved the final manuscript. | 2019-04-06T13:10:32.445Z | 2019-04-04T00:00:00.000 | {
"year": 2019,
"sha1": "6e064cd4f54e8285f20647cbb9bb88d37de9393c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.nbd.2019.04.005",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a1edd2d8b6bb98bfd886a3fbcac9cf8513438005",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
212696697 | pes2o/s2orc | v3-fos-license | UDK 316.343 THE DEFINITION OF "TOP-MANAGER" AND HIS MAIN FUNCTIONS
This work defines approaches to the essence of the category «top-manager», on the basis of which the authors' own definition is proposed. The content of the main functions of the top-manager and their relationship is disclosed.
Today, the period of rapid market growth of organizations in various industries has slowed. A time has come when the economic success of any enterprise depends heavily on sound management. Now, first and foremost, both the professional skills and the personal qualities of people in the profession known as top-manager come to the fore. Usually, these are the first persons of the enterprise, that is, directors, presidents, or chairmen of the board. Sometimes they are themselves owners or co-owners of the business. Other executives may also belong to this profession, such as the commercial director or CFO, the production or development director, the marketing director, the security director, or the information officer. The top manager must ensure the sustainability of the enterprise and the development prospects of the business. In this regard, there is a need to clarify the concept of «top-manager», as well as to define its main functions. Since the top manager holds a fairly high position, he needs to fully possess all the necessary qualities, not only professional but also personal.
It is known that the top-manager is the central figure of the company. It is he who is called upon to organize and successfully promote its business. Top-managers can be presidents or general directors, as well as managers of a management company in large corporations or holdings.
In the overall structure of the enterprise motivation system, the top manager is one of the most important elements of a company's human capital; paying for this capital and creating favorable conditions for its activity is a special type of investment. Increasing the value of most enterprise resources represented by tangible objects is limited by time frames, while the value of human capital grows ever more important, and the system of development and motivation of highly qualified personnel becomes the most important tool for improving the efficiency of the enterprise itself.
In this regard, there is a need to clarify the concept of "top manager", as well as to define its main functions and their relationship.
Introduction.
Creating and maintaining the competitive advantage of the enterprise is possible only through the proper formation and use of the organization's personnel. Modern socio-economic development shows that the success of an enterprise, of any form of ownership, depends largely on competent management. Therefore, one of the topical trends of today is the formation of the high-quality potential of a top manager, his knowledge, skills and abilities, which is a major resource for the enterprise in an unstable economy.
In modern scientific and popular literature, in legislation, and in the practice of companies, different terms are used to refer to the senior executives of a large enterprise. Thus, there is a need for a clearer definition of this concept and a systematization of its existing functions.
Analysis of basic research and publications. Today, research in the field of personnel management is being actively developed. Well-known scientists of the past and present have devoted their attention to the problematic aspects of interpreting the notion of "top-manager" and systematizing its functions, among them DP Bogin, VM Grineva, MS Doronin, AL Zhukov, AM Kolot, OM Krasnonosov, VD Lagutin, ND Lukyanchenko, LA Lutai, GV Nazarov, VS Ponomarenko, MV Semikina, OM Yastremskaya and others.
At the same time, many of these scientists pay more attention to defining the concept of the manager rather than the senior manager.
Issues of research on the notion of "top manager" and its main functions were addressed by such prominent foreign scientists as G. Emerson, E. Mayo, A. It should be noted that the study of the concept of "top manager" has received less attention from both domestic and foreign authors. In such circumstances, the present study becomes particularly relevant.
Goal. To define approaches to the essence of the category "top-manager" and, on this basis, to propose our own definition; to reveal the content of the core functions of the top-manager and their relationship.
Materials and research results. In a modern organization, top-managers hold key positions. Director, team leader, head, chief: all these words denote positions, and the people who hold these posts can be united by the general concept of "top-manager", since the following common features of their activities can be identified: the top-manager supervises the work of one or more employees; the top-manager administers a part or all of the organization in which he/she works; the top-manager receives certain powers and makes decisions within these powers that will have consequences for other employees. So, let us consider the analysis of "manager" as the basic concept of the research in more detail.
In the Unabridged Explanatory Dictionary of the Ukrainian language, a top-manager means a person who is responsible for the coordination of and control over the organization of labour, and who manages an industrial, commercial, financial, or other enterprise [1].
According to V. Razanov, the top-manager is "a subject who performs managerial functions" [2].
A. Vikhanskyi believes that "the top-manager is defined as a member of the organization engaged in management activities and solves management tasks" [3].
V. Bereha notes, "Since the top-manager is not a direct producer or creator of material goods, but only optimizes the process of their creation, the product of the top-manager's labour can be qualified as a universal service for creating conditions for production and creativity" [4]. V.A. Kravchenko gives the following definition of the top-manager concept: this is a specialist who is professionally engaged in management activities in a particular area of the enterprise, holds a permanent management position and is vested with certain powers [5].
Having considered various approaches to the interpretation of the basic concept of the study, we can conclude that the manager is a specialist who occupies a permanent leadership position and is empowered to make decisions on certain activities of the organization, and whose main function is its effective management, carried out in order to obtain the desired results.
There is no generally established classification of the top-manager's functions, but most experts agree that there is a minimum set of functions inherent in all levels of the management pyramid. We present this list in the following sequence (Table).
Table. The main functions of the top-manager

Planning: Choosing the future direction of the organization as a whole, and of its individual subdivisions in particular, as well as making decisions about how to achieve the desired results based on the collection and analysis of the necessary information. By planning, the goals of the organization are established and the ways and timelines for achieving these goals are determined.

Organization: Making decisions about the necessary actions that will lead to the achievement of goals; the allocation of human resources to working groups and the appointment of a top-manager for each of the groups; and, finally, providing the organization with all kinds of resources that are necessary for its activities.

Leadership: Direct and practical management of subordinates in the process of performing their duties, which includes informing subordinates about activities, orders and instructions, and motivating subordinates to the effective and efficient performance of their duties.

Staff relations: The process of selection, preparation, development, use, and remuneration of people for the work done for the organization.

Control: The process of comparing actual performance with planned targets, as well as the development and application (if necessary) of corrective measures in order to achieve the established goals.

The planning function is the main management function, on which all other functions depend to a certain extent. The top-manager, engaged in managerial activities, outlines the goal of the organization and seeks to determine the best ways to achieve it. He analyzes budgets, schedules, information on the state of the industry and the economy as a whole, the resources at the disposal of the enterprise, and the resources it is able to acquire. An important aspect of planning is careful evaluation of the output. Since the enterprise develops largely under the influence of conditions prevailing in the past, changes predetermine the need for new methods of enterprise activity. This function requires the top-manager to possess analytical skills.
The function of the organization is to ensure the activities of the enterprise by coordinating the actions of the labour collective, taking into account its existing formal and informal components, and forming the corporate spirit of the enterprise. At the same time, management places the person in the spotlight. Carrying out organizational activities, the top-manager operates in the complex structure of the enterprise, the main components of which are: the formal organization; the informal organization; the employee; the labour collective; the corporation.
Many management specialists often consider the function of working with staff as an integral part of the organization function; however, we draw attention to its importance for any organization, which justifies treating it as a separate object of consideration. This is also supported by the fact that a dedicated concept of "personnel management", or Human Resource Management, has emerged.
The performance of the control function is necessary to ensure that other managerial functions are also performed effectively and efficiently.
Conclusions. The main managerial functions are, in practice, closely interrelated with each other. Their interaction ensures the successful functioning of the enterprise in market conditions, regardless of external or internal impact factors.
The professional activity of a top-manager consists in performing specific functions of management and organization of the production process while dealing with people. In our opinion, therefore, the solution of professionally significant tasks depends on a certain level of formation of his emotional culture as an important component of personal professional skill. | 2020-02-27T09:08:29.402Z | 2020-02-10T00:00:00.000 | {
"year": 2020,
"sha1": "908bcb56f9ff5c4390a2b1957fa3416ec5aa4640",
"oa_license": null,
"oa_url": "http://visnik.snu.edu.ua/index.php/VisnikSNU/article/download/270/247",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bebe09ec4e55a367b1480dda74d3376fdd22d29f",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54905782 | pes2o/s2orc | v3-fos-license | RAMAN IDENTIFICATION OF PIGMENTS IN THE WORK OF THE CHILEAN CONTEMPORARY VISUAL ARTIST IGNACIO GUMUCIO
The main pigments in the paintings by the Chilean artist Ignacio Gumucio were identified by using micro-Raman spectroscopy. Three works by the artist, two paintings on wooden supports and a wall painting, were analyzed. Pigments were vibrationally identified from cross-section samples of the contemporary paints on wood Río and Saludo and the wall painting Mental. The blue color Raman bands correspond to copper phthalocyanine. In the green areas, the Raman signals were ascribed to the azopigment acetoacetic arylide and copper phthalocyanine; the green color is then the result of a combination of the yellow and blue pigments. Raman bands in the pink areas were assigned to the azopigment β-naphthol. The dark green color in the Saludo paint is due to a chlorinated copper phthalocyanine pigment. Other materials in the artworks were also identified: rutile (TiO₂) in white areas of Saludo and calcite (CaCO₃) in Mental and Río. On this basis, and taking into account the identified state of conservation, a protocol for the preservation of the artworks Río and Saludo can be assessed.
INTRODUCTION
A better understanding of our civilization, and an improvement of restoration and conservation methods, necessarily requires knowledge of the materials involved in artistic and cultural heritage. Nowadays, the study of artworks is performed through several scientific techniques, among them the spectroscopic analyses, which are the most frequently used 1,2. Raman spectrometry is one of the most powerful due to its characteristics. The unique properties of this technique include its non-destructive nature, the absence of sample preparation, reliability, specificity and sensitivity 3-5. Raman spectrometers have become the instrument of choice when analyzing archaeological artifacts and pigments on artworks 6. The technique is also amenable to in situ analysis, thanks to the development of fiber-optics technology 7-9. Most Raman spectrometers are today coupled with a microscope, thus making it possible to observe the sample and scan the spectrum from a small area down to one micrometer 10,11. The use of micro-Raman spectrometers gives information on the microstructure of analyzed samples while also improving the spatial resolution. Nevertheless, a meaningful disadvantage still lies in the formation of undesirable fluorescence, an accompanying phenomenon in measurements of diverse materials which is very difficult to forecast. The main applications of Raman in heritage studies concern the identification of pigments and dyes on various support materials 12-14. Pigments, together with binders and fillers, give important information about artists, artistic schools and technological evolution. The development of contemporary painting involves the use of industrial materials, experimentation with different supports, and the incorporation of new concepts and techniques. An example of this painting is the work of the Chilean visual artist Ignacio Gumucio 15, who produces a body of work that oscillates between instinct and its pictorial projection to the wall, through an exhaustive series of formal experiments 16. This paper reports on the micro-Raman spectroscopic investigation of colors used in three paintings by Ignacio Gumucio. The choice of this artist is based on the following fundamental aspects. He is an artist recognized and respected by the Chilean academic circle of art. Moreover, the painter is inserted into the tradition of the School of Art of the University of Chile, which has maintained a historical-critical link with painting and has continued the work of Juan Francisco Gonzalez, whose modern and experimental character projected to artists like Pablo Burchard, Adolfo Couve and Gonzalo Diaz. Gumucio relates to this pictorial tradition. Finally, the artist represents a formal complexity, since he uses poor-quality materials, which are a problem for future conservation. Thus, the work of Gumucio proposes a critical turning point, linking the contemporary pictorial tradition and experimentation, and challenging conventional strategies in paint restoration. A wall painting, Mental, and two paintings on wooden supports, Río and Saludo, by Gumucio were analyzed. As far as we know, this is the first time in Chile that this noteworthy kind of painting has been investigated by means of micro-Raman spectroscopy. The Mental wall paint (2012) is located at the Yono Gallery, Providencia, Santiago, covered by several layers of painting; the Río (2006) and Saludo (2011) works, in the studio of the artist, were painted on plywood and agglomerate wooden supports, respectively; both supports are easily degraded.
Mental
The temporary wall paint was performed on four walls 15. Details of the wall painting, displaying archetypal figures of the Chilean landscape such as a river, a willow and a waterfall, are shown in Fig. 1a. The piece was synthetically represented in a non-realistic form. A concrete wall was the support for the Mental paint. This material is obtained by mixing cement, sand and water in specified proportions. The cement is a mixture of lime, clays and other calcined and pulverized materials. The artist also used a spackling paste covering and an acrylic resin as the pictorial layer.
Río
In this painting of 40 × 40 cm, it is possible to distinguish landscapes organized from a non-naturalistic viewpoint. The mixed-technique paint was elaborated on plywood. This material consists of a few wooden plies, millimeters thick, bonded to each other, usually with alternating fiber direction, until a desired thickness is reached. The most common disadvantages are due to moisture, which is reflected in the occurrence of cracks along the fiber, as well as in general deformations of the material. A detail of the paint is given in Fig. 2b.
Saludo
The artist uses the same visual operation as in Río, that is, a superposition of some elements from different pre-manufactured supports. The painting (32.3 × 60 cm) was elaborated on agglomerate wood, consisting of wood flakes bonded at low pressure. In general, this material is a support of low stability, easily deteriorated by moisture and displaying feeble mechanical behavior; humidity and interactions with other kinds of wood induce irreversible structural deformation. The paint is shown in Fig. 3.
Sample collection
Samples of dimensions 0.5-1 mm², collected in January 2014, were extracted from the three selected paintings following internationally accepted procedures 17. Samples were selected according to the main colors displayed in the artworks. Figs. 1-3 display the spots where the samples were collected. In the case of the Río and Saludo samples, the paint layer was analyzed separately from the wood and the supporting material. A similar procedure was used to study samples from the wall.
To prepare the cross-sections, samples were embedded in an acrylic resin and polished using micromesh polishing cloths up to 12000. The Raman spectral scanning was performed on the cross-sections directly deposited on a slide.
Raman spectroscopy
The Raman measurements were performed using a Renishaw micro-Raman RM 1000 spectrometer, equipped with 514, 633 and 785 nm laser lines. The apparatus is coupled to a Leica microscope and an electrically cooled CCD camera. The Raman signal was calibrated to the 520 cm⁻¹ line of silicon through a 50× objective. The laser power on the sample was 0.14 mW. Acquisition time was set between 10 and 20 s per accumulation; the average number of accumulations was 10, with a spectral resolution of 4 cm⁻¹. The spectra were recorded between 200 and 1800 cm⁻¹. Spectral recording conditions and the choice of the laser line were selected in order to avoid degradation (photobleaching or photodecomposition) of the sample; the 785 nm line was used. The main identified species are described in Figs. 4-6.
B. Raman spectrum of pigments
The inorganic compound calcite, CaCO₃, was identified in several samples from the pink and sky-blue areas of the wall painting Mental; in fact, bands at 1090, 713 and 278 cm⁻¹ are ascribed to calcite 26 , probably used as a filler material, see Fig. 4d. Rutile, TiO₂, was identified 23 in various white areas of the Saludo painting, displaying bands at 607, 442 and 258 cm⁻¹, see Fig. 6a. The set of rutile vibrations is also visible in the dark-green area, at 609, 442 and 260 cm⁻¹. No organic binders were identified in the artworks. No bands ascribed to the acrylic resin used to prepare the cross-sections were identified in the Raman spectra of the pigments.

The wall painting Mental. The intense blue color in a cross-section segment of the sample is due to copper phthalocyanine (CuPc), see Figs. 4a and 4c. In fact, the spectral profile coincides perfectly with the Raman spectrum published by Scherrer et al. 19 for reference C.I. 74160:4. Calcite was identified in a low-tonality pink area, along with an azo pigment, β-naphthol (1-(4-methyl-2-nitrophenylazo)-2-naphthol), see Figs. 4b, 4d and 4e. This azo pigment displays bands at 1537, 1443, 1390, 1322 and 1278 cm⁻¹, in good agreement with those observed in the published reference Raman spectra [22][23] for β-naphthol, commercially named PR 3 or Hansa Scarlet RNC. The sky-blue area is also dominated by the calcite bands, displaying additional very weak bands at 1535, 748 and 681 cm⁻¹ attributable to CuPc, which is probably responsible for the observed tonality.
The painting Río. Several colored cross-section segments of the sample display different green tonalities, see Fig. 5. The Raman spectrum of the most intense green area suggests the coexistence of at least two different pigments: blue CuPc and the yellow monoazo pigment acetoacetic arylide, see Figs. 5a and 5b. In fact, the intense bands at 1620, 1501, 1312, 1250, 1144 and 960 cm⁻¹, together with the general spectral profile, Fig. 5c, are highly consistent with the Raman data reported 19 for acetoacetic arylide C.I. 11741 and with the Hansa yellow reported by Burgio and Clark 21 , identified as pigment PY65 by Vandenabeele et al. 22 . Other bands in the spectrum, mainly those at 1546, 742 and 687 cm⁻¹, are consistent with the presence of CuPc. The band assignment is displayed in Table 2 and is based on published Raman data [18][19][20][21][22][23][24][25][26] . According to the molecular structures of CuPc and acetoacetic arylide, various functional chemical groups can be distinguished, and on this basis it is possible to differentiate the corresponding Raman signals. This is the case, for instance, for the amide I and amide III vibrational modes (1650-1675 cm⁻¹ and 1230-1280 cm⁻¹, respectively), the CH₃ deformation modes in the 1340-1400 cm⁻¹ range, the NO₂ deformation modes (620-650 cm⁻¹) and the aromatic-chlorine stretching between 300 and 400 cm⁻¹ of acetoacetic arylide. In the case of CuPc, vibrations such as those involving the isoindolic moiety, around 1312 and 776 cm⁻¹, the metal-nitrogen stretching mode at 276 cm⁻¹ and the macrocycle ring deformation at 393 cm⁻¹ can be differentiated. Several other bands arise from vibrations involving similar structural moieties, such as the aromatic rings. Other bands were identified as belonging to the filler CaCO₃. Thus, the green color in the Río sample results from a combination of the yellow and blue pigments. The wavenumbers of the green-area spectra in Fig. 5c are not exactly the same as those of the references 19,21,22 ; this is interpreted as indicating a rather feeble chemical interaction between the two compounds.

The painting Saludo. The white fragment of the cross-section of this painting corresponds mainly to rutile, see Fig. 6a; the polymorph anatase displays bands at 641, 517, 400 and 196 cm⁻¹. The green areas are dominated by a green CuPc, see Fig. 6b. The bands of the green pigment are intense, which is probably the reason for the dark-green color of the fragments. The present spectrum is highly coincident with those reported by Chaplin et al. 27 and Poon et al. 28 for a green chlorinated copper phthalocyanine. Chaplin et al. 27 , who studied a large painted leather screen and two illuminated title pages in 17th-century books of ordinances of the Worshipful Company of Barbers, London, concluded that, in the case of the screen alone, a restoration in the 1980s was carried out with different pigments: haematite, green Cu phthalocyanine, rutile, and a mixture of azurite, malachite and barium sulfate. Neither bands ascribed to chromium green, Cr₂O₃·2H₂O, normally active at 611, 552 and 348 cm⁻¹, nor bands of the green chlorinated copper phthalocyanine C.I. 74250 reported by Scherrer et al. 19 were detected, thus discarding those compounds in the Saludo painting.
CONCLUSIONS
Three works by the artist Ignacio Gumucio, the wall painting Mental and two paintings on wooden supports, Río and Saludo, were analyzed using micro-Raman spectroscopy. The blue, green, dark-green and pink areas in the paintings were identified as due to CuPc, acetoacetic arylide/CuPc, chlorinated CuPc and β-naphthol, respectively. Other materials in the artworks were also identified: the pigment rutile (TiO₂) in white areas and the filler calcite (CaCO₃). On this basis, and taking into account the identification of the main pigments, a protocol for the preservation of the artworks Río and Saludo could be proposed. In the case of the wall painting Mental, a photographic record and/or video describing the creative process of the artist and the time during which the artwork was exhibited are available on request. The solutions and strategies of a conservation protocol are multidisciplinary, allowing entry into the specific debate generated by new issues in the production of contemporary art. In the present case, the Raman analysis allowed the identification and characterization of the materials used by Gumucio, thus contributing to conservation procedures, considering that many artists are not concerned with the composition of the materials they use. The present work remains open to other scientific analyses that complement it and enable further study of other artworks in Chile, allowing complete conservation protocols to be developed.
Figure 1. a) Detail of the Mental wall painting. b) Red circles indicate the places where the samples were extracted.
Figure 2. a) Río painting. b) Red circle indicates the place where the sample was extracted.
Figure 3. Saludo painting. Red circle indicates the place where the sample was extracted.
Figure 4. Mental painting detail. a) Cu(II) phthalocyanine and b) β-naphthol model structures, and Raman spectra of the c) blue and d) pink colored areas. e) Detail of the Raman spectrum of the pink colored area in the 1800-1100 cm⁻¹ spectral region.
Figure 6. Saludo painting cross-section. Raman spectra of the a) white and b) dark-green colored areas.
Table 2. Raman wavenumbers (cm⁻¹) and band assignments of the spectrum in Fig. 5c for the green colored areas. δ, symmetric deformation; ρ, out-of-plane deformation; T, external vibration of the CO₃ group involving translatory oscillations of the group.
"year": 2016,
"sha1": "b01f3f2581e306b3fde0f9b450269a12778cf227",
"oa_license": "CCBYNC",
"oa_url": "https://scielo.conicyt.cl/pdf/jcchems/v61n3/art16.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b01f3f2581e306b3fde0f9b450269a12778cf227",
"s2fieldsofstudy": [
"Chemistry",
"Art"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Comparative Efficacy in Challenge Dose Models of a Toxin Expressing Whole-Cell Vaccine against Eight Serovars of Actinobacillus pleuropneumoniae in Pigs
Simple Summary

Vaccine use is considered an integral control method to prevent respiratory disease in pigs caused by the economically significant Actinobacillus pleuropneumoniae. As 19 distinct subtypes (serovars) of this bacterium exist, and several of these can be present at the same time on the same farm, effective serovar-independent protection is desirable. Vaccines based on the killed bacterial contents of a few of the serovars, including three virulent toxins normally produced during the natural infection of pigs, may potentially provide cross-serovar protection. However, little data is available on multi-serovar vaccine protection. Such a commercially available vaccine, the C-vaccine (Coglapix®, Ceva, France), was tested in a total of 13 similar infection studies mimicking on-farm situations with the most common serovars (1, 2, 4, 5, 6, 7, 9/11, and 13), in trials with an identical design of detailed lung lesion investigation. The reliability of the trial design was tested by reproducibility between different studies of the same serovar. The C-vaccine produced highly significant protection against lung lesions following infection with all serovars, and the trial design was found to be highly reliable. We conclude that the C-vaccine gives high serovar-independent protection against disease and is suitable for this use in the field.

Abstract

Actinobacillus pleuropneumoniae is a major, economically significant bacterial respiratory pig pathogen, and whole-cell vaccines are used to prevent disease. However, there is little data available on multi-serovar whole-cell vaccine protection. Therefore, we determined the protective efficacies of a whole-cell A. pleuropneumoniae serovar 1 and 2 vaccine comprising ApxI-III toxins (C-vaccine, Coglapix®, Ceva, France) against serovars 1, 2, 4, 5, 6, 7, 9/11, and 13. The infection doses used induced disease representative of endemic field conditions, and standard protocols were used for all studies. Protection against the homologous serovars 1 and 2 significantly reduced lung lesion scores (LLS) compared to positive controls: p = 0.00007 and p = 0.00124, respectively. Protection against the heterologous serovars 4, 5, 6, 7, 9/11, and 13 also significantly reduced LLS: range p = 2.9 × 10⁻¹⁰ to p = 0.00953. As adjudged by the estimated random effect, reproducibility between studies was high. A highly significant serovar-independent reduction of pathological lung lesions by the C-vaccine was found for all the serovars tested (1, 2, 4, 5, 6, 7, 9/11, and 13). We conclude that the C-vaccine gives high serovar-independent protection against disease and is suitable for this use in the field.
Introduction
Actinobacillus pleuropneumoniae, the aetiological agent of swine pleuropneumonia, is responsible for high morbidity and potentially high mortality, causing substantial economic losses in the global pork industry. The peracute and acute forms are easily diagnosed due to the evident and distinct clinical signs and the appearance of dead pigs. Also, the chronic form, likely to develop from any form of pleuropneumonia, is easily diagnosed via slaughterhouse investigations [1][2][3][4]. Depending on the quality of the on-farm monitoring of animals, the far less distinct clinical signs and low fatality of the subacute form are easily missed and erroneously considered subclinical, interpreted as "no pleuropneumonic issues". Even truly subclinical pleuropneumonia will involve pathological pneumonic lesions [3,5], and despite the lack of clinical signs, average daily weight gain (ADWG) and feed efficiency can be negatively affected [5]. These parameters are often further reduced by the development of chronic lesions, commonly seen at the abattoir [6]. Focusing only on clinical signs in an A. pleuropneumoniae-endemic farm, where all the different manifestations are in principle present in a herd over time, will not reveal the full pleuropneumonic impact [7,8]. Lung lesion scoring is considered highly relevant for estimating the severity of, and losses from, respiratory disease such as that caused by A. pleuropneumoniae at the farm level [9][10][11][12]. To investigate pleuropneumonia in all its possible manifestations, pathological evaluation of lung lesions appears to be the least biased method. Performing this evaluation shortly after pneumonic infection would seem to give the most accurate assessment of the degree of pleuropneumonic impact on the individual pig, and it is widely accepted as the endpoint measure of A. pleuropneumoniae-induced disease [13][14][15][16][17][18][19][20][21][22][23][24][25]. In challenge studies, a dose-response relation has been shown [13,17,18,26], and any stage of disease can be reproduced, from absolute mortality [14][15][16][17]27,28], even in bacterin-vaccinated pigs [16], to subacute [13,19] and subclinical pleuropneumonia [13,16,19]. However, high variation in disease within challenged groups was evident [13,[16][17][18][19][22][23][24][25][26].
In many cases, the exact A. pleuropneumoniae infection status of the individual pig production unit is unknown to the farmer; however, the bacterium is endemic worldwide, being present in 80-90% of swine farms, with up to seven different serovars having been reported on the same farm [29]. The prevalence of serovars varies between countries, between regions of countries, and by year of investigation [29][30][31][32][33]. So far, 19 A. pleuropneumoniae serovars have been classified worldwide [34]. In reality there are likely 18 serovars, as serovars 9 and 11 can be considered as one, serovar 9/11, since the difference in the complete capsule polysaccharide loci is only one amino acid and they have identical toxin profiles (ApxI, ApxII) [35]. Different serovars are considered to have quite variable inherent virulence, partly due to different Apx-toxin profiles [2,[29][30][31]36,37].
A. pleuropneumoniae has several virulence factors, some of which are well described and several of which are under investigation. The three exotoxins ApxI-III and lipopolysaccharide (LPS) are of major importance both in the development of lung lesions and in protective immunity [2,37,38]; ApxI, II, and III are, together, the antigens considered capable of inducing cross-protection [1][2][3]37,39]. Many other virulence factors have been described [37,39], including membrane proteins, some of which are immunogenic and can therefore add to the protective capacity of a vaccine [40]. ApxIV, which is only produced during pneumonic infection, is considered to have a role in both disease and protective immunity [37,41].
Modified live vaccines are currently under investigation on an experimental basis only; their potential as commercial vaccines needs further investigation [2]. Several commercial vaccines are available which differ in their composition and can be classified broadly into one of three categories: (1) killed A. pleuropneumoniae whole-cell components only (bacterins, including autogenous vaccines); (2) subunit vaccines containing ApxI-III toxins only; and (3) a combination of these [40]. With distinct differences in efficiency, they all reduce clinical signs, but none can fully prevent infection and colonization [42]. Antibodies against ApxI-III are responsible for the serovar-independent protection against lung lesions [3,6,39,40]. Due to limited cross-protection between serovars, bacterin vaccines lack efficacy compared to bacterin vaccines combined with ApxI-III, and pure toxoid vaccines lack general protective capacity due to the absence of LPS and other cell-wall components [3,38,[43][44][45].
A combination of the three exotoxins ApxI-III with LPS, and likely more of the abundant cell-wall-based antigens [3,37,43,45], induces a strong and specific cell-mediated immune response that can confer serovar-independent protection [3]. This is considered a design for an efficacious serovar-independent vaccine, feasible for A. pleuropneumoniae prophylaxis to increase animal well-being, reduce antimicrobial use, and reduce losses due to pleuropneumonia in all its manifestations on any A. pleuropneumoniae-endemic farm at any time [3,6,40]. It is evident that a cross-protective (serovar-independent) A. pleuropneumoniae vaccine with high protective capacity is desirable for global swine production. Nonetheless, whole-cell-based vaccines including Apx toxins are questioned on their ability to confer heterologous A. pleuropneumoniae cross-protection [40,46,47]. Also, partly for logistic reasons (time, expense, number of animals), there is comparatively little data available on the efficacy of such vaccines against different serovars.
The primary aim of this multi-study analysis was to compare the extent of protection against multiple prominent A. pleuropneumoniae serovars provided by a vaccine (the C-vaccine) comprising whole-cell components of A. pleuropneumoniae serovars 1 and 2 together with ApxI, ApxII and ApxIII expressed during the production process. The aim was achieved by comparison of C-vaccine trials carried out over a 9-year period using a standard protocol based on a predetermined individual challenge dose of eight serovars, measuring the reduction in lung lesions using standardised and repeatable models. The data available also allowed estimation of the relative virulence between serovars.
Ethics Declarations
The trial designs were all the same and in close accordance with the European Pharmacopoeia [48]. All trials were performed by Ceva Research and Development (R & D) Department or Ceva Scientific Support and Investigation Unit (SSIU) in Hungary.
All studies followed local law and regulations. Authorization was provided by the Government Office of Baranya County Food Chain-Safety and Animal Health Department, Hungary. Individual study approval IDs are noted in Table 1.
Selection of Trials
To demonstrate the widest possible range of A. pleuropneumoniae serovar challenge studies and to provide the highest possible reliability, the trials selected for this multi-serovar analysis all had the same challenge trial design, in accordance with the European Pharmacopoeia [48], and identical protocols for lung lesion scoring. All trials were performed by Ceva Research and Development (R & D) or Ceva Scientific Support and Investigation Unit (SSIU) in Hungary. A total of 13 studies, each including one of the eight A. pleuropneumoniae serovars 1, 2, 4, 5, 6, 7, 9/11, and 13, performed over the period 2011 to 2020, were available (Table 1).
In an attempt to provide the most comprehensive reflection of on-farm A. pleuropneumoniae-endemic situations, a standardised weighted lung lesion score (LLS) was selected as the endpoint. At a practical level, given the variation in serovar virulence, the impact of dead pigs on the LLS standard deviation, and the number of pigs that could be included per trial and serovar, trials providing a "medium" impact, i.e., non-devastating but still causing mortality, were selected.
Where multiple studies with the same serovar were available, the weighted LLSs of the challenged vaccinated pigs (Vac) versus the challenged non-vaccinated pigs in the positive control group (Pos) were pooled and analysed while taking the potential effect of study into account (Table 1). Also, the variance between studies on the same serovars was analysed to estimate the repeatability of the aerosol chamber (AC) challenge model, and hence the reliability of the data.

Table 1 footnotes: * and ** Coglapix® in 50 mL and 500 mL presentations, respectively. + Stability test one week after first use.
Test Centre and Pig Sources
All studies were performed at a high-biosecurity, professional swine test centre (Prophyl Ltd., Mohács, Hungary). Pigs were recruited from one of two high-health, high-biosecurity farms that monitor clinical signs daily and carry out regular infectious disease testing (see inclusion criteria). On-farm post-mortems are regularly performed on fatalities, and incidents of concern are subject to laboratory investigation. The farms were, and still are, free of A. pleuropneumoniae, Mycoplasma hyopneumoniae, toxin-positive Pasteurella multocida (progressive atrophic rhinitis), Porcine reproductive and respiratory syndrome virus, Aujeszky's disease virus, Classical swine fever virus and African swine fever virus, based on regular PCR and/or serology tests performed by government, university, or private labs. Actinobacillus suis has never been diagnosed at the farms, either by clinical signs, post-mortem, or culturing. Routine testing for Swine influenza virus was not carried out but was included as part of the standard diagnostics following clinical signs of respiratory disease. Piglets were not used if they had any respiratory or other clinical signs.
Pigs of either sex and of different breeds (Table 1), at 5-6 weeks of age, from one of the two high-health pig source farms and declared free of infectious disease (see above), were recruited into the studies. Animals also had to have no previous clinical history of infection by Streptococcus suis and Glaesserella parasuis.
Any animal selected was serum-negative in the ApxIV ELISA (IDEXX APP-ApxIV Ab) and was retested at the end of the trial. ApxIV is an immunogenic protein that is A. pleuropneumoniae-specific and produced by all serovars [49]. This, together with the history based on careful disease monitoring at the source farm, substantially reduces the risk of non-seroconverters and "hidden" carrier pigs.
The Vaccine
The vaccine tested was Coglapix® (Ceva Santé Animale, Libourne, France), hereafter referred to as the C-vaccine, which comprises whole cells of A. pleuropneumoniae serovars 1 and 2 expressing ApxI, ApxII and ApxIII during the production process. Apart from the cross-protective Apx toxins, this vaccine contains all principal cell-wall structures of A. pleuropneumoniae in undetermined quantities, which contribute to A. pleuropneumoniae-protective immune responses: LPS, outer membrane proteins and several other cell-wall components; all details are available on the EMA web site [50].
Over the span of the 9 years of these studies, the vaccine composition and quality control did not change.
Characterisation and Preparation of the Challenge Strains
The challenge strains were all field strains isolated from pigs that had died in severe acute outbreaks of swine pleuropneumonia, and all were considered clinically virulent by local stakeholders (veterinarians, farm owners/managers and staff) (Table 1). The serovar of each A. pleuropneumoniae challenge strain was confirmed by a multiplex PCR based on the capsular loci, carried out as described previously [34,51].
Strains were assessed for their ability to grow in liquid culture in Tryptic soy broth supplemented with yeast extract and nicotinamide adenine dinucleotide solution, in shake-flasks rotated at 180 rpm and kept at 37 °C. Their growth curve was analysed by sampling at pre-determined sampling points and measuring the optical density (OD) at different wavelengths using a standard laboratory photometer. At each sampling point, the cultures were subjected to colony forming unit (CFU) counts using standard bacteriological techniques. The OD and CFU values were then aligned, and the strain-specific optimal wavelengths were determined. After this initial procedure, each time a challenge trial was performed, the strain used was prepared in shake flasks under regular OD monitoring, and growth was stopped when the desired live titre was reached, based on the OD-CFU calibration curve.
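The OD-CFU alignment described above amounts to fitting a calibration curve and reading the live titre off it. The following minimal sketch (in R) shows the idea; all numerical values, and the log-linear model choice, are illustrative assumptions rather than the authors' actual calibration data.

```r
# Illustrative paired measurements from pre-determined sampling points
od  <- c(0.10, 0.20, 0.40, 0.80, 1.20)        # optical density
cfu <- c(5.0e7, 1.2e8, 2.5e8, 5.5e8, 9.0e8)   # plate counts (CFU/mL)

fit <- lm(log10(cfu) ~ od)                    # log-linear calibration curve

# Predict the live titre at a newly measured OD, e.g., to decide when
# the culture has reached the desired titre
predict_titre <- function(x) 10^predict(fit, newdata = data.frame(od = x))
predict_titre(1.3)
```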
Aerosol Dosing Technique
A standardised AC model, developed at Ceva Phylaxia by V. Palya and J. Benyeda and inspired by previous AC work, was used [13,27,52]. The system consists of a box, a nebulizer, and tubing with a fan to transport the aerosol to the chamber. The plywood box has doors on both sides, enabling one-way movement of the animals during the challenge, as well as an acrylic observation window and a slot for aerosol sample collection. The aerosol is produced by an ULTRAfogger™ P4 nebulizer (ME International Installations GmbH, Achim, Germany), fitted with an experimental sample container, tubing, and a fan to transport the mist into the aerosol chamber.
Challenge strains were propagated and used for the test when a concentration of 10⁹ CFU/mL was reached. The A. pleuropneumoniae stock was diluted in sterile PBS to achieve the required treatment dose of 10⁶, 10⁷ or 10⁸ CFU/animal, as shown in Table 1. Actual calculations were made at the test site, using the following parameters to deliver a defined dose per animal during the aerosol treatment in the chamber (a worked sketch of this arithmetic is given after the list): • Pig body weight [53] and volume; • Number of pigs placed in the chamber for one run (6-10); • Volume of the chamber; • Volume of liquid turned to aerosol by the ultrasonic nebulizer in 10 min (usually 100-150 mL, depending on air temperature and humidity).
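The exact on-site formula is not given in the text, so the following R sketch is only a plausible reconstruction of the dose arithmetic from the listed parameters; every numerical value (chamber volume, respiratory minute volume, suspension concentration) is an assumption for illustration, not a figure from the study.

```r
# Plausible reconstruction of the per-animal aerosol dose arithmetic;
# all values below are assumptions, not the study's actual figures.
chamber_vol_L <- 1500   # chamber volume (L)
neb_vol_mL    <- 120    # liquid nebulized in 10 min (100-150 mL range)
susp_conc     <- 1e7    # CFU/mL of the diluted challenge suspension
minute_vol_L  <- 6      # respiratory minute volume per pig (L/min)
exposure_min  <- 10     # exposure time (min)

# Aerosol concentration in chamber air (CFU per litre of air)
air_conc <- susp_conc * neb_vol_mL / chamber_vol_L

# Estimated inhaled dose per pig over the exposure period
dose_per_pig <- air_conc * minute_vol_L * exposure_min
dose_per_pig   # the suspension would be diluted until this hits the target
```

With these placeholder inputs the estimate lands at about 4.8 × 10⁷ CFU per pig, within the 10⁶-10⁸ range used in the trials; in practice the dilution of the stock would be tuned until the calculation hits the target dose.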
Before the first run of a serovar challenge, the AC was moisturized by running the nebulizer with cold distilled water, to avoid precipitation of challenge material onto dry surfaces. Before each run, piglets introduced to the AC were given a couple of minutes to settle, ensuring normal respiration before the doors were closed and the challenge initiated. The pigs were evenly distributed and secured in the AC by partition fences; the aerosol created by the nebulizer was uniformly dispersed by internal ventilation. After 10 min of treatment, the pigs were kept in the AC for an additional 2 min with the nebulizer switched off, to allow complete uptake of the aerosol droplets (fresh air was provided during this time to allow normal breathing). Between trials, the box and the nebulizer were thoroughly rinsed with water, disinfected with Virkon S, rinsed with water again, and left to dry out fully.
Intranasal (IN) Challenge
Production of the challenge strain and calibration of the challenge dose were as described above. The cultures were prepared at 10× the desired challenge titre and diluted in sterile PBS to reach the working concentration. Each animal received 5 mL of the challenge dose into each nostril using intranasal cannulas; the exact individual animal dose is shown in Table 1.
Determination of Individual A. pleuropneumoniae Strain Challenge Dose
Prior to using the strains in vaccine challenge trials, but not as part of these trials, challenge-dose calibration studies were performed. In these trials, three groups of 10 non-vaccinated, A. pleuropneumoniae-negative pigs were challenged with doses of 10⁶, 10⁷ or 10⁸ CFU, monitored daily for clinical signs, and euthanized one week later. Mortality and LLS were evaluated to select the optimal challenge dose: the best dose was one that was non-devastating but still capable of causing mortality, thereby also reflecting on-farm A. pleuropneumoniae-endemic situations.
In addition, for A. pleuropneumoniae serovars 2 and 9/11, the pleuropneumonic impact of different challenge-dose concentrations was investigated via LLS, to evaluate the protective capability of the vaccine under different challenge loads.
Trial Design
All challenge trials were performed using the same overall study design, with the criteria specified above. The only difference was the use of the standardised AC challenge model by SSIU and IN application by R & D, as detailed above and in the overview (Table 1).
In all studies, pigs at the age of 6-7 weeks were randomly assigned to either a vaccinated and challenged group (Vac), or a non-vaccinated and challenged, positive control group (Pos). In the AC model studies, a non-vaccinated, non-challenged negative control group (Neg) was included.
Pigs were housed indoors, with controlled temperature and ventilation. Groups were allocated to different pens in the same barn; without direct contact but sharing same air space.
Until 2019, group sizes were chosen only to comply with the requirements of the European Pharmacopoeia: a minimum of 7 pigs in each of the Vac and Pos groups, with no Neg group required [48]. From 2019 onwards, following advice from a statistician, the group sizes of Vac and Pos were increased to 20, to achieve a statistical power of at least 80% based on calculations from the previous trials (a sketch of this type of calculation is given below). The Neg groups included in the AC model challenges were half the size of the Vac and Pos groups.
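As an illustration of the kind of power calculation referred to above, the following R sketch uses the base-stats power.t.test function. The effect size and standard deviation are placeholders, not the values actually derived from the earlier trials.

```r
# Sample-size sketch with placeholder inputs (not the authors' values)
power.t.test(delta = 1.0,       # expected group difference, log-LLS scale
             sd    = 1.1,       # assumed pooled standard deviation
             sig.level = 0.05,
             power = 0.80)      # solves for n per group
```

With these inputs the function returns roughly 19-20 pigs per group, consistent with the group size of 20 adopted from 2019 onwards.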
Each pig of the vaccine group (Vac) received the first 2 mL dose of the C-vaccine by intramuscular injection (D0), at an age of 6-7 weeks. Three weeks later, on D21, the pigs of the Vac group received a second 2 mL intramuscular dose of the same vaccine; the Pos and Neg group pigs received no treatment. Pigs were randomised according to bodyweight, and the staff responsible for the daily care and monitoring of the pigs were not involved in vaccination and were unaware of which pigs belonged to which test groups.
At D42, at 12-13 weeks of age, all pigs individually received pre-determined, equal doses of the relevant virulent A. pleuropneumoniae strain, either by application in an AC or by the IN route, as described above.
At D49, one week post-challenge, the trials were terminated. All live pigs were humanely euthanized and pathoanatomically evaluated to establish the lung-lobe lesions used to calculate the individual LLS. The persons performing the pathoanatomical evaluation were not involved in vaccination and were unaware of which pigs belonged to which test groups.
For A. pleuropneumoniae serovar 2, three studies; for serovars 4, 6 and 9/11, two studies each; and for the remaining serovars, one study each were included in the analyses.
Post-Mortem Evaluation of Weighted Lung Lesion Score (LLS) and Other Data
In the vaccination-challenge trials, all animals euthanised on day 7 post-challenge (D49) were subjected to necropsy by a pathologist and investigated for all pneumonic pathological lesions, including those characteristic of actinobacillosis. Evaluation of the post-mortem lesions in the lungs and on the pleura was performed blind, in accordance with a previously described scoring system [10]. All seven lobes of the lung of each pig in a trial were examined, and each lobe was scored for the prevalence of pathological lesions of pneumonia and/or pleuritis (pleuropneumonia). Scores were assigned according to the size of the affected area: absence = score 0, 1-20% = score 1, 21-40% = score 2, 41-60% = score 3, 61-80% = score 4, and 81-100% = score 5 [10].
Weighting factors were then applied to the individual lung-lobe scores according to the relative size of each lobe in the lung: right cranial = 0.07, right middle = 0.15, right caudal = 0.35, accessory lobe = 0.05, left cranial = 0.04, left middle = 0.09, and left caudal = 0.25 [54]. Pigs that had died during the week following challenge, before termination of the trial, were given the maximum LLS of 5. In this way, each pig's lung ended up with a total LLS of 0-5, the more lesions the higher the score (a computational sketch is given below).
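A minimal sketch (in R) of the weighted LLS computation described above; the per-lobe scores are illustrative, while the weights are those given in the text.

```r
# Lobe weights as given in the text (they sum to 1)
weights <- c(right_cranial = 0.07, right_middle = 0.15, right_caudal = 0.35,
             accessory = 0.05, left_cranial = 0.04, left_middle = 0.09,
             left_caudal = 0.25)

scores <- c(2, 3, 4, 1, 0, 2, 3)   # illustrative per-lobe scores (0-5)

lls <- sum(weights * scores)       # weighted LLS, range 0-5
lls
# A pig that died before trial termination is assigned the maximum: lls <- 5
```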
Statistical Analyses
The effect of the vaccine on LLS was analysed using linear (mixed) models. Each A. pleuropneumoniae serovar was analysed using its own separate model. If more than one study was available for a serovar, a random effect of study was included to account for possible clustering of effects within a study. To assess the importance of the between-study variation when more than one study was available for a serovar, the intraclass correlation coefficient (ICC) was calculated as the proportion of the total variance attributed to the random effect of study (σ²_study), i.e., ICC = σ²_study/(σ²_study + σ²_res), where the total variance is the sum of the random effect of study and the residual variance (σ²_res).
For the outcome (LLS), a limit of detection (LOD) was defined as half the minimum observed LLS. The LOD was added to all LLS values before log transformation, to improve the underlying assumption of normally distributed data. All analyses were done in R [55], using the lme4 package [56] for fitting mixed-effects models and the lmerTest package [57] for testing the significance of effects; a minimal sketch of this analysis follows.
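The following R sketch illustrates the analysis described above. The simulated data frame, variable names, and group sizes are assumptions for illustration; only the model structure (log-transformed LLS with a random effect of study) and the ICC computation follow the text.

```r
library(lme4)
library(lmerTest)   # adds p-values to lmer summaries

# Simulated example data: one row per pig (assumed structure)
set.seed(1)
d <- data.frame(
  lls   = c(runif(20, 0.1, 1.5), runif(20, 1.0, 5.0),
            runif(20, 0.1, 1.5), runif(20, 1.0, 5.0)),
  group = factor(rep(rep(c("Vac", "Pos"), each = 20), times = 2)),
  study = factor(rep(c("A", "B"), each = 40))
)

lod <- min(d$lls) / 2              # half the minimum observed LLS
d$loglls <- log(d$lls + lod)       # shift, then log transform

m <- lmer(loglls ~ group + (1 | study), data = d)
summary(m)                         # test of the vaccine (group) effect

vc  <- as.data.frame(VarCorr(m))
icc <- vc$vcov[vc$grp == "study"] / sum(vc$vcov)   # sigma2_study / total
icc
```

With only two studies per serovar, the random-effect variance is estimated from very few levels, so a singular-fit warning from lmer would not be unusual in a sketch like this.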
Serovar-Independent Protection
The protection of the C-vaccine against the homologous A. pleuropneumoniae serovar 1 and 2 strains was demonstrated by highly significant reductions of LLS in the Vac group compared to the Pos group: p = 0.00007 and p = 0.00124, respectively (Table 2). The protection of the C-vaccine against the heterologous A. pleuropneumoniae serovars 4, 5, 6, 7, 9/11, and 13 was demonstrated by equally highly significant reductions of LLS in the Vac group compared to the Pos group: p = 2.9 × 10⁻¹⁰ to p = 0.00953 (Table 2). LLS were absent in all Neg groups, except for some pleurisy in the 2019 serovar 6 group; this group revealed growth of Streptococcus spp. from these unexpected lesions.

Table 2. Results presented by sample size, mean weighted lung lesion score (LLS) and standard deviation (SD) for the vaccinated, challenged (Vac) groups and the non-vaccinated, challenged (Pos) groups for A. pleuropneumoniae serovars 1, 2, 4, 5, 6, 7, 9/11, and 13. The p-value is for the test of a difference between Vac and Pos within each serovar; significance is considered when p < 0.05.
High Repeatability and Reliability of AC Challenge Model
In all applicable studies (i.e., all AC model studies), the Neg group stayed seronegative in the A. pleuropneumoniae ApxIV ELISA, and in all studies except one, all pigs in the Neg group were without any LLS at the time of autopsy. In the second serovar 6 study, from 2019, pleurisy (and polyserositis) was observed in the Neg group, together with what were considered unusual increases of pleurisy in both the Vac and Pos groups; subsequent extended diagnostic investigations in pigs of all three groups revealed growth of Streptococcus spp. from such lesions. Some variation in mean LLS was observed between A. pleuropneumoniae serovars, supporting the decision to analyse each serovar separately (Table 2). For serovar 2, the ICC = 0.026, i.e., only 2.6% of the total variation was due to differences between studies. For serovar 9/11, the ICC = 0.022, i.e., 2.2%, and for serovar 4, the ICC = 0.048, i.e., 4.8%. This suggests a standardized set-up, in which the effect of study can essentially be ignored in the analyses. However, for serovar 6, the ICC = 0.35, i.e., 35%, suggesting that there were marked differences between these two studies.
Other Data
Clinical signs, including rectal temperature, were observed and recorded, but according to differing protocols in the different trials, and were therefore not considered for efficacy evaluation.
Discussion
The concept of combining ApxI, ApxII and ApxIII [3,6,39,40] with cellular components of A. pleuropneumoniae [38,44] was demonstrated in this study to result in highly effective, highly significant reductions in LLS caused by homologous as well as heterologous serovars. Reduction of mortality, as well as improved productive performance, has been demonstrated previously in field studies, in comparison both with non-vaccinated pigs [7] and with pigs given a subunit vaccine [58]. The use of a vaccine with these characteristics will increase animal well-being and reduce both antimicrobial use and economic losses due to pleuropneumonia on A. pleuropneumoniae-endemic swine farms [3,6,40].
Clinical signs were recorded in our studies, as in almost all similar challenge studies. However, the scoring of clinical signs is commonly performed according to differing, non-standardised protocols, preventing comparison between studies; this was unfortunately also the case for our studies. Rectal temperature is influenced by micro-climatic conditions and individual stress levels, and individual pigs may have quite variable baseline temperatures. Furthermore, rectal temperature is not unambiguously correlated with behaviour, well-being, and appetite. For these reasons, rectal temperature data were not considered to add value to post-challenge clinical evaluation and lung lesion scoring, in line with other authors [14][15][16][17][18][19][20][21]24,[26][27][28]44].
Weighing of the pigs, other than at the time of randomisation, was not considered relevant, as the short observation period prevents an ADWG calculation from being a meaningful parameter.
LLS as endpoint data was considered the most comprehensive, reliable (measurable), and valuable parameter for describing the impact of A. pleuropneumoniae-induced pleuropneumonic disease, mimicking the situation on an endemic farm; studies were selected to fit this purpose. A combination of limitations in group sizes/number of studies and variation in strain virulence prevented mortality from being a relevant parameter. If mortality had been the endpoint, it would have prevented LLS from being a meaningful parameter, due to quite extreme variation (standard deviation). Finally, true mortality can be biased by the commonly short duration of post-challenge investigation, whether humane euthanasia is performed or not, and by the personal threshold of when to euthanise. In small groups, as in A. pleuropneumoniae challenge studies, one dead pig more or less has a great impact on the mortality rate and the subsequent statistical analysis.
To our knowledge, based on publicly available information, this is the most exhaustive testing of any A. pleuropneumoniae vaccine, experimental or commercial. We analysed the efficacy of the C-vaccine in protecting against lung lesions caused by field strains of eight different serovars, all isolated from animals in outbreaks considered clinically virulent by all local stakeholders and of high relevance to swine production at large, i.e., six heterologous serovars (4, 5, 6, 7, 9/11, and 13) and the two homologous serovars (1 and 2) on which the vaccine is based. We found a significant reduction in LLS for the Vac groups compared to the Pos groups. This implies that the vaccine is capable of inducing serovar-independent protection, a valuable characteristic for optimizing the control of A. pleuropneumoniae-related pig health problems.
The two R & D studies used IN challenge according to the requirements of licensing authorities; these studies were included to expand the range of serovars tested. When considering challenge models, for most investigators the choice is between IN and AC. IN has an inherent accuracy in the applied dose but is labour-intensive and comparatively more expensive. In addition, depending on pig handling and dose application (e.g., sedation/non-sedation), IN is potentially more stressful, which can increase the respiratory rate, hence the respiratory volume, and can affect the planned dose. With AC, skilful pig handling can ensure acceptance of the chamber by the animals and less stress. Our results indicate that reproducible protection studies can be performed with AC when using the described standardised AC model for A. pleuropneumoniae challenges, which to our knowledge is the only one validated for reproducibility using the intraclass correlation coefficient (ICC). The determined reproducibility of the challenge studies was high, and the data produced are of high reliability. Hence, accurate individual challenge-dose calculations and lung lesion scoring based on a standardized methodology [10], adapted to the biological structure of the lung [54], are reflected in a standardized, reproducible, weighted LLS from both the IN model and the standardised AC model used in this multi-study analysis. Subsequently, the majority of the studies presented here, all post-release studies performed by SSIU, and future studies use, or will use, the standardised AC model.
The variation attributable to differences between studies (all AC model challenges) was very low: 2.6%, 2.2% and 4.8% for the three serovar 2, two serovar 9/11 and two serovar 4 challenge studies, respectively. An outlier is the 35% of variation attributable to study between the two serovar 6 challenges. An explanation could be that excessive pleuritis was observed in a larger proportion of the animals in the 2019 study; bacteriology demonstrated the presence of Streptococcus spp. in these samples. Significant improvement in LLS compared to the control group was still observed in this trial alone, and even more so when it was analysed together with the 2018 study. That infection with other pathogens, e.g., Bordetella pertussis, can affect the lesion score in A. pleuropneumoniae-challenged animals has been documented by others [25]. The absence of lung lesions in all Neg groups, except the one with pleuritis of likely Streptococcus spp. origin, combined with sustained negative ApxIV serology, indirectly confirms that the lung lesions in the Vac and Pos groups originated only from the specific serovar used for challenge.
Searching for AC challenge studies with A. pleuropneumoniae to compare serovar virulence and AC models, sixteen relevant papers were found [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28]. In total, six common and important serovars were investigated: seven papers on serovar 1 [14][15][16][17][26][27][28], four on serovar 7 [21,[23][24][25], two on serovar 2 [18,19], one on serovar 5 only [20], one on serovars 2, 5 and 6 [13], and one on serovars 2 and 9 [22]. Serovars 2, 5 and 6 were considered to be of moderate to high virulence [13], but this was based on small numbers of animals, and the result for serovar 5 can be considered surprising given that it is normally considered of high virulence [59]. Serovar 7 was considered moderately virulent [25]. Based on very high doses in identical trial designs, serovars 1 and 5 appear comparable in virulence as measured by mortality alone [15,20]. When comparing dosage and outcome empirically across the heterogeneous trial designs, serovar 1 stands out as the most virulent, closely followed by serovars 5 and 9, placing serovars 2, 6 and 7 as moderately to highly virulent. However, most of these studies, like ours, were not designed to reveal differences in virulence; rather, dosing was aimed at obtaining similar disease severity distributions in the positive control groups to enable assessment of vaccine protection. Nonetheless, our data broadly agree with the literature in that serovars 1, 5 and 9/11 are the most virulent, serovars 2 and 13 of slightly lesser virulence, and serovars 4, 6 and 7 moderately to highly virulent. It should be noted that the serovar 2 isolate we used was from Europe and expresses ApxII and ApxIII, being of higher virulence than serovar 2 isolates from North America, which typically express only ApxII [30]. A definitive rating of the virulence of different serovars by AC would require fully standardised, extensive head-to-head trials carried out in a reproducible challenge model similar to that presented and validated in this publication.
Whatever the A. pleuropneumoniae serovar, strains over the years tend to cluster closely, with very little genomic variation [60,61]. Therefore, the time span of up to 9 years between isolation of a strain and its use in challenge, and up to 14 years until the present date, should be considered of little, if any, consequence.
In this study, we used the standardised weighted LLS model to assess vaccine efficacy against multiple serovars of A. pleuropneumoniae after AC challenge in all post-release studies, as well as in the IN-challenge studies of R & D. In the 16 AC-challenge papers discussed above, 28 test groups can be identified: five report mortality and describe lung lesions in general pathological terms [14,[25][26][27], three use in-house models considering organs other than the lungs and pleura (heart + pericardium) [13,16,17], and two calculate the percentage of lung tissue affected [14,20]. Only seven score lung lesions with the standard scheme of Hannan and colleagues [10,18,19,[21][22][23][24][25]. None of them used a weighted LLS that takes into account the size of the individual lung lobes for optimal comparison between pigs and groups.
Also, the time from challenge to scoring varies substantially between the 28 groups: twelve fall in the interval of 15 to 22 days and focus only on chronic lesions [14,[16][17][18][21][22][23][24][25]28], three are intermediate, at 12-14 days [14,28], nine focus on acute, subacute and subclinical lesions in the interval of 5-7 days [14,15,[18][19][20]22,24,26,27], and one was assessed only at 24 h [14]. In three publications, comparisons were made between groups in which dead animals were not part of the analyses [17,22,24]. Finally, group sizes are predominantly small, with only 5 of the 28 groups using 10 animals or more [16,17,25], another five using 8 pigs [14,18,20,21,23], and the majority using fewer pigs per test group [13][14][15][17][18][19]22,[24][25][26][27][28]. Thus, the variation in both dosing and assessment methodology severely complicates comparison with other reported AC-challenge studies. Further comparative studies would best be undertaken in a highly standardised model with reproducible methodology, as reported here. Also, further research into methods for validating A. pleuropneumoniae-induced pleuropneumonic losses in general is key, and of particular interest for the further evaluation of the subclinical/subacute forms [3,13,16,19]. In a world of reducing antimicrobial use, the ability to perform exact cost-benefit analyses of different A. pleuropneumoniae control strategies is already of great importance, and systematic LLS evaluation is likely to be an integral component of such schemes [9].
Limitations of this study include the lack of standardised, comparable, and objective scoring protocols for clinical signs; the practical and statistical limitations in producing highly reliable data on both LLS and mortality from the same challenge study for most, if not all, A. pleuropneumoniae serovars; and the lack of re-isolation of the challenge strains from pathological lesions. However, the latter is mitigated by the monitoring and biosecurity at the farms delivering the test animals, as well as at the test facility, by the lack of lung lesions in the Neg group animals, and by their sustained ApxIV-negative serology throughout the studies; thus, the LLS can be considered to originate from the specific field strain investigated. In addition, the combination of both IN and AC challenge studies in the same multi-study analysis could be considered a limitation. However, both challenge models had high dosing reliability, and endpoint measurements followed the same standardised, objective, and comparable scoring protocol.
Conclusions
The C-vaccine was clearly effective, providing serovar-independent and highly significant reductions of LLS in multiple challenges with A. pleuropneumoniae serovars 1, 2, 4, 5, 6, 7, 9/11, and 13, tested in either the IN-challenge or the standardised AC-challenge model, and in both models measured on standardised data of high reliability. To our knowledge, this is the largest single published report of efficacy against multiple A. pleuropneumoniae serovars using a standard, validated, reproducible protocol. In addition, to our knowledge, the standardised AC model is the only one validated for reproducibility in A. pleuropneumoniae challenge studies, providing definitive pig challenge doses and weighted LLS for accurate biological evaluation of disease and of vaccine protection against A. pleuropneumoniae.
Informed Consent Statement:
The animals used in these studies were privately owned by Prophyl Ltd. and, as such, did not require study-specific owner informed consent, as the signature of the CRO's representative on the study protocol is sufficient.
Data Availability Statement:
The data presented in this study are available from the corresponding author on reasonable request.
"year": 2022,
"sha1": "920b3dc097e079693aae04a0bd48f7b1572bab6f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/12/23/3244/pdf?version=1669182824",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c867628a368215700bc4e13ba145f9fe5a38c64",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Antimicrobial resistance pattern and molecular genetic distribution of metallo-β-lactamases producing Pseudomonas aeruginosa isolated from hospitals in Minia, Egypt
Background: Pseudomonas aeruginosa (P. aeruginosa) represents a great threat to public health worldwide, due to its high ability to acquire resistance to different antibiotic classes. Carbapenems are effective against multidrug-resistant (MDR) P. aeruginosa, but their widespread use has resulted in the emergence of carbapenem-resistant strains, which is considered a major global concern. This study aimed to determine the prevalence of carbapenem resistance among P. aeruginosa strains isolated from different sites of infection.

Methods: Between October 2016 and February 2018, a total of 530 clinical specimens were collected from patients suffering from different infections, then processed and cultured. Isolates were tested for extended-spectrum β-lactamase (ESBL) and metallo-β-lactamase (MBL) production using the double-disk synergy test, the modified Hodge test, and the disc potentiation test. PCR was used for the detection of selected carbapenemase-encoding genes.

Results: Of the 530 samples, 150 (28.3%) P. aeruginosa isolates were obtained. MDR strains were found in 66.6% (100 of 150) of isolates. Of the 100 MDR P. aeruginosa isolates, 54 (54%) were ESBL producers and 21 (21%) were carbapenem-resistant P. aeruginosa. MBL production was found in 52.3% (eleven) of the carbapenem-resistant isolates. CTX-M15 was found in 55.5% of the ESBL-producing P. aeruginosa. The carbapenemase genes detected were blaIMP (42.8%, nine of 21), blaVIM (52.3%, eleven of 21), blaGIM (52.3%, eleven of 21), and blaSPM (38%, eight of 21). In addition, isolates that were positive for the tested genes showed high resistance to other antimicrobials, such as colistin sulfate and tigecycline.

Conclusion: Our study indicates that P. aeruginosa harboring ESBL and MBL with limited sensitivity to antibiotics is common among the isolated strains, which indicates the great problem facing the treatment of serious infectious diseases. As such, there is a need to study the resistance patterns of isolates and to screen for the presence of ESBL and MBL enzymes, in order to choose the proper antibiotic.
Introduction
Pseudomonas aeruginosa is an opportunistic pathogen that can cause outbreaks of hospital-acquired and life-threatening infections, especially among immunocompromised and critically ill patients. 1 P. aeruginosa can cause respiratory tract, burn, and wound infections, as well as otitis media. 2 P. aeruginosa infections are commonly associated with high mortality, attributed to the organism's intrinsic resistance to many classes of antimicrobial agents and its ability to acquire resistance by mutation and horizontal transfer of resistance determinants. 3 The rapid emergence of penicillin and cephalosporin resistance among P. aeruginosa strains has become a serious clinical problem worldwide. Carbapenems (imipenem and meropenem), potent antipseudomonal drugs, have been used as the last resort for the treatment of infections associated with multidrug-resistant (MDR) P. aeruginosa isolates. 4 Resistance to carbapenems has developed through decreased permeability, overexpression of efflux-pump systems, alterations in penicillin-binding proteins, and carbapenem-hydrolyzing enzymes (carbapenemases). 5 Carbapenemases fall into three β-lactamase (BL) classes: Ambler classes A and D (serine carbapenemases) and class B (zinc-dependent). The class B enzymes require zinc for their catalytic activity, are inhibited by metal chelators such as EDTA and thiol-based compounds, and are called metallo-BLs (MBLs). MBL enzymes are able to hydrolyze all β-lactam antibiotics, with the exception of monobactams. The genes encoding these enzymes have been found on highly mobile elements, which is the main cause of their dissemination in the hospital environment. MBLs are mainly plasmid-mediated and in some cases chromosomally mediated. The most common MBL enzymes belong to the Verona integron-encoded MBL (VIM), imipenemase (IMP), São Paulo MBL (SPM), German imipenemase MBL (GIM), Seoul imipenemase MBL, and New Delhi MBL families. 6 Infections caused by MBL-producing organisms are associated with high morbidity and mortality, especially in hospitalized and immunosuppressed patients. 7 Recently, many studies have reported the prevalence of P. aeruginosa strains harboring both extended-spectrum BL (ESBL) and MBL genes, which is considered a great challenge for antimicrobial therapy. 8 In addition, it is difficult to detect ESBLs phenotypically. 9 As such, molecular techniques are required to analyze the coexistence of carbapenemases and ESBLs in the same strain. The aim of this study was to determine the prevalence and drug-resistance profile of carbapenem-resistant P. aeruginosa (CRPA) isolates obtained from hospitalized patients with various infections.
Bacterial isolates
A total of 150 (28.3%) P. aeruginosa isolates were obtained from 530 samples collected from hospitalized patients with various infections as part of routine hospital-laboratory procedures. Samples were processed and cultured on blood agar at 37°C and 42°C for 24 hours. One colony was picked and subcultured on MacConkey agar and cetrimide agar plates. Isolated colonies were further identified according to colony morphology, lactose fermentation, and biochemical characteristics (oxidase, triple sugar iron, urease, and sulphide-indole-motility tests). P. aeruginosa colonies grew on cetrimide agar, showed positive reactions in catalase and oxidase tests, grew at 42°C (used to distinguish P. aeruginosa from other lactose non-fermenting Gram-negative rods), and showed negative results in triple-sugar iron and glucose-fermentation tests. 10,11 Colonies were purified by streaking, and pure colonies were stored at 4°C.
Phenotypic detection of ESBL production
Detection of ESBL production by P. aeruginosa strains was performed by the double-disk synergy test (DDST). 13 Disks of ceftazidime, cefotaxime, aztreonam, and cefepime (30 µg each) were placed at a distance of 30 or 20 mm (center to center) from an amoxicillin (20 µg)-clavulanic acid (10 µg) disk. An increase in the zones of inhibition toward the amoxicillin-clavulanic acid disk is indicative of the presence of ESBL.
Phenotypic detection of MBL production 14
Imipenem-EDTA combined disk synergy testing was used for identification of MBL-producing isolates according to Lee et al. 14,15 A 0.5 M EDTA solution (pH 8) was prepared by dissolving 18.61 g of EDTA in 100 mL distilled water, adjusting the pH to 8 with NaOH, and autoclaving. The tested organisms were cultured on the surface of Müller-Hinton agar plates. Two 10 µg imipenem disks and two 10 µg meropenem disks were placed on the surface of the agar plates, and 5 µL of EDTA solution was added to one imipenem and one meropenem disk. The zones of inhibition around the disks with EDTA were examined after 16-18 hours' incubation at 35°C and compared to those without EDTA. An increase in zone diameter of at least 7 mm around the imipenem-EDTA and meropenem-EDTA disks was considered a positive result; a sketch of this interpretation rule is given below.
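A minimal sketch (in R) of the combined-disk interpretation rule described above (an increase of at least 7 mm with EDTA scored as MBL positive); the isolate names and zone diameters are illustrative only.

```r
# Zone diameters (mm) for three hypothetical isolates
zones <- data.frame(
  isolate  = c("P1", "P2", "P3"),
  ipm      = c(10, 14, 9),     # imipenem disk alone
  ipm_edta = c(19, 16, 21)     # imipenem + EDTA disk
)

# Rule from the text: >= 7 mm increase with EDTA indicates MBL production
zones$mbl_positive <- (zones$ipm_edta - zones$ipm) >= 7
zones
```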
Amplification of ESBL-CTX-M15 and MBL genes
Boiling was used to prepare the DNA templates. Specific primers for cefotaximase (blaCTX-M15), blaVIM, blaIMP, blaGIM, and blaSPM (Table 1) were used for PCR amplification of the genes. PCR amplification was performed in a 25 µL reaction mixture containing 0.2 µL Taq polymerase (5 U/µL), 1 pmol of each forward and reverse primer, 2.5 µL dNTP mix (2 mM), 3 µL DNA template, and 14.8 µL DNase- and RNase-free water. PCR reactions were performed using a Mastercycler personal 5332 (Eppendorf, Hamburg, Germany). Amplified products were analyzed by electrophoresis in a 2% agarose gel at 80 V for 45 minutes in Tris-borate-EDTA buffer containing ethidium bromide, and visualized under ultraviolet irradiation. 16,17

Results and discussion

P. aeruginosa is commonly associated with hospital-acquired infections. With regard to the specimen site, 150 of 530 samples (28.3%) were positive for P. aeruginosa, similar to the results reported by Al-Haik et al 18 and by Mansour et al, 19 who found P. aeruginosa in 19 of 38 (50%) patients admitted to the intensive-care unit (ICU). Our results showed a high incidence (68.4%) of P. aeruginosa among samples collected from patients suffering from otitis media, which was higher than that reported by Umar et al, 21 who found that 23.2% of otitis media samples were positive for P. aeruginosa. The distribution of isolates across major hospitals in Minia Governorate was analyzed: a high incidence of P. aeruginosa was observed among samples collected from the chest hospital, while all samples obtained from Minia General Hospital were negative for P. aeruginosa (Figure 1).

P. aeruginosa possesses MDR against a wide variety of antibiotics. Resistance of P. aeruginosa is usually accompanied by the production of many BLs, active expulsion of antibiotics by efflux pumps, and alteration of outer-membrane protein expression. 9,22 Resistance to a variety of β-lactam antibiotics is a growing problem, due to their continuous mutation, which makes BL production the most common cause of drug resistance and antimicrobial therapy failure. 23 Among BLs, ESBLs are widely distributed among members of the Enterobacteriaceae. They are also found in Acinetobacter baumannii and P. aeruginosa. At first, TEM-type and SHV-type ESBLs were the most dominant among Gram-negative isolates in Europe and other regions; since the last decade, the CTX-M type ESBL has become the most prevalent.
ESBL production is widely spread among Enterobacteriaceae, especially P. aeruginosa. Our study showed that all P. aeruginosa isolates were completely resistant to azlocillin and amoxicillin-clavulanic acid. Of 150 P. aeruginosa isolates, 100 (66.6%) were MDR, and 21 (21%) of these were CRPA (eleven isolates were imipenem-resistant and ten meropenem-resistant). Figure 2 shows that 46%, 28.7%, and 28% of P. aeruginosa isolates were resistant to polymyxin B, colistin sulfate, and tigecycline, respectively. In this study, it was found that 54 (54%) isolates of MDR P. aeruginosa were ESBL producers. Similarly high production of ESBL was reported by Ahmad et al, 24 who reported that ESBL production by P. aeruginosa isolates was 61.6%, while a lower incidence (27.33%) was reported by Dutta et al. 25 In addition, our results showed that eleven (11%) isolates were MBL-producing P. aeruginosa. Furthermore, MBL-producing strains represented 52.3% (eleven of 21) of CRPA isolates. Coexistence of ESBL and MBL was found among 5% of MDR P. aeruginosa and five of 21 (23.8%) CRPA isolates. Antibiotic-resistance patterns of ESBL-producing strains revealed that all ESBL producers were completely resistant to azlocillin, amoxicillin-clavulanic acid, ampicillin-sulbactam, and cefepime. Co-resistance with other antibiotics was observed, including colistin sulfate, tigecycline, and polymyxin B (Table 2). Also, MBL-producing strains showed high resistance to cefepime and carbenicillin (72.7% each), but lower resistance was observed against ciprofloxacin, colistin sulfate, and levofloxacin (36.3% each). Ilyas et al 26 showed a higher incidence of antibiotic resistance exhibited by MBL- and ESBL-producing P. aeruginosa. They reported that ESBL- and MBL-producing P. aeruginosa isolates were completely resistant to amoxicillin-clavulanic acid, ceftriaxone, ciprofloxacin, and cefepime. They also showed a higher incidence of MBL-producing P. aeruginosa (25.7%) and a lower incidence of ESBL production (8.5%). Mirsalehian et al 27 found that all MBL-producing P. aeruginosa were colistin-sensitive and 37.5% were resistant to aztreonam, while in the present study a low incidence of resistance to colistin, ciprofloxacin, and levofloxacin (36.4%) and a resistance rate of 54.5% against aztreonam were found. Bashir et al 28 reported that all MBL-producing P. aeruginosa isolates were resistant to gentamicin, ceftazidime, carbenicillin, tobramycin, ceftriaxone, ofloxacin, cefoperazone, cefoperazone-sulbactam, and ceftazidime-clavulanic acid, with low resistance to polymyxin B.
The rapid spread and emergence of MBL- and ESBL-producing P. aeruginosa isolated from hospitals is of great concern. In addition, differences in resistance patterns among strains isolated from different countries may be attributed to antibiotic use, horizontal gene transfer, and environmental conditions. Therefore, it is important to test isolates for MBL and ESBL production and to test for antibiotic susceptibility before antimicrobial therapy. Table 3 shows that the highest incidence of ESBL production was observed among MDR P. aeruginosa samples isolated from ear infections (80%), followed by those isolated from chest infections (75%) and ICU patients (70%). The highest incidence of MBL production was observed among MDR P. aeruginosa samples isolated from wound infections (19%), followed by those isolated from ear infections (14.3%). Nithyalakshmi et al 29 reported that the frequency of occurrence of ESBL among P. aeruginosa isolates was 21.96%, and most ESBL producers were obtained from urine samples (27.7%), followed by respiratory infections (23.68%) and wound infections (22.95%).
All MDR P. aeruginosa isolates were tested for CTX-M15 and the carbapenem-resistance genes bla IMP , bla VIM , bla GIM , and bla SPM . It was found that 55.5% (30 of 54) of ESBL-producing P. aeruginosa isolates were harboring CTX-M15, which was higher than in another study 17 reporting that, out of 200 MDR P. aeruginosa isolates, 19 were positive for CTX-M15, of which 64.28% were ESBL-positive. Although carbapenem resistance was found among 21 P. aeruginosa isolates, only eleven were found to harbor MBL genes. Of the 21 carbapenem-resistant strains, 42.8% (nine of 21) were positive for bla IMP , 52.3% (eleven of 21) for bla VIM , 52.3% (eleven of 21) for bla GIM , and 38% (eight of 21) for bla SPM .
The distribution of carbapenem-resistance genes and bla CTX-M15 among MDR P. aeruginosa isolates producing ESBL and/or MBL was tested (Table 4). Of eleven MBL-producing MDR P. aeruginosa isolates, three (27.2%) were positive for CTX-M15, nine (81.8%) for bla IMP , four (36.3%) for bla VIM , five (45.4%) for bla SPM , and six (54.5%) for bla GIM . A lower incidence was found by Zubair et al, 30 who reported that among 22 isolates positive for MBL production phenotypically, only five were harboring MBL genes. Furthermore, they reported that bla VIM was the predominant gene, and none of the other genes were detected.
Also, it was found that 55.1% of ESBL/non-MBL-producing MDR P. aeruginosa isolates were positive for CTX-M15, while none of these strains was found to harbor bla IMP . On the other hand, bla VIM was the most common carbapenem-resistance gene (14.2%). Rafiee et al 31 and Laudy et al 32 showed that all ESBL-producing isolates were negative for the CTX-M gene, while Ahmed et al 33 reported a lower incidence of bla CTX-M production (10.7%) among P. aeruginosa strains isolated from Makkah hospitals. All MBL/non-ESBL-producing P. aeruginosa harbored bla IMP -like genes and 50% were positive for bla GIM , while only 33.3% were positive for both bla VIM and bla SPM (Table 4). Similar findings were shown by Abiri et al. 34 In contrast, Mirsalehian et al 27 reported that bla VIM was the most prevalent carbapenemase gene among MBL-producing P. aeruginosa, while 25% of MBL isolates were positive for bla IMP and all MBL isolates were negative for bla GIM and bla SPM . Our results showed that five isolates of MDR P. aeruginosa were ESBL and MBL coproducers. Three isolates (60%) were found to harbor bla CTX-M15 , bla IMP , bla GIM , and bla SPM , and two (40%) were positive for bla VIM . MDR P. aeruginosa samples were classified into eight groups according to the carbapenem-resistance genes harbored by MBL-producing P. aeruginosa isolates, in order to study their demographic, phenotypic, and genotypic features: group A comprised MBL-producing P. aeruginosa isolates harboring two genes (bla IMP and bla GIM ); group B, isolates positive for bla IMP , bla VIM , and bla SPM ; group C, those positive for bla IMP ; group D, isolates positive for bla IMP and bla SPM ; group E, isolates positive for bla IMP , bla GIM , and bla SPM ; group F, isolates positive for bla IMP and bla VIM ; group G, MBL-producing P. aeruginosa isolates positive for bla VIM , bla GIM , and bla SPM ; and group H, isolates positive for bla VIM and bla GIM (Table 5).
Our study showed that all MBL-producing P. aeruginosa isolates in groups A-H were obtained from Minia University Hospital, except one isolate that had been obtained from the chest hospital. Of eleven MBL-producing P. aeruginosa isolates, five were ESBL producers and were obtained from the surgery and ICU units of Minia University Hospital. Of these, three (two from the surgery unit and one from the ICU) were positive for the CTX-M15 gene. The isolate obtained from the ICU showed resistance to meropenem, polymyxin B, tigecycline, gentamicin, amikacin, and ceftazidime, which represents a great challenge for the antimicrobial therapy of these patients. The other two isolates (surgery unit) showed resistance to gentamicin, ceftazidime, meropenem, imipenem, tigecycline, and colistin sulfate. Furthermore, the isolate obtained from the chest hospital belonged to group H, was positive for ESBL but negative for CTX-M15, and showed resistance to ceftazidime, cefoperazone, gentamicin, amikacin, tigecycline, polymyxin B, and meropenem. Chaudhary et al 35 found that the frequency of bla IMP and bla VIM among MBL-producing strains was 28.73% and 47.12%, respectively. Coexistence of MBL and ESBL was found among 14.3% of isolates, of which 17.5% were positive for TEM and IMP genes and 14.8% positive for AMP-C and VIM. Also, they found that isolates coproducing ESBL and MBL were highly resistant to cefepime, piperacillin-tazobactam, ceftazidime, meropenem, and imipenem.
Our study showed the prevalence of ESBL- and MBL-producing P. aeruginosa with limited sensitivity to antibiotics among the isolated strains, which indicates a great problem in the treatment of serious infectious diseases. In addition, there is a need to study the resistance patterns of isolates and to screen for the presence of ESBL and MBL enzymes, in order to choose the proper antibiotic.
Study limitations and future recommendations
We detected the distribution of genes only among resistant strains. Quantitative PCR assays are recommended for future studies, and should be performed to verify expression differences of different resistance genes in MDR P. aeruginosa.
Conclusion
Using carbapenems in clinical practice was initially the solution to the treatment of serious bacterial infections caused by β-lactam-resistant bacteria. Due to their widespread use, the emergence of MBL-producing strains and of strains coproducing both ESBL and MBL has been observed. As found in our study, strains showed high resistance to the commonly used antibiotics, which emphasizes the need to know the resistance patterns and to test for the coexistence of these enzymes, in order to design newer policies for antimicrobial chemotherapy.
Disclosure
The authors report no conflicts of interest in this work. | 2019-08-10T12:08:40.522Z | 2019-07-16T00:00:00.000 | {
"year": 2019,
"sha1": "52fc0a077f307be7f38c43ee22d47b3601f0f440",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=51280",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b24a9cedd1563ec9e4f2c247b9f45732051f298b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
18744493 | pes2o/s2orc | v3-fos-license | Combined Cognitive-Psychological-Physical Intervention Induces Reorganization of Intrinsic Functional Brain Architecture in Older Adults
Mounting evidence suggests that enriched mental, physical, and socially stimulating activities are beneficial for counteracting age-related decreases in brain function and cognition in older adults. Here, we used functional magnetic resonance imaging (fMRI) to demonstrate the functional plasticity of brain activity in response to a combined cognitive-psychological-physical intervention and investigated the contribution of the intervention-related brain changes to individual performance in healthy older adults. The intervention was composed of a 6-week program of combined activities including cognitive training, Tai Chi exercise, and group counseling. The results showed improved cognitive performance and reorganized regional homogeneity of spontaneous fluctuations in the blood oxygen level-dependent (BOLD) signals in the superior and middle temporal gyri, and the posterior lobe of the cerebellum, in the participants who attended the intervention. Intriguingly, the intervention-induced changes in the coherence of local spontaneous activity correlated with the improvements in individual cognitive performance. Taken together with our previous findings of enhanced resting-state functional connectivity between the medial prefrontal cortex and medial temporal lobe regions following a combined intervention program in older adults, we conclude that the functional plasticity of the aging brain is a rather complex process, and an effective cognitive-psychological-physical intervention is helpful for maintaining a healthy brain and comprehensive cognition during old age.
Introduction
Normal aging is associated with cognitive decline in various domains, such as executive control, working memory, and episodic memory, and has been linked with structural morphology and functional changes in the brain [1,2]. There has been accumulating evidence that older adults exhibit structural volumetric decreases, thinning of white matter tracts, and changes in functional activation patterns in several brain regions, specifically the prefrontal cortex (PFC) and medial temporal lobe (MTL) [3][4][5][6][7]. Fortunately, the aging brain exhibits experience-dependent plasticity, and increasing studies find that cognitive, physical, or social activity is greatly beneficial for the elderly to promote cognitive performance and optimize brain structure and function [8][9][10].
Neuroimaging studies have demonstrated that cognitive training can counteract age-related brain structural and functional losses. For instance, memory training can increase the cortical thickness [11] and induce greater activation in brain regions associated with self-initiated semantic strategy use [12] in healthy older adults. Moreover, memory training has been shown to enhance hippocampal activity during memory retrieval in patients with mild cognitive impairment (MCI) [13] and attenuate the differences in brain activation patterns between MCI patients and healthy controls [14].
Executive function training can induce functional alterations in cognitive control-related brain regions [15,16]. Apart from cognitive training, physical exercise training has also been found to induce significant alterations in structural morphology and cerebral function [17][18][19][20].
Given that multiple factors and conditions have shown beneficial effects on the aging brain [2], it is expected that a combined cognitive-psychological-physical intervention including cognitive, physical, and social activities should be a more efficient approach to improve brain function in old age [21][22][23]. Following this assumption, a previous study from our group explored the effects of a combined intervention including cognitive training, physical exercise, and group counseling on the functional plasticity in resting-state interregional connectivity in healthy older adults [24]. The results showed that the combined intervention enhanced the functional connectivity between the MTL and medial prefrontal cortex (mPFC), and the interregional connectivity changes were correlated with individual cognitive performance. This study confirmed that the combined intervention can improve the functional connectivity between the MTL and mPFC in older adults. Nevertheless, these findings do not answer the question of whether or not the combined intervention could induce regional brain functional plastic changes.
In recent years, resting-state fMRI (RS-fMRI) has become an increasingly widely used method to investigate brain functional plasticity [25]. RS-fMRI can be used to explore the intrinsic functional architecture of the brain without the need for participants to perform a specific task [26][27][28]. Previous studies usually use resting-state functional connectivity (RSFC) analysis to examine intervention effects on functional organization changes between distinct brain regions [24,29]. However, the RSFC method can only be used to measure the interregional connectivity between spatially remote brain regions and strongly rests upon regions of interest (ROI) defined with prior knowledge [30]. Regional homogeneity (ReHo) analysis is a unique RS-fMRI method for evaluating the local temporal synchronization of spontaneous low-frequency BOLD signals, which measures the similarity between the time series of a given voxel and those of its nearest neighbors [31]. It has been shown that variations in ReHo values have a neurobiological and structural basis [32]. As a data-driven approach, the ReHo method has high test-retest reliability [33,34] and can detect regional changes that are induced by different conditions across the whole brain without requiring any prior knowledge [31]. It is widely applied in exploring the brain function of healthy people and can predict individual performance in cognitive tasks [35][36][37]. Moreover, ReHo has been used to monitor disease progression in patients with Alzheimer's disease (AD) and MCI [38,39]. Thus, ReHo could be a potentially important complementary method to examine regional plasticity in the aging brain.
In the present study, we aimed to further explore regional functional plasticity by using the ReHo method in an exploratory whole-brain analysis. The intervention group attended a combined intervention program consisting of cognitive training, physical exercise, and group counseling, which has been introduced in Li et al. [24]. The cognitive training component focused on mnemonic and executive function training. Tai Chi, as a typical form of physical exercise, has been demonstrated to effectively improve cognitive function [40] and optimize the regional functional homogeneity of the intrinsic brain architecture in older adults [41]. In addition, group counseling was adopted to promote the psychological well-being of older adults, given that positive mood states could influence individual cognition and brain function [42]. We expected that this combined intervention program could alter the local functional homogeneity of brain regions, as reflected by changed ReHo values in the participants who attended the intervention, and we further speculated that these changes in local functional homogeneity would reflect improved individual cognitive performance.
Methods
In this study, healthy older adults' brain activity and neuropsychological performance were assessed before and after a 6-week combined cognitive-psychological-physical intervention. The effects of combined intervention on the functional plasticity in resting-state interregional connectivity have previously been reported for the participants in Li et al. [24]. This paper focused on the intervention effects on functional changes in the patterns of local spontaneous brain activity.
2.1. Participants.
Forty-five healthy older adults were recruited from two communities near the Institute of Psychology, Chinese Academy of Sciences. After baseline evaluation, one community was randomly allocated to the intervention group (n = 26), and the other community formed the control group (n = 19). Participants were blind to the group allocation and study design. Participants were included in the study according to the same screening criteria used in Li et al. [24]. Of these, 11 participants were excluded from further analysis based on these criteria or for other reasons, and thus data for 34 participants were finally analyzed, with 17 in the intervention group (mean age: 68.59 years) and 17 in the control group (mean age: 71.65 years) [24].
The study protocol was approved by the Ethics Committee of the Institute of Psychology, Chinese Academy of Sciences. Written informed consent was obtained from all participants, and they were paid 200 Yuan for participation. The study was registered in the Chinese Clinical Trial Registry (ChiCTR) (http://www.chictr.org/): ChiCTR-PNRC-13003813.
Outcome Measures.
A battery of neuropsychological tests was used to evaluate the intervention effects on cognitive function, health status, social support, and subjective well-being. The tests of cognitive function included the Montreal Cognitive Assessment-Beijing Version (MoCA-BJ) [43], Paired Associative Learning Test (PALT) [44], Digit Span Forward and Digit Span Backward [45], Trail Making Test (TMT) [46], Stroop Test [47], and Category Fluency Test (CFT) [48], which were used to assess global cognition, episodic memory, working memory, executive function, and language ability, respectively. Health status was measured using the Medical Outcomes Study Short Form-36 (MOS SF-36) [49]. The level of social support was measured using the Social Support Rating Scale (SSRS) [50]. Subjective well-being was measured using the Satisfaction with Life Scale (SWLS) [51] and the Index of Well-Being (IWB) [52]. The test examiners were blind to the group allocation of participants (control or intervention). Figure 1 displays the procedure for the study. All participants were subjected to a battery of neuropsychological tests and MRI scanning individually before and after the intervention. During the intervention process, the intervention group received a 6-week combined cognitive-psychological-physical intervention, including cognitive intervention, Tai Chi exercise, and group counseling, while the control group attended two 120 min lectures related to health and aging. The 18 one-hour cognitive training sessions were administered three times per week and consisted of mnemonic training (MT; nine sessions) and executive function training (EFT; nine sessions). MT was designed to teach older adults elaborate encoding and retrieval strategies, such as interactive imagery, sentence generation, and the method of loci. EFT was designed to train older adults in three components of executive function: inhibition, switching, and updating. The 18 one-hour physical exercise sessions required participants to learn the Yang Style 24-form Tai Chi three times per week. Group counseling sessions aimed at promoting the psychological well-being of older adults through reminiscence and were conducted as six weekly sessions of 90 min each. Please refer to Li et al. [24] for details about the intervention program.
Statistical Analysis of Neuropsychological Data.
The demographic and clinical characteristics of participants in both the intervention and control groups were examined using chi-square, t, or nonparametric (Mann-Whitney) tests. A repeated-measures two-way analysis of variance (ANOVA) with the within-subject factor of intervention (pre, post) and the between-subject factor of group (control, trained) was conducted on the performance for each test to examine the intervention effect. All statistical analyses were conducted using SPSS 19.0 (IBM Corporation, Somers, NY).

Image Data

The image acquisition procedure followed that described previously [53]. The first five images for each subject were discarded to allow for equilibration of the magnetic field and the acclimatization to the scanning environment. The 195 remaining images were first corrected for intravolume acquisition time differences between slices and intervolume geometrical displacement due to head motion. Participants included in this study were restricted to head motion of less than 2.0 mm in any direction and 2.0° of angular motion during the resting-state scan. The functional images were then normalized to the standard space of the Montreal Neurological Institute (MNI) and resampled to a voxel size of 3 × 3 × 3 mm³. Following this, detrending and temporal band-pass filtering (0.01-0.08 Hz) of the fMRI data were carried out to reduce the effects of low-frequency drift and physiological high-frequency noise.
Following Zang et al. [31], the ReHo value in the brain was measured using Kendall's coefficient of concordance (KCC) between the time series of a given voxel and its nearest 26 neighbors. Specifically, we first calculated the KCC for each voxel across the whole brain to derive the ReHo map for each subject. For standardization purposes, each ReHo map was then divided by the mean ReHo value of the entire brain. Finally the ReHo maps were spatially smoothed with a 4 mm full-width at half-maximum (FWHM) Gaussian kernel.
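To make the ReHo computation concrete, here is a minimal sketch of Kendall's coefficient of concordance (W) for one voxel and its 26 neighbors, written in Python with NumPy/SciPy. It illustrates the published formula and is not the authors' actual pipeline (which operated on whole-brain maps followed by the standardization and smoothing described above):

import numpy as np
from scipy.stats import rankdata

def kendalls_w(time_series):
    """Kendall's coefficient of concordance for K time series of length n.

    time_series : array of shape (K, n); for ReHo, K = 27
                  (one voxel plus its 26 nearest neighbors).
    Returns W in [0, 1]; higher values mean more coherent local activity.
    """
    k, n = time_series.shape
    # Rank each time series over time (ties get average ranks here,
    # a simplification of the original rank-based definition).
    ranks = np.apply_along_axis(rankdata, 1, time_series)
    rank_sums = ranks.sum(axis=0)                  # rank sum per time point
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (k ** 2 * (n ** 3 - n))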
For statistical analysis, we first used a one-sample t-test to compare the ReHo maps for the intervention and control groups both before and after the intervention and then performed a whole-brain voxel-wise Group (control, trained) × Intervention (pre, post) ANOVA on the ReHo maps to detect regions showing intervention-related changes. Clusters were considered significant at a combined threshold of an uncorrected voxel-level p < 0.01 and a cluster extent > 486 mm³, corresponding to a corrected p < 0.01 as determined by Monte Carlo simulation (AlphaSim). In further analyses, the regions showing a significant Group × Intervention interaction were defined as regions of interest (ROIs). We extracted the mean ReHo value in each ROI and used paired-sample t-tests (p < 0.05) to examine the effects of the intervention on regional ReHo in each of the ROIs for the two groups.
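The ROI-level follow-up can be sketched in the same spirit; this is a minimal illustration with hypothetical array names, not the authors' pipeline:

import numpy as np
from scipy.stats import ttest_rel

def roi_intervention_effect(reho_pre, reho_post, roi_mask):
    """Paired t-test on mean ROI ReHo before vs. after the intervention.

    reho_pre, reho_post : arrays of shape (n_subjects, x, y, z)
    roi_mask            : boolean array of shape (x, y, z)
    """
    pre = reho_pre[:, roi_mask].mean(axis=1)    # mean ReHo per subject
    post = reho_post[:, roi_mask].mean(axis=1)
    return ttest_rel(post, pre)                 # (t statistic, p value)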
Finally, to examine whether intervention-related changes in brain activity are associated with improvements in cognition, the correlations between intervention-related ReHo changes in the ROIs and changes in the cognition variables were investigated (p < 0.05, Bonferroni corrected). Between-group comparisons were conducted with Fisher's r-to-z transformation to directly compare two correlation coefficients.
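A minimal sketch of this comparison of two independent correlations via Fisher's r-to-z transformation (the standard formula, not the authors' code):

import numpy as np
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed z-test comparing two independent Pearson correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher r-to-z
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the difference
    z = (z1 - z2) / se
    return z, 2.0 * norm.sf(abs(z))                # z statistic, p value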
Demographic and Clinical Characteristics

Table 1 displays the demographic and clinical characteristics of the participants in the intervention and control groups. The two groups did not differ significantly in age, years of education, or gender, or on the MoCA-BJ, Center for Epidemiologic Studies Depression Scale (CES-D) [54], and ADL (Activities of Daily Living) scores [55].
Effects of Combined Cognitive-Psychological-Physical Intervention on Behavioral Performance

ANOVA analyses revealed that, after the intervention, significant improvements in the PALT and vitality (VT, a dimension of the MOS SF-36) and greater improvements in the SSRS were found in the intervention group, whereas no or smaller improvements in these tests were found in the control group. In addition, the results also showed that performance on the TMT and SWLS did not change for the intervention group but decreased for the control group after the intervention (refer to Li et al. [24] for detailed behavioral results).

ReHo Maps and the Intervention-Related Changes

Figure 2 demonstrates the ReHo maps before and after the intervention for both groups. The ReHo maps displayed spatial patterns very similar to the default-mode network.
The whole-brain voxel-wise Group × Intervention ANOVA showed four regions with significant Group × Intervention interactions (AlphaSim corrected): the left superior temporal gyrus (STG), the left and right middle temporal gyri (MTG), and the left posterior lobe of the cerebellum (Figure 3(b)). The intervention-related ReHo changes in the left STG were positively correlated with the intervention-related changes in CFT scores in the intervention group, but not in the control group (Figure 4(a)). A further analysis directly comparing the correlations between the two groups revealed that the two correlation coefficients were significantly different from one another (Fisher's r-to-z = 2.06, p = 0.039). When a more liberal threshold of p < 0.05 without correction was applied, we found that the intervention-related ReHo changes in the right MTG were negatively correlated with the intervention-related increase in the total PALT scores in the intervention group (r = −0.544, p = 0.024). In the control group, no significant correlation was observed between the ReHo changes in the right MTG and gains in the PALT (r = 0.142, p = 0.586; Figure 4(b)). Subsequent analysis revealed a significant difference between these two correlation coefficients (Fisher's r-to-z = −1.99, p = 0.047).
Discussion
We explored the effects of a combined cognitive-psychological-physical intervention on brain functional plasticity in healthy older adults using RS-fMRI. The results revealed improved cognitive performance and alterations in the ReHo of local spontaneous brain activity in the superior and middle temporal gyri and the posterior lobe of the cerebellum for the intervention group. These intervention-related ReHo changes were correlated with individual improvements in cognitive performance. The present study provides evidence that combined cognitive-psychological-physical intervention can induce functional plastic changes in the lateral temporal lobe and cerebellum in healthy older adults.
Based on the whole-brain voxel-wise analysis, the fMRI results revealed that the combined intervention significantly altered the coherence of local spontaneous brain activity in the left superior temporal gyrus (STG) and middle temporal gyrus (MTG) in the intervention group. Although previous studies have primarily emphasized the effects of aging on the frontal cortex and MTL, many studies have implied that the lateral temporal lobe is also vulnerable. Decreases in the density of gray matter in the left superior temporal region have been reported [56], and the bilateral temporal cortices show obvious atrophy [57,58] and decreased metabolism [59] in the elderly. The left STG has previously been thought to play an important role in speech comprehension; however, this region is also involved in speech production [60]. Studies of conduction aphasia, where there is damage in the left STG, have confirmed the role of the STG in speech production by revealing good comprehension but phonemic paraphasias and naming difficulties [61]. A recent study has demonstrated plasticity in this brain region [62]. The researchers found that the cortical thickness of the left STG increased in those who participated in foreign language training, and this structural change was positively correlated with posttest proficiency in language use.
It is well known that spontaneous resting-state brain activity may serve an important role in brain function [27]. Furthermore, previous studies have proved that ReHo measures are associated with individual differences in cognitive performance [35][36][37]. Consequently, it is reasonable to speculate that the present finding of enhanced coherence of local spontaneous brain activity in the left STG for the intervention group suggests that the combined cognitive-psychological-physical intervention can induce brain functional plasticity in the superior temporal cortex and lead to functional improvements in this brain region associated with speech production.
Correlation analysis further supports this proposal by showing a positive correlation between intervention-related changes in coherence in the left STG and changes in the CFT for the intervention group. The CFT is a typical neuropsychological test used to examine speech production, and performance of the CFT declines with normal aging [63,64]. The CFT greatly depends upon the function of the temporal lobe [65], and patients with semantic dementia and temporal lobe atrophy show poor category fluency [66,67]. The results of an fMRI study revealed greater activation in the left STG during the category fluency tasks, further confirming the important role of this region in language production [68]. In the current study, our correlation results are consistent with an important functional role for the left STG in CFT performance and provide evidence that the patterns of local spontaneous resting brain activity in the left STG could reflect individual CFT performance in older adults.
Interestingly, the present results showed a concomitant increase and decrease in the ReHo of spontaneous brain activity in the left STG and left MTG for the intervention group. Although the exact reasons are still unclear, we believe that the present results to some extent echo heterogeneous functional characteristics in subregions of the lateral temporal lobe. The MTG is thought to be involved in mapping between the phonological forms of words and their meanings and serves as a sound-to-meaning interface [60,69,70], and meta-analyses of neuroimaging literature have confirmed the importance of its role in semantic processing [71,72]. The MTG has extensive structural and functional connectivity with frontal, parietal, and temporal regions in the resting brain, and this is thought to play a central role in language comprehension [73]. Therefore, the present finding of decreased coherence of local spontaneous resting activity in left MTG suggests functional plastic changes in the middle temporal cortex and may reflect greater efficiency of information processing in the intervention group for this brain region involved in language processing.
Although the ReHo of the right MTG did not change significantly after the intervention in the intervention group, within a more liberal threshold (p < 0.05) our results showed a significant correlation between coherence changes in the right MTG and the PALT for the intervention group. The PALT is a neuropsychological test used to examine episodic memory using word pairs [44]. Previous task-based fMRI studies have found that activity of the right MTG during encoding is correlated with subsequent memory performance, suggesting that brain activity in regions in charge of language comprehension is predictive of episodic memory with narrative materials [74,75]. In the present study, our results further confirm the important role of the MTG in the processing of language materials and provide evidence that intervention-induced changes in patterns of local spontaneous brain activity in the right MTG may also predict individual episodic memory in older adults.
The combined cognitive-psychological-physical intervention also induced enhanced coherence of local spontaneous activity in the left posterior lobe of the cerebellum (PCL) for the intervention group. The traditional view is that the cerebellum is mainly responsible for motor coordination and motor learning, but in recent years investigators have paid more attention to the higher-order functions of the cerebellum, such as working memory, executive functions, and emotional control [76][77][78]. Damage to the cerebellum, especially in the lateral hemisphere of the posterior cerebellum, will induce cerebellar cognitive affective syndrome (CCAS), which is characterized by impairments in executive, visual-spatial, and linguistic abilities, and by affective disorders [79]. Functional topography studies within the cerebellum have further indicated that anterior portions of the cerebellum mainly support motor function, whereas the posterior regions of the cerebellum are mainly involved in cognitive and emotional processing [80][81][82].
A substantial body of evidence has demonstrated that the cerebellum also shows age-related decreases in structural morphology and function [83]. For instance, a recent study found age-related decreases in resting-state cerebellocortical functional connectivity, and lower connectivity was associated with poorer cognitive performance in older adults [84]. Our intervention program contains cognitive training (mnemonic and executive function training) and group counseling, so the present findings of increased local coherence of spontaneous brain activity in the PCL suggest that intervention-induced functional optimization of this region contributes to higher levels of cognitive processing and emotional control in the intervention group.
Taken together, the present study confirms that the combined cognitive-psychological-physical intervention induces regional brain functional reorganization. However, it should be noted that we cannot ascertain the exact contribution of each specific training component to the brain functional plasticity. Interestingly, although our previous study observed enhanced resting-state functional connectivity between the MTL and PFC [24], the present findings did not show significant intervention-induced changes of ReHo values in these two brain regions. These results indicate that the functional plasticity of the aging brain is a complex process. Further studies are required to elucidate the underlying causes and mechanisms. In addition, we observed both a positive correlation in the left STG and a negative correlation in the right MTG between intervention-related changes in ReHo and cognitive performance in the present study. The relationship linking changes of ReHo values and individual behavioral performance is extremely complicated. A higher ReHo level does not necessarily result in better performance, and vice versa. Evidence can be found in previous correlation studies between ReHo and cognitive function [35,39]. The underlying meaning of increased or reduced ReHo values remains unclear. It seems that both positive and negative correlations support the inference that changes of ReHo may reflect individual behavioral performance.
Several limitations should be noted. Firstly, the control group only attended two 120 min lectures and is therefore not a completely active control group. Nevertheless, as noted in Takeuchi et al. [85], the lack of an active control group is actually a commonly used approach in neuroimaging studies involving training. The positive effects induced by intervention cannot simply be attributed to active stimulation [86]. In the present study, the intervention group showed alterations in patterns of local spontaneous brain activity in the brain regions that are vulnerable to aging, and moreover, the ReHo changes were correlated with the gains of individual cognitive performance. We believe that these beneficial effects at least partially reflect the function of the combined intervention. Secondly, it is unfortunate that the participants could not be randomized to the intervention and control groups. This was necessary because the participants were enrolled from two communities, and if there were participants from the same community in each group, it would be possible for them to communicate the intervention contents and thus confound the intervention effects. Thirdly, we found that combined cognitive-psychological-physical intervention could improve the functional organization of the resting brain, and although it could be supposed that this corresponds to an overall increase in cellular activity and metabolic rate in the relevant brain regions [86], the underlying mechanisms for this functional plasticity are still obscure. Lastly, we did not assess the long-term effects of the combined intervention. Future research is needed to determine whether, and to what extent, these beneficial effects can be maintained over time.
Conclusion
In summary, the present study extended our previous findings by showing that combined intervention could optimize the intrinsic functional brain architecture in the temporal cortex and cerebellum in the normal elderly. Moreover, the changes in ReHo of local spontaneous resting state activity could predict improvements in individual cognitive performance. These results further suggest the effectiveness of the intervention in curbing the loss of brain function in older adults. | 2018-04-03T02:55:40.005Z | 2015-02-24T00:00:00.000 | {
"year": 2015,
"sha1": "aed8b09d943688df0fc70695fad97c7d76c272a1",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/np/2015/713104.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e0983019e2eadf0327be70ce00d588504eeeed9",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
246671659 | pes2o/s2orc | v3-fos-license | Clinical Profile of Patients with Atrial Fibrillation According to EHRA ( Evaluated Heart Valves , Rheumatic or Artificial ) Categorization in A Middle-Income Country
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia in clinical practice. As of 2017, a functional EHRA (Evaluated Heart valves, Rheumatic or Artificial) categorization was proposed to replace the terms valvular and non-valvular AF. In our country, despite the incidence of rheumatic valve heart disease, studies on this new categorization are scarce. Objective: to assess the clinical profile of patients (pts) with AF, using the EHRA categorization as a parameter. Methods: this is a prospective, observational and cross-sectional study with 475 pts with AF from a university institution. Clinical and laboratory evaluations were carried out, as well as the calculation of risk scores for embolism, bleeding and renal function. Statistical analysis was performed by non-parametric tests, in addition to the chi-square test. Results: the pts were divided into 3 groups, according to the EHRA categorization: EHRA 1, with 144 pts with mitral stenosis or a mechanical prosthesis; EHRA 2, with 46 pts with other valvular heart diseases; and EHRA 3, with 285 pts without valvular heart disease. Mean ages were 51.5; 57.6 and 62.9 years, respectively (p<0.0001). The proportions of women were 75%; 52.1% and 40.7% (p<0.0001). The presentation of the AF was permanent in 68.1%; 60.9% and 52.3% of pts (p=0.008), and 86.0%; 47.8% and 53.3% of pts were using oral anticoagulants, respectively (p<0.0001). The means of left ventricular ejection fraction were 0.58; 0.57 and 0.46; of left atrium diameter, 55.9; 52.5 and 48.1 mm; of the ATRIA score, 1.4; 1.3 and 2.1; and of the glomerular filtration rate, 91.7; 91.1 and 75.0 mL/min/1.73m2, respectively (p<0.0001). There was no difference among groups regarding blood pressure and heart rate at study entry and regarding the history of embolism. Conclusions: pts from the EHRA 1 categorization were younger, with a higher proportion of women, permanent AF and use of oral anticoagulants. Systolic dysfunction predominated in those without valve disease, who also had a higher bleeding score and greater impairment of renal function.
I. INTRODUCTION
In 2017, a functional EHRA (Evaluated Heart valves, Rheumatic or Artificial) categorization was proposed due to the heterogeneity of the definition of valvular and non-valvular atrial fibrillation (AF) [1]. Epidemiological data demonstrate that there has been an increased incidence of degenerative etiology in valvular heart disease in high-income countries. On the other hand, rheumatic etiology is still the main cause of valvular heart disease in low- or middle-income countries, with a great impact on patient morbidity and mortality, especially if associated with AF [2]-[4]. In rheumatic heart valve disease, the prevalence of AF can reach 80% [5]. Therefore, vitamin K antagonist oral anticoagulants are indicated to prevent thromboembolic events in patients with moderate to severe mitral stenosis and AF, while direct oral anticoagulants are contraindicated [3]. This scenario reflects the importance of that categorization for the practical use of the type of oral anticoagulation based on scientific evidence.
There are differences in the burden of rheumatic heart disease. In some countries, there is a decrease in new cases, but with a predominance of chronic presentation of this rheumatic valve disease. Due to the difficulty in accessing medical care in some regions and poor adherence to prevention, there is an unfavorable impact on the clinical evolution and surgical outcomes of these patients [6].
In our country, despite the incidence of rheumatic heart valve disease, studies about this new categorization are scarce. Thus, this study aims to evaluate and compare the clinical profile of patients with AF according to the EHRA type 1 and type 2 categorization, and of patients with AF without valvular heart disease.
A. Participants and Study Procedure
This is a prospective, observational and cross-sectional study of AF patients over 18 years of age undergoing clinical follow-up at an academic medical center in a middle-income country. The study was done between May 2018 and March 2020. These patients were divided into three groups as follows:
- Group 1: EHRA Type 1, which refers to AF patients with mitral stenosis (moderate to severe, of rheumatic origin) or with mechanical replacement of the prosthetic valve;
- Group 2: EHRA Type 2, which refers to AF patients with other valvular heart diseases or with biological prosthetic valves without dysfunction implanted more than three months ago;
- Group 3: patients without valve diseases or mechanical or biological prosthetic valves.
The patients underwent clinical evaluation and laboratory tests (electrocardiogram, echocardiogram and blood tests). Risk scores for embolism (for groups 2 and 3) and for bleeding (for all groups) were calculated at study entry from data collected from medical records. Baseline renal function was calculated for all patients. The calculated embolism scores were CHA2DS2-VASc and CHA2DS2-VASc-RAF. The calculated bleeding score was ATRIA. For renal function, the creatinine clearance by the Cockcroft-Gault formula (CG) and the glomerular filtration rate by the Modification of Diet in Renal Disease (MDRD) equation were used.
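For concreteness, here are the standard published formulas for one of these scores and the two renal-function estimates, sketched in Python. This is illustrative only (the CHA2DS2-VASc-RAF and ATRIA scores are omitted) and is not the study's code:

def cha2ds2_vasc(chf, htn, age, diabetes, stroke_tia, vascular, female):
    """CHA2DS2-VASc thromboembolic risk score (standard definition)."""
    score = int(chf) + int(htn) + int(diabetes) + int(vascular) + 2 * int(stroke_tia)
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    return score + int(female)

def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return 0.85 * crcl if female else crcl

def mdrd_gfr(scr_mg_dl, age, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2) by the 4-variable MDRD equation."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr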
B. Ethical Aspects
Before the start of the study, participants were informed about the research and procedures. All patients gave written informed consent. The study protocol complies with ethical guidelines and was approved by the institution's human research committee.
C. Statistical Analysis
Statistical analysis was performed using SPSS Statistical Software, Version 16.0 (SPSS Inc, Chicago, IL, USA). Categorical variables were expressed as frequency (percentage) and compared with the chi-square test or Fisher's exact test, as appropriate. Continuous data are presented as mean ± standard deviation or median and interquartile range, as appropriate. The Kruskal-Wallis test or ANOVA was used for comparisons among the three groups. Comparisons of continuous variables between two groups were performed using the Mann-Whitney test. The normality assumption was verified by the Shapiro-Wilk test. All tests were two-tailed, and a P value < 0.05 was considered statistically significant.
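A minimal sketch of this decision logic with SciPy (illustrative only; variable names are hypothetical and this is not the study's SPSS workflow):

from scipy.stats import shapiro, f_oneway, kruskal, mannwhitneyu

def compare_three_groups(g1, g2, g3, alpha=0.05):
    """Use ANOVA if all three samples pass Shapiro-Wilk, else Kruskal-Wallis."""
    normal = all(shapiro(g)[1] > alpha for g in (g1, g2, g3))
    stat, p = f_oneway(g1, g2, g3) if normal else kruskal(g1, g2, g3)
    return ("ANOVA" if normal else "Kruskal-Wallis"), stat, p

# Pairwise comparison of two groups, e.g. group 1 vs. group 3:
# u_stat, p_val = mannwhitneyu(g1, g3, alternative="two-sided")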
A. Patient Baseline Characteristics
A total of 475 AF patients were enrolled. The mean age was 59.0 ± 14.8 years, ranging from 18 to 92 years. 52.2% of the patients were female. The median left ventricular ejection fraction was 0.56 (interquartile range 0.37-0.65), ranging from 0.12 to 0.85.
B. Comparison among Groups
The comparison among the three groups is shown in Table I. Fifteen patients had mechanical prosthetic valves in group 1: 13 in the mitral position and 2 in the aortic position. The other patients in group 1 had moderate to severe mitral stenosis. The main etiologies of heart disease in patients in group 3 were hypertensive (36.8%), dilated cardiomyopathy of different causes, mainly ischemic (53%), and other etiologies without left ventricular systolic dysfunction, such as coronary artery disease, congenital heart disease, and brady-tachycardia syndrome.
The Canadian Cardiovascular Society (CCS) AF Severity Scale, which describes the severity of symptoms related to AF, was similar among the three groups (2.6; 2.6 and 2.8 for groups 1, 2 and 3, respectively, p=0.97).
Regarding the use of oral anticoagulants, 298 patients were using vitamin K antagonists, nine were using rivaroxaban, eight were using apixaban and two were using edoxaban. No patient was using dabigatran at the time of study entry. The oral anticoagulant used by patients in group 1 was warfarin.
IV. DISCUSSION
The major findings of this study are that patients in group 1 (EHRA type 1) were younger and had a higher proportion of permanent AF and a higher frequency of warfarin use, in agreement with the literature. On the other hand, older patients had a low proportion of oral anticoagulant use despite higher scores for systemic embolism.
The main etiology of mitral stenosis is rheumatic fever, especially in developing countries. There is a predominance of this valvular heart disease in females [3], [7]-[9]. Thus, the greater proportion of women in group 1 in this study is in line with the literature, as is the younger age. Rheumatic valvular heart disease affects more young people, unlike the degenerative etiology, which affects older patients [7]-[10].
AF is a very common arrhythmia, with an increase in prevalence with age. Its risk is 4.98 in those aged between 60 and 69 years, and 9.39 in those aged 80 to 89 years, compared to the risk in those aged 50 to 59 years [11]. In patients with rheumatic heart disease, its prevalence can reach 80%, with an average of almost 33% [5]. A recent study of patients with AF rhythm and bioprosthetic valve demonstrated that AF was paroxysmal, persistent, or permanent in 36.9%, 34.6%, and 28.5% of patients, respectively [12]. And in large registries of heart failure, the prevalence of AF ranges from 23.0 to 33.7% in patients with preserved ejection fraction, and it ranges from 32 to 39% in those with reduced ejection fraction [13]. Structural and inflammatory changes in the left atrium and the increase in its pressure due to mitral valve involvement predispose to the rhythm of AF, which explains the higher proportion of permanent AF in group 1. This is corroborated by the greater diameter of the left atrium in this group, with a statistically significant difference.
The association between AF and systemic embolism, including stroke, is well established. Therefore, the indication of oral anticoagulation is imperative. Direct anticoagulants are indicated for the prevention of systemic embolism in patients with non-valvular AF with a CHA2DS2-VASc score ≥ 2. Their use is also acceptable in patients with AF more than 3 months after bioprosthetic valve surgery. They can be used in patients with AF and valvular heart disease, except in moderate to severe mitral stenosis and in patients with mechanical replacement of the prosthetic valve. Thus, for this group of patients (EHRA Type 1), vitamin K antagonists should be used to prevent thromboembolism [3], [14], [15]. In the present study, the systemic embolism rate reached up to 25.3%, with no difference among groups. Literature data indicate that the most important morbidity in patients with AF is stroke, which is associated with this arrhythmia up to 25% of the time, but AF may be responsible for 43.9% of disabling or fatal strokes [16].
There are few studies with the functional EHRA (Evaluated Heart valves, Rheumatic or Artificial) categorization. A nationwide cohort study with a large sample of patients with valvular heart disease and AF, but with 90% of patients with EHRA type 2, demonstrated rates of 14.6% and 18.6% of thromboembolism in patients with EHRA type 1 and EHRA type 2, respectively [17]. A study from the same group showed that the rate of previous thromboembolism ranged from 10.0% to 14.6% between patients in the two groups [18]. The higher rate of thromboembolism in our study can be explained by the higher proportion of patients with moderate to severe mitral stenosis in group 1, which was 89.5%, compared to the rate of 29.2% in the study mentioned above.
The underutilization of oral anticoagulation is still a current fact. In low- and middle-income countries, the oral anticoagulation rate ranges from 11% to 85%, and in Latin America the average rate is 43.3% [19]. With regard to the use of warfarin, this occurs mainly in elderly people aged at least 75 years with higher CHA2DS2-VASc scores, even in developed countries, with a prescription rate of 50.8%; anemia is one of the associated independent factors [20]. This rate of warfarin use is similar to that of group 3 in our study, which consisted of older patients without mitral valve disease. In addition, this group also had a higher ATRIA score, which may have influenced the management of oral anticoagulant use. Another factor that may have interfered is renal function, which also showed greater impairment in group 3. Despite the increased risk of stroke in patients with AF and chronic kidney disease, there is an increased risk of bleeding with the use of oral anticoagulants. This risk is greater in patients with end-stage renal disease, in whom the use of warfarin showed a neutral effect on ischemic stroke and increased bleeding and hemorrhagic stroke [21].
Due to the importance of renal function in patients with AF, a cross-sectional study, validated by another, demonstrated that AF type (persistent and permanent AF) and renal dysfunction (defined as a glomerular filtration rate <56 mL/min/1.73 m2) were independent predictors of thrombus in the left atrium [22], [23]. Thus, the CHA2DS2-VASc-RAF score showed better performance, which motivated us to calculate this score for our population. However, in the literature there is no comparison between patients with EHRA type 2 and those with non-valvular AF regarding this score. The high anticoagulation rate in group 1 of this study is also similar to the recently published study, in which the rates were 62.5% for patients with moderate to severe mitral stenosis and 100% for those with mechanical replacement of the prosthetic valve [18]. On the other hand, the anticoagulation rate in group 2 was lower than in the aforementioned study, which showed rates between 50.8% and 67.1% for patients in the EHRA type 2 categorization.
Direct oral anticoagulant therapy for AF-related stroke prevention was underutilized in this study. A recently published national registry of 1423 patients with AF and at least one thromboembolic risk factor in an upper-middle-income country demonstrated that 34.6% were using direct oral anticoagulants [24]. Our study was carried out with patients from a university hospital, which provides care to those from the national public health system, excluding patients from the private health system. There is no direct oral anticoagulant supply. Therefore, these factors explain the low percentage (6%) of use of these direct anticoagulants relative to the total number of patients under oral anticoagulation.
Regarding the proportion of patients with pacemakers, the results of this study are in accordance with the main indications for pacemaker implantation, since the highest proportion occurred in group 3. These indications are sinus node disease and advanced atrioventricular block, conduction disorders that occur in older patients and patients with cardiomyopathy [25], [26]. In the study that compared EHRA type 1 and EHRA type 2 patients, the proportions of patients with pacemakers were up to 12.1% and 11.3%, respectively.
V. LIMITATIONS
The main limitations of this study are the sample size and patient selection bias, since the study was carried out in a single center. The quality of oral anticoagulation, using the international normalized ratio (INR) to verify the time in therapeutic range, was not measured because this is a cross-sectional study.
VI. CONCLUSIONS
Patients in the EHRA 1 categorization were younger, with a higher proportion of women, predominant presentation of permanent AF and a higher frequency of oral anticoagulant use.
Systolic dysfunction was more present in those patients without valvular heart disease (group 3), who also had a higher bleeding score and greater impairment of renal function. | 2022-02-09T16:17:33.469Z | 2022-01-24T00:00:00.000 | {
"year": 2022,
"sha1": "cb75fd79503949358000baf0951c355a77ec9715",
"oa_license": "CCBYNC",
"oa_url": "https://www.ej-clinicmed.org/index.php/clinicmed/article/download/161/106",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6c5e78854d081f8e1f2e351f9a238f4ff0542554",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
119254219 | pes2o/s2orc | v3-fos-license | Black versus Dark: Rapid Growth of Supermassive Black Holes in Dark Matter Halos at z ~ 6
We report on the relation between the mass of supermassive black holes (SMBHs; M_BH) and that of hosting dark matter halos (M_h) for 49 z ~ 6 quasi-stellar objects (QSOs) with [CII]158um velocity-width measurements. Here, we estimate M_h assuming that the rotation velocity from FWHM_CII is equal to the circular velocity of the halo; we have tested this procedure using z ~ 3 QSOs that also have clustering-based M_h estimates. We find that a vast majority of the z ~ 6 SMBHs are more massive than expected from the local M_BH - M_h relation, with one-third of the sample offset by factors of >~10^2. The median mass ratio of the sample, M_BH/M_h = 6 x 10^{-4}, means that 0.4% of the baryons in halos are locked up in SMBHs. The mass growth rates of our SMBHs amount to ~10% of the SFRs, or ~1% of the mean baryon accretion rates, of the hosting galaxies. A large fraction of the hosting galaxies are consistent with average galaxies in terms of SFR and perhaps of stellar mass and size. Our study indicates that the growth of SMBHs (M_BH ~ 10^{8-10} Msun) in luminous z ~ 6 QSOs greatly precedes that of hosting halos owing to efficient gas accretion even under normal star formation activities, although we cannot rule out the possibility that undetected SMBHs have local M_BH/M_h ratios. This preceding growth is in contrast to the much milder evolution of the stellar-to-halo mass ratio.
INTRODUCTION
Observations have identified more than 200 supermassive black holes (SMBHs) shining as QSOs in the early universe before the end of cosmic reionization, or z ≳ 6, with the most distant one being located at z = 7.54 and the most massive ones having masses of order ∼10^10 M ⊙. How these SMBHs grow so massive in such early epochs remains a topic of debate. To resolve this, it is key to reveal what galaxies host these SMBHs, because SMBHs and galaxies are thought to co-evolve by affecting each other, as is inferred from various correlations between them seen locally (e.g., Kormendy & Ho (2013) for a review).
At high redshifts like z ∼ 6, the parameters of hosting galaxies that are often examined are central velocity dispersion (σ) and dynamical mass (M dyn ), with the latter being a proxy of stellar mass (M ⋆ ). The relations between these parameters and black hole mass (M BH ) are then compared with the corresponding local relations for ellipticals and bulges. It has been found that the M BH -σ relation at z ∼ 6 is not significantly different from the local one (e.g., Willott et al. (2017)). On the other hand, z ∼ 6 SMBHs appear to be overmassive compared with local counterparts with the same bulge mass (e.g., Decarli et al. (2018)), although faint QSOs are on the local relation (Izumi et al. (2018)). Note that these comparisons are not so straightforward because the stellar components of QSOs may not be bulge-like and may also be greatly contaminated by cold gas (e.g., Venemans et al. (2017), Feruglio et al. (2018)).
The relation between M_BH and the mass of hosting dark halos (M_h; Ferrarese (2002)) provides different insights into co-evolution, by directly constraining the SMBH growth efficiency in halos. For example, let us assume two cases: (1) that stellar components and SMBHs grow at similarly high paces, or (2) that they grow at similarly low paces. Both cases give similar M_BH-M_⋆ relations, but the former predicts a higher M_BH-M_h relation. Cold gas in a halo is used for both star formation and SMBH growth, with shares and consumption rates being controlled by various physical processes. The M_BH-M_h relation at high redshifts may lead to the disentangling of some of these processes.
In this Letter, we derive the M_BH-M_h relation for z ∼ 6 QSOs and compare it with the local relation. We also examine the efficiency of SMBH growth by comparing the growth rate with the star formation rate (SFR) of hosting galaxies and the baryon accretion rate (BAR) of hosting halos. We estimate M_h from [C II]158µm line widths, assuming that lines are broadened by disk rotation and that the rotation velocity is equal to the circular velocity of hosting halos. We show that this procedure appears to be valid, using lower-z QSOs.
In Section 2, we calculate M_h for a z ∼ 6 QSO sample compiled from the literature. Results are presented and discussed in Section 3. Concluding remarks are given in Section 4. We adopt a flat cosmology with (Ω_M, Ω_b, Ω_Λ, H_0) = (0.3, 0.05, 0.7, 70 km s^−1 Mpc^−1) and the AB magnitude system.
(Footnote 2) The average value for the objects with a_min/a_maj data is 52°.
This procedure to derive M_h from FWHM_CII contains several assumptions that cannot be completely verified by current data. One is that [C II]-emitting regions are rotating disks. A velocity gradient has been found for several QSOs (e.g., Wang et al. (2013), Willott et al. (2013)). With high-resolution ALMA data, Shao et al. (2017) have derived a rotation curve of the z = 6.13 QSO ULAS J1319+0950 that is flat at ≳1.5 kpc radii. This object is included in our sample, and we find that the calculated V_rot agrees with the flat rotation velocity. On the other hand, Venemans et al. (2016) have ruled out a flat rotation for QSO J0305−3150. In any case, the number of QSOs with high-quality [C II] data is still very limited. We note that if we assume instead that [C II] line widths are solely due to random motion, with a velocity dispersion σ = FWHM_CII/2.35 and V_circ = √2 σ, we obtain lower V_circ and hence lower M_h because √2/2.35 < 0.75. As found in Section 3, adopting lower M_h values enlarges the offset of our QSOs from the local M_BH-M_h relation.
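To see the size of this effect concretely, the following sketch evaluates both width-to-velocity conversions for an illustrative line width; the 0.75 factor, the √2/2.35 comparison, and the 52° mean inclination are taken from the text above, while the example FWHM value, the treatment of inclination in the disk case only, and the M_h ∝ V_circ^3 scaling used for the final ratio are assumptions made for illustration.

```python
import math

def v_rot_kms(fwhm_cii_kms, incl_deg=52.0):
    """Disk-rotation case: V_rot = 0.75 * FWHM_CII / sin(i) (0.75 factor from the text)."""
    return 0.75 * fwhm_cii_kms / math.sin(math.radians(incl_deg))

def v_circ_random_kms(fwhm_cii_kms):
    """Random-motion case: sigma = FWHM_CII / 2.35 and V_circ = sqrt(2) * sigma."""
    return math.sqrt(2.0) * fwhm_cii_kms / 2.35

fwhm = 350.0  # km/s; an illustrative value, not a measurement from the sample
v_disk = v_rot_kms(fwhm)
v_rand = v_circ_random_kms(fwhm)
print(f"disk case:   V_circ = {v_disk:5.0f} km/s")
print(f"random case: V_circ = {v_rand:5.0f} km/s")
# Assuming the standard M_h ~ V_circ^3 scaling at fixed redshift,
# the random-motion interpretation lowers M_h by roughly:
print(f"M_h(random) / M_h(disk) ~ {(v_rand / v_disk) ** 3:.2f}")
```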
Another key assumption that cannot be tested is V_rot = V_circ. While local spiral galaxies have V_rot/V_circ ≃ 1.2-1.4, it is not clear whether high-z QSO host galaxies also have similarly high ratios; if they have such high ratios, our procedure will be overestimating M_h by a factor of 1.2^3-1.4^3 ≃ 2-3. On the other hand, Chen & Gnedin (2018) have shown V_rot/V_circ > 0.6 by imposing that the duty cycle, defined as the ratio of the number density of z ∼ 6 QSOs to that of hosting dark halos, has to be less than unity.
We cannot thoroughly verify the assumptions one by one, so we indirectly test our procedure as a whole by comparing M_h derived from our procedure with those based on clustering analysis at high redshifts. We do so at z < 6, as there is no clustering study at z ∼ 6. The best sample for this test is Trainor & Steidel (2012)'s z = 2.7 sample, for which both a clustering-based M_h estimate and FWHM data are available. Trainor & Steidel (2012) have obtained a median halo mass of 15 QSOs at z = 2.7 of M_h = 10^(12.3±0.5) M_⊙ from cross-correlation with galaxies around them. Among them, 12 have CO(3→2) velocity-width measurements by Hill et al. (2019). We apply our procedure to nine of the 12 objects (after excluding three with a complex line profile), finding M_h = 10^12.14-10^13.17 M_⊙ with a median of 10^12.71 M_⊙. This median value is consistent with that from the clustering analysis within the 1σ error in the latter. See Table 1 for a summary of the comparison.
As an additional but less stringent test, we compare M_h of QSOs at z ∼ 4.5 with clustering results at similar redshifts. Here, z ∼ 4.5 is the lowest redshift at which the [C II] line is accessible from the ground, and roughly corresponds to the maximum redshift where clustering data are available. We use nine QSOs with FWHM_CII data (Table 1). We regard this rough agreement as modest support for our procedure, because the M_h range of the z ∼ 4.5 QSOs is broad and because FWHM-based and clustering-based masses are compared for different samples.
These comparisons indicate that this procedure can be used as a rough estimator of M_h at least in the statistical sense, although the evaluation of its uncertainty is limited by that in Trainor & Steidel (2012)'s mass estimate. Our procedure gives a 0.4 dex higher median mass than that of Trainor & Steidel (2012). However, because this difference is within the 1σ error in their estimate, 0.5 dex, we do not correct our procedure for this possible systematic overestimation. The comparison also indicates that the underestimation by this procedure, if any, appears modest, < 0.5 dex. Our main result that the SMBHs in z ∼ 6 QSOs have higher M_BH/M_h than local values is robust, because this result holds as long as the systematic underestimation of M_h is ≲ 0.5 dex.
The M_h values of our z ∼ 6 QSOs thus obtained are less than 1×10^13 M_⊙ except for two objects. The median of the entire sample is 1.2 × 10^12 M_⊙, with a central 68% range of (0.6-3.4) × 10^12 M_⊙. These relatively low masses are consistent with the halo mass distribution of z ∼ 6 QSOs constrained from the statistics of companion galaxies by Willott et al. (2005). Figure 2 shows M_BH against V_circ for the 49 z ∼ 6 QSOs, together with local galaxies taken from Kormendy & Ho (2013), for which we convert central velocity dispersions into V_circ using the formula given in Pizzella et al. (2005). The very weak correlation seen in the z ∼ 6 sample is partly due to large intrinsic errors in both M_BH and V_circ. If the observed values are taken at face value, about two-thirds of the z ∼ 6 QSOs are consistent with the distribution of local galaxies, while the remaining one-third have higher M_BH. In the M_BH-M_h plane, most of the z ∼ 6 QSOs deviate from the local relation (Ferrarese (2002)) toward higher M_BH, or lower M_h. This is because M_h at a fixed V_circ decreases with redshift as (1 + z)^−1. Most of the z ∼ 6 QSOs have a ≳10 times more massive SMBH than local counterparts with the same M_h, with one-third by a factor of ≳10^2. Thus, at z ∼ 6 the growth of SMBHs precedes that of hosting halos at least for most luminous QSOs. This is in contrast to a roughly redshift-independent M_⋆-M_h relation of average galaxies (e.g., Behroozi et al. (2018)).
Mass vs. mass
The overmassive trend observed here may be due to selection effects because the sample is biased toward luminous QSOs (e.g., Schulze and Wisotzki (2014)). We cannot rule out the possibility that SMBHs at z ∼ 6 are in fact distributed around the local relation with a large scatter and that we are just observing its upper envelope truncated at M_h ∼ 10^13 M_⊙, beyond which objects are too rare to find because of an exponentially declining halo mass function (for the halo mass function, see, e.g., Murray et al. (2013)). The results obtained in this study apply only to luminous QSOs detectable with current surveys.
The median M_BH/M_h ratio of the entire sample is 6.3×10^−4, with a central 68% range of 1.5×10^−4-1.8×10^−3. Even when limited to the objects with relatively reliable M_BH and M_h data shown by red filled circles, we find a large scatter in M_BH at a fixed M_h, suggesting a wide spread in SMBH growth efficiency. We also calculate f_b ≡ M_BH/((Ω_b/Ω_M) M_h), the fraction of baryons in the hosting dark halo that are locked up in the SMBH, where (Ω_b/Ω_M) M_h is the total mass of baryons in a halo. Our sample has a median f_b of 0.4%, with some well above 1%.
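As a quick arithmetic check of the quoted numbers, the snippet below recomputes f_b from the sample's median mass ratio and the cosmology adopted in Section 1 (Ω_b/Ω_M = 0.05/0.3); no new data are involved.

```python
omega_b, omega_m = 0.05, 0.30   # cosmology adopted in Section 1
mbh_over_mh = 6.3e-4            # median M_BH / M_h of the sample

# f_b = M_BH / ((Omega_b / Omega_M) * M_h)
f_b = mbh_over_mh / (omega_b / omega_m)
print(f"f_b = {f_b:.4f} (~{100 * f_b:.1f}% of the halo baryons)")  # -> ~0.4%, as quoted
```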
In Figure 3(b), QSOs with brighter M_1450 magnitudes tend to have higher M_BH/M_h ratios. This trend appears to be reasonable because at a given M_h, those with a higher M_BH can be brighter because the Eddington luminosity is proportional to M_BH. Note that some of the faint objects also have very high ratios, far above the local values.
We compare M_dyn with M_h for 41 objects with size data in Figure 4, finding a nearly linear correlation with a median ratio of M_dyn/M_h = 0.07 (central 68%: 0.04-0.10). Although our objects are distributed nearly a factor of two above the relation of z = 6 average galaxies (Behroozi et al. 2018), the difference is probably insignificant when various uncertainties in these quantities are considered. For example, M_dyn may be significantly contaminated by molecular gas mass as reported for some QSOs (e.g., Venemans et al. (2017), Feruglio et al. (2018)).
We also compare the [C II] emission radii of our objects with the virial radii (r_vir) of the hosting halos (r_vir = GM_h/V_circ^2, where G is the gravitational constant), finding a median ratio of 0.04 (central 68%: 0.02-0.07). This result appears to be consistent with rest-ultraviolet (UV) effective radius-to-r_vir ratios, typically ∼0.03, obtained for z ∼ 6 galaxies (Kawamata et al. (2018)), suggesting that galaxies hosting z ∼ 6 QSOs do not have extreme sizes. Figure 5 shows M_BH/M_h as a function of z for our sample and several supplementary QSO samples at lower redshifts (whose UV magnitudes are distributed in the range −23.0 > M_1450 > −29.5). This figure indicates that luminous QSOs at z > 2 tend to have overmassive SMBHs irrespective of redshift. We also see a rough agreement of M_BH/M_h between the clustering-based and FWHM-based results. Note that the lower-z QSOs plotted here are unlikely to be descendants of the z ∼ 6 QSOs because QSOs' lifetimes, typically ∼10^6-10^8 yr (e.g., Martini (2004)), are much shorter than the time intervals between z ∼ 6 and these lower redshifts.
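Using the virial-radius definition just given, a short script reproduces the scale of these numbers; the median halo mass is taken from the text, while the circular velocity is an illustrative value assumed here, since no single value is quoted for the sample.

```python
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30      # solar mass [kg]
KPC = 3.086e19       # kiloparsec [m]

def r_vir_kpc(m_h_msun, v_circ_kms):
    """Virial radius from the text's definition r_vir = G * M_h / V_circ^2."""
    return G * m_h_msun * MSUN / (v_circ_kms * 1e3) ** 2 / KPC

m_h = 1.2e12      # median M_h of the sample [Msun], from the text
v_circ = 250.0    # km/s; an assumed, illustrative circular velocity
r = r_vir_kpc(m_h, v_circ)
print(f"r_vir ~ {r:.0f} kpc; 0.04 * r_vir ~ {0.04 * r:.1f} kpc for the [C II] radius")
```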
Growth rate vs. growth rate
We then compare the mass growth rate of SMBHs with the SFR and the mean BAR of hosting halos (⟨BAR⟩); we use ⟨BAR⟩ because halos at a fixed M_h can take a wide range of BAR values (e.g., Fakhouri et al. (2010)) and we cannot tell what value each of our objects actually has. For this comparison, we only use 18 objects with broad line-based M_BH data and infrared (IR) luminosity data. SMBH mass growth rates (black hole accretion rates: BHARs) are calculated from L_1450 as BHAR = ((1 − ε)/ε) L_bol/c^2, where ε = 0.1 (fixed) is the mass-energy conversion efficiency, and L_bol is the bolometric luminosity estimated using the formula L_bol/erg s^−1 = 10^4.553 (L_1450/erg s^−1)^0.911 (Venemans et al. (2016)). SFRs are obtained from IR luminosities using Kennicutt & Evans (2012)'s conversion formula: SFR/M_⊙ yr^−1 = 1.49 × 10^−10 L_IR/L_⊙. Mean BARs, ⟨BAR⟩ = (Ω_b/Ω_M) ⟨dM_h/dt⟩, are calculated using the formula given in Fakhouri et al. (2010), who obtained dM_h/dt at a given M_h and a given z from the mean growth of M_h over a small time step, calculated from main branches of merger trees constructed from the Millennium and Millennium II N-body simulations. (Figure caption: Lines with errors indicate the relations for average galaxies at z = 6 (green) and z = 0.1 (black) given in Behroozi et al. (2018); the z = 6 relation at M_h > 2 × 10^12 M_⊙ has not been constrained.) Figure 6(a) plots BHAR against ⟨BAR⟩. With a large scatter, our QSOs have high BHAR/⟨BAR⟩ ratios with a median of 0.6%. Yang et al. (2018) present time-averaged BHARs as a function of M_h over 0.5 < z < 4 using the X-ray luminosity function down to L_X = 10^43 erg s^−1 combined with the stellar mass function and the M_⋆-M_h relation. Their study covers 44 ≲ log L_bol [erg s^−1] ≲ 48.5, including 2 dex fainter objects than our sample, which is in the range 46.0 < log L_bol [erg s^−1] < 48.0. In their BHAR calculation, all galaxies at a given M_⋆ are considered. Their results give much lower BHAR/⟨BAR⟩ ∼ 2×10^−5-1×10^−4 for M_h = 10^12-10^13 M_⊙, roughly independent of redshift. If we assume that z ∼ 6 counterparts to their galaxies also have similarly low time-averaged BHAR/⟨BAR⟩ values, then it is implied that the SMBHs of our QSOs are growing ∼10^2 times more efficiently than those of average galaxies, maybe being in one of many short growth phases as suggested by Novak et al. (2011). In Figure 6(b), BHAR correlates with SFR relatively well, with a typical ratio of BHAR/SFR ∼ 10%, although the correlation may be artificial due to selection effects. This ratio is close to those from the average relation of bright QSOs at 2 < z < 7 by Wang et al. (2011) (dotted line), but higher than the M_BH/M_⋆ of local galaxies. Hence, such high ratios should last only for a short period of cosmic time. Figure 6(c) is a plot of SFR versus ⟨BAR⟩, showing that our QSOs are distributed around the average relation of z ∼ 6 galaxies (e.g., Behroozi et al. (2013), Harikane et al. (2018)), or SFR ≈ 0.1⟨BAR⟩, but with a very large scatter. About half of the objects are consistent with average galaxies. Objects far above the average relation may be starbursts due, e.g., to galaxy merging (when BAR also increases temporarily); the BHAR of these objects is as high as ∼0.1⟨BAR⟩.
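The BHAR and SFR conversions quoted in this paragraph are simple enough to script directly; in the sketch below, the two formulas and ε = 0.1 come from the text, while the input luminosities are illustrative values, not entries from the paper's sample.

```python
C_CM_S = 2.998e10        # speed of light [cm/s]
MSUN_G = 1.989e33        # solar mass [g]
SEC_PER_YR = 3.156e7
LSUN_ERG_S = 3.828e33    # solar luminosity [erg/s]

def l_bol(l_1450):
    """Bolometric correction (Venemans et al. 2016): L_bol = 10^4.553 * L_1450^0.911 [erg/s]."""
    return 10 ** 4.553 * l_1450 ** 0.911

def bhar_msun_yr(l_1450, eps=0.1):
    """BHAR = ((1 - eps) / eps) * L_bol / c^2, converted from g/s to Msun/yr."""
    g_per_s = (1.0 - eps) / eps * l_bol(l_1450) / C_CM_S ** 2
    return g_per_s * SEC_PER_YR / MSUN_G

def sfr_msun_yr(l_ir):
    """Kennicutt & Evans (2012): SFR = 1.49e-10 * (L_IR / Lsun) [Msun/yr]."""
    return 1.49e-10 * l_ir / LSUN_ERG_S

l1450, lir = 1e47, 1e46   # erg/s; illustrative luminosities only
print(f"BHAR ~ {bhar_msun_yr(l1450):.0f} Msun/yr")
print(f"SFR  ~ {sfr_msun_yr(lir):.0f} Msun/yr")
print(f"BHAR/SFR ~ {bhar_msun_yr(l1450) / sfr_msun_yr(lir):.0%}")  # ~10% for these inputs
```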
Finally, we compare the specific growth rates of SMBHs, dark halos, and stellar components. The 18 SMBHs grow at ∼0.1-1 times the Eddington-limit accretion rate, with BHAR/M_BH being comparable to or higher than the specific halo growth rate, ⟨BAR⟩/M_b; the SMBHs are growing faster than the hosting halos on average. We also find BHAR/M_BH to be comparable to the specific SFR (= SFR/(0.1 M_b)) but with a large scatter. This means that for z ∼ 6 QSOs, SMBHs and stellar components grow at a similar pace on average, confirming the result obtained by Feruglio et al. (2018) using M_dyn.
CONCLUDING REMARKS
We have estimated M_h for 49 z ∼ 6 QSOs from FWHM_CII. This procedure appears to be valid as a rough estimator.
We have found that the SMBHs of luminous z ∼ 6 QSOs are greatly overmassive with respect to the local M_BH-M_h relation. This is contrasted with a much milder evolution of the M_⋆-M_h relation of average galaxies over z ≲ 6. We have also found that our SMBHs are growing at high paces, amounting to ∼10^−1 × SFR, or ∼10^−2 × ⟨BAR⟩, and that the SFR of hosting galaxies is widely scattered around the SFR-⟨BAR⟩ relation of average galaxies. A large fraction of the hosting galaxies appear to be consistent with average galaxies in terms of SFR, stellar mass, and size, although this result is relatively sensitive to the accuracy of M_h estimates.
Our study indicates that at z ∼ 6 the growth of SMBHs in luminous QSOs greatly precedes that of hosting halos owing to efficient mass accretion under a wide range of star formation activities including normal star formation, although the existence of faint, undetected SMBHs consistent with the local M_BH-M_h relation cannot be ruled out. These high mass growth paces can last for only a short period, in order to be consistent with the relatively low M_BH/M_h and M_BH/M_⋆ values of local galaxies.
The trend that SMBHs at z ∼ 6 are overmassive vanishes if we are underestimating M_h by a factor of ∼10. Although there is currently no hint of such underestimation, future tests of the procedure using high-S/N [C II] data and clustering analysis will be useful. Simulation studies of the internal structure of high-z galaxies may also be helpful.
SMBH evolution has been implemented in many state-of-the-art galaxy formation models, although detailed comparison with our results is beyond the scope of this Letter. An increasing trend of M_BH/M_h with redshift is seen in the semi-analytical model by Shirakata et al. (2019) (H. Shirakata, private communication). Some hydrodynamical simulations show that M_h ∼ 10^12 M_⊙ halos can have an SMBH as massive as ∼10^9 M_⊙ (e.g., Costa et al. (2014), Tenneti et al. (2019)), but based on only a few examples. Our results can be used to calibrate the efficiency of SMBH growth in the early cosmic epoch. | 2019-02-25T04:16:21.000Z | 2019-02-11T00:00:00.000 | {
"year": 2019,
"sha1": "f7684dfd97ac66e8bfac8c5659daa16462dad08c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1902.04165",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f7684dfd97ac66e8bfac8c5659daa16462dad08c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
234326963 | pes2o/s2orc | v3-fos-license | Research on Equipment Procurement Contract Performance Evaluation
Equipment procurement contract performance evaluation is an important means of effectively solving problems such as "inefficient progress," "low targets," and "increased costs" in equipment procurement. This paper defined the concept and connotation of equipment procurement contract performance; proposed a process for its evaluation, including influencing-factor analysis, establishment of evaluation indicators, construction of an evaluation model, and calculation of results; constructed a four-dimension evaluation indicator system consisting of quality, schedule, funding, and service; built an evaluation model based on a BP neural network; and carried out a case analysis. The research conclusions provide a reference for evaluation practice of equipment procurement contract performance.
Introduction
In order to fully understand the status of equipment procurement contracts, accurately discover the problems and weaknesses in their performance, and improve that performance, it is urgent to carry out evaluation research on equipment procurement contract performance. This paper puts forward the general idea of equipment procurement contract performance evaluation, establishes an evaluation indicator system, builds an evaluation model based on a BP neural network, and conducts a case analysis, in order to provide a reference for equipment procurement contract evaluation.
The definition of equipment procurement contract performance evaluation
Equipment procurement contract performance refers to the process by which the equipment contractor abides by the equipment procurement contract, fulfills the contract obligations, and provides high-quality weapons and equipment. The fulfillment of the equipment procurement contract is an important part of equipment procurement contract management. It is crucial for the military and contractors to execute the contract and fulfill contract rights and obligations in accordance with the law, and it is significant for improving the quality and efficiency of equipment procurement.
Equipment procurement contract performance evaluation refers to the use of both qualitative and quantitative methods to make objective and scientific value judgments on contract performance, based on the terms and objectives of the equipment procurement contract and on evaluation indicators and standards, providing a quantitative reference basis for decision-making. Weapons and equipment are characterized by complex technology, complex objectives, complex systems, a prolonged life cycle, and huge investment, which create great risks in the performance of equipment procurement contracts. Therefore, it is urgent to carry out equipment procurement contract performance evaluation, supervise contract performance from multiple dimensions, and improve the quality and efficiency of equipment procurement contract management.
The process of equipment procurement contract performance evaluation
The process of equipment procurement contract performance evaluation includes: analyzing influencing factors, establishing an evaluation indicator system, constructing an evaluation model, and calculating evaluation results, as shown in Fig.1.
Analysis of the factors influencing equipment procurement contract performance
From multiple dimensions, multiple perspectives, and multiple levels, this paper analyzes the factors affecting the performance of equipment procurement contracts, including personnel, quality, progress, risks, and costs, providing a reference for building an evaluation indicator system.
Evaluation indicator system of equipment procurement contract performance
This paper constructed an indicator system for equipment procurement contract performance evaluation in terms of quality, progress, funding, and services.
Calculate the result of equipment procurement contract performance
According to the equipment procurement contract performance evaluation model, the equipment procurement contract performance result is estimated and the problems or weaknesses are identified.
Equipment procurement contract performance evaluation indicator system
Evaluation indicators are important parameters for understanding the performance of equipment procurement contracts and help to strengthen the management of equipment procurement contracts. Geng Weibo et al. [1] (2020), Cai Wanqu et al. [2] (2018), and Song Cuiwei et al. [3] (2014) proposed evaluation indicator systems for equipment procurement contract performance and conducted case analyses. Ji Lichao et al. [4] (2019) and Li Zhengying et al. [5] (2019) analyzed the problems and risks in the performance of equipment procurement contracts and put forward countermeasures and suggestions. The equipment procurement contract performance evaluation indicator system constructed in this paper includes 4 primary indicators (quality, progress, funding, and service) and 8 secondary indicators, as shown in Tab.1.
Serial number | Primary indicator | Secondary indicator
1 | Quality A  | Quality assurance A1
2 | Quality A  | Quality process A2
3 | Quality A  | Quality result A3
4 | Progress B | Progress completion result B1
5 | Funding C  | Funding management C1
6 | Funding C  | Result of expenditure C2
7 | Service D  | After-sales service D1
8 | Service D  | Contract service D2
Tab.1 Evaluation indicator system of equipment procurement contract performance
Quality evaluation indicator
Equipment quality evaluation indicator A refers to the quality of the process of fulfilling the equipment procurement contract and of the equipment after completion; it includes three secondary indicators: quality assurance, quality process, and quality results.
Quality assurance evaluation indicator
The equipment quality assurance evaluation indicator A1 focuses on the quality management system, quality assurance program, and equipment production readiness status. This indicator is a qualitative indicator, obtained through expert scoring, with a score of 0 to 1.
Quality process evaluation indicator
The equipment quality process evaluation indicator A2 focuses on the management of equipment procurement supporting equipment, equipment research and production process management, and substandard product management. This indicator is a qualitative indicator, obtained through expert scoring, with a score of 0 to 1.
Quality result evaluation indicator
The equipment quality result evaluation indicator A3 focuses on the equipment qualification rate (key parts qualification rate, important parts qualification rate, and general parts qualification rate). This indicator is a quantitative indicator, calculated by Formula 1, with a score of 0 to 1. In Formula 1, x_A3 represents the evaluation score of equipment quality results, x_1 the pass rate of key parts, x_2 the pass rate of important parts, and x_3 the pass rate of general parts.
Progress evaluation indicator
The equipment procurement progress evaluation indicator B refers to the completion result of the equipment procurement contract fulfillment schedule. This indicator is a quantitative indicator, calculated by Formula 2, with a score of 0 to 1.
In Formula 2, x_B represents the evaluation score of equipment procurement progress, x_actual the actual value of equipment procurement progress, and x_contract the value of equipment procurement progress agreed in the contract.
Funding evaluation indicator
Equipment procurement expenditure evaluation indicator C refers to the funding management and use after the completion of the equipment procurement contract, which includes two secondary indicators: funding management and funding expenditure.
Funding management evaluation indicator
The evaluation indicator C1 of equipment procurement expenditure management focuses on the special situation of equipment procurement expenditures, the rationality of expenditure, and the timeliness of payment of contract expenditures. This indicator is a qualitative indicator, obtained through expert scoring, with a score of 0 to 1.
Expenditure evaluation indicator
The evaluation indicator C2 of equipment procurement expenditures focuses on the relationship between the actual expenditures and the expenditures agreed in the contract. This indicator is a quantitative indicator, calculated by Formula 3, with a score of 0 to 1. In Formula 3, x_C2 represents the evaluation score of equipment procurement expenditure, x_actual the actual value of equipment procurement expenditure, and x_contract the value of equipment procurement expenditure agreed in the contract.
Service evaluation indicator
Equipment procurement service evaluation indicator D refers to the performance of the equipment procurement contract, including two secondary indicators: after-sales service and contract service.
After-sales service evaluation indicator
The equipment procurement after-sales service evaluation indicator D1 focuses on the inspection of equipment delivered to the army after completion of the equipment procurement contract; training; equipment instruction manuals and technical information; equipment problem handling; and technical support for special tasks in wartime and emergencies. This indicator is a qualitative indicator, obtained through expert scoring, with a score of 0 to 1.
Contract service evaluation indicator
Equipment procurement contract service evaluation indicator D2 focuses on contract management such as contract signing, modification, dispute settlement, and contract information management. This indicator is a qualitative indicator, obtained through expert scoring, with a score of 0 to 1.
Fundamental
In 1986, Rumelhart, Hinton, and Williams proposed the error back-propagation training algorithm for artificial neural networks (the BP (Back Propagation) algorithm). A BP neural network is a complex nonlinear system composed of a large number of simple, interconnected neurons, as shown in Fig.2.
The basic structure of BP neural network
A three-layer BP neural network model is established for equipment procurement contract performance evaluation. The input layer has 8 nodes, corresponding to the eight secondary evaluation indicators; the output layer has 1 node, the evaluation value of the contract performance result; the hidden layer is set to 17 nodes; and the topological structure is 8-17-1, as shown in Fig.3.
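As a concrete illustration of the 8-17-1 topology just described, the sketch below trains such a network with plain NumPy. The sigmoid activation, learning rate, iteration count, and synthetic training data are assumptions made for this sketch; the paper does not specify its activation function or training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 8-17-1 topology from the paper; weights initialized small and random.
W1 = rng.normal(scale=0.5, size=(8, 17)); b1 = np.zeros(17)
W2 = rng.normal(scale=0.5, size=(17, 1)); b2 = np.zeros(1)

# Synthetic training set: 8 indicator scores in [0, 1] per contract;
# the target is a made-up weighted mean, standing in for expert overall scores.
X = rng.uniform(size=(40, 8))
w_true = np.array([0.2, 0.1, 0.15, 0.15, 0.1, 0.1, 0.1, 0.1])
y = (X @ w_true)[:, None]

lr = 0.5
for epoch in range(2000):            # plain gradient-descent back-propagation
    h = sigmoid(X @ W1 + b1)         # hidden layer (17 nodes)
    out = sigmoid(h @ W2 + b2)       # output layer (1 node)
    err = out - y                    # dL/dout for L = 0.5 * mean(err^2)
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

print("final MSE:", float(np.mean((out - y) ** 2)))
```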
Training result
Running the BP neural network model, the allowable network error is reached after 50 training iterations, and the model construction is complete. The error change process during model training is shown in Fig.4 (Training process).
Model validation
Five sets of test samples are used to validate the model, and the verification results are shown in Tab.3. The maximum absolute error between the calculated results and the actual values is 0.029, and the minimum is 0.0086. Taking an absolute error of less than 0.02 as the standard, and requiring an accuracy rate above 75%, the model is considered established. As the table shows, the accuracy rate of the evaluation model is 80% (only one test sample exceeds the error threshold), which shows that the equipment procurement contract performance evaluation model is established.
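The acceptance rule described above (absolute error below 0.02 and an accuracy rate above 75%) can be expressed in a few lines; the error list below is illustrative, with only its maximum and minimum matching the values quoted from Tab.3.

```python
# Rule from the text: a test sample passes if |predicted - actual| < 0.02;
# the model is accepted if the pass rate exceeds 75%.
abs_errors = [0.0086, 0.012, 0.015, 0.018, 0.029]  # illustrative; max/min as quoted

passed = sum(e < 0.02 for e in abs_errors)
accuracy = passed / len(abs_errors)
print(f"accuracy = {accuracy:.0%}")            # -> 80% (4 of 5 samples pass)
print("model established:", accuracy > 0.75)   # -> True
```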
Case study
Taking four equipment procurement contracts as examples and using the trained equipment procurement contract performance evaluation model, the performance evaluation conclusions are obtained. It can be seen from Tab.4 that the performance of equipment procurement contract A is positive (0.88), while that of contract D is negative (0.33); problems in quality results, progress completion effects, contract services, and other areas lead to the negative overall evaluation result for contract D.
Conclusion
With the in-depth advancement of competitive equipment procurement, the performance of equipment procurement contracts has increasingly become an important aspect of inspecting the quality and effectiveness of equipment procurement, so theoretical research on its evaluation is particularly important. In accordance with the principle of being "refined, operable, and quantifiable," this paper established an evaluation indicator system for equipment procurement contract performance along the four dimensions of quality, schedule, funding, and service, and used a BP neural network model to carry out the evaluation. In the next step, we will further study the evaluation indicators and standards of equipment procurement contract performance, establish a quantifiable evaluation indicator calculation model, and improve the credibility of equipment procurement contract evaluation. | 2021-05-11T00:05:52.056Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "91fb1f6c798097be2b4cb02aa5dcb556008f2dde",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1757/1/012188",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a5d063613d889942f6e3f3e3d70b1d4cc9142eb9",
"s2fieldsofstudy": [
"Engineering",
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Business",
"Physics"
]
} |
187489229 | pes2o/s2orc | v3-fos-license | Model to build cost competitiveness through material productivity – a case study
Companies are facing increased customer expectations and cutthroat market competition. Organizations are driving three major performance measures: market outperformance, incremental margin, and cash flow conversion. As companies continue to outsource large portions of their manufacturing, managing material costs in the supply chain is important for reducing overall costs and remaining competitive, ensuring that all supply chain partners, particularly in the upstream supply chain, survive and take part in future growth. The purpose of this paper is to present a detailed analysis of the various cost factors that affect organizational performance and to develop a unique model for a material productivity program. The key criteria include innovative cost reduction ideas, a continual focus on eliminating waste in the supply chain, and excellence in executing these projects. As the case industry has become global, it is essential for it to carry out structured and sustained material cost reduction activity in order to capture the potential market through cost leadership and to emerge as the best-cost supplier among the other plants. This research work discusses in detail Indian market conditions, the changing customer needs due to the entry of global multinationals, the new challenges faced in local and global markets and how the industry responds to them; it also spells out the changing customer demand for reduced cost, the challenge of price escalations of various input costs, and the processes the case industry follows to reduce cost, and it suggests a cost reduction methodology to achieve sustained cost reduction year on year. The results show a 4% reduction in material costs and quality improvement in the production of automotive ancillary components.
Introduction
Every organization aims for profit and competitiveness so that it can sustain itself in the market. The challenge is that, despite increases in material and manufacturing costs, material cost remains the largest contributor to product cost and thus plays a significant role in determining the revenue of the company. The case industry is a multinational, and considerable opportunity is available across the world, so it is important for the case industry to carry out a structured, sustained material cost reduction activity in order to capture the potential market and to emerge as a competitive supplier. This work discusses in detail the market conditions of India, fluctuating customer demands, and the fresh cost challenges faced in the local and international markets, and how the industry reacts to them. The objective of this study is to demonstrate the changing customer demand for reduced costs, the challenge of input costs, and the procedures the case industry follows to lessen cost, and to develop a method for refining the cost competitiveness of the upstream supply chain.
Literature review
In "Complementarities and Cost Reduction: Evidence from the Auto Supply Industry" [1], Susan Helper (April 1997) showed the need for Japanese corporations to manage costs across the entire company. According to the Aqua MCG Special Report on supply chain cost reduction [2], meeting end-customer expectations and maximizing value or return to investors are the key requirements for any organization. General Motors seeks cost reduction by setting up a globally competitive market, while Ford and Chrysler try to attain the same goal by making long-term commitments to a few firms. A voice relationship with suppliers reduces the customer's bargaining power (Helper and Levine, 1992). The incremental strategy depends on cost reduction to create shareholder value by improving capital and labor productivity. As mentioned in the Ernst & Young report [3] on cost competitiveness, "from complexity to confidence" [5], during the last few years of economic and market volatility, reducing costs has been a constant focus of management around the world.
According to Neil De Koker [4] (2002), managing director of the OESA (Original Equipment Suppliers Association), merger and acquisition activities do not provide any increased margins. In most industries, sourcing and procurement play an important role, as a company's profitability rests substantially on its ability to obtain goods and services at the lowest total cost. Refer to Figure 1 below.
Cost reduction framework
The framework followed for reducing the cost in the case industry is shown below in figure 2.
Figure 2
Cost reduction methodology.
Cost Effectiveness Model
This model is applied explicitly to the tier 1 and tier 2 suppliers of the case industry. In the triangular model developed, the bottom of the pyramid seeks the support of the customer, and as it reaches the top of the pyramid, it drives development so that the suppliers can build cost competitiveness on their own. The model is shown in Figure 3. Bar turning to cold forging of turned parts: the traditional practice is turning bar stock using single-spindle and multi-spindle automatic machines, in which about 20-30% of the material is wasted in the form of burr removal. Replacing this machining with a cold forging method resulted in an overall cost reduction of 30-40% per part. The before and after comparison is shown in Figure 4.
Figure 4
Hose adaptor manufacturing - process change.
Gravity die casting (GDC) to pressure die casting (PDC) of aluminium castings: there is a possibility to reduce the weight of the input alloy material through conversion from GDC to PDC. The investment cost of the PDC die is higher than that of the GDC die, so converting from GDC to PDC depends on a return on investment (ROI) calculation; a simple payback sketch is given after the figure caption below. One such example of GDC-to-PDC conversion is shown in Figure 5. Because of the above cost reduction ideas, the material cost of the product was reduced significantly, by 23%. The benefit was also shared with the customer, and thus the case industry was able to offer the product at the best competitive rate in comparison with the competitor. The price comparison is shown in Figure 10.
Figure 10
Price comparison of spring brake actuator (after).
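Since the GDC-to-PDC decision above hinges on an ROI calculation, the following minimal payback sketch shows the form such a calculation takes; every number in it (extra die cost, per-part saving, annual volume) is hypothetical and chosen only to illustrate the arithmetic.

```python
def payback_years(extra_die_cost, saving_per_part, parts_per_year):
    """Simple payback period for the costlier PDC die relative to GDC."""
    annual_saving = saving_per_part * parts_per_year
    return extra_die_cost / annual_saving

# Hypothetical figures for illustration only:
extra_die_cost = 1_500_000   # additional tooling cost of PDC vs GDC (currency units)
saving_per_part = 25         # material + machining saving per casting
parts_per_year = 40_000

years = payback_years(extra_die_cost, saving_per_part, parts_per_year)
print(f"payback ~ {years:.1f} years")  # convert only if this beats the ROI hurdle and die life
```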
Conclusion
The work on material productivity resulted in synergy among different functions, global alignment, and an improved supply chain. A material productivity management process can support the organization to concentrate and bring in ownership with all the stakeholders. What was so far thought not possible in meeting international expectations on material productivity has been changed by the approach of working on the system, the processes, and the structures. This brought a cultural change within the organization to accept material productivity as an organizational requirement.
The model on upstream supply chain competitiveness gave a road map on how to bring about change with tier 1 and tier 2 suppliers, to build their capability and make them self-driven in bringing about changes at their factories. | 2019-06-13T13:18:35.354Z | 2018-09-20T00:00:00.000 | {
"year": 2018,
"sha1": "c5ccb110230280760f0fa846b3a50730793ba9c0",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/402/1/012122",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2622d9a1333f262ae9208e9629de857b31cd25b4",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
10362012 | pes2o/s2orc | v3-fos-license | Parenteral Nutrition–Associated Liver Injury and Increased GRP94 Expression Prevented by ω-3 Fish Oil–Based Lipid Emulsion Supplementation
ABSTRACT Objective: Parenteral nutrition in infants with gastrointestinal disorders can be lifesaving, but it is also associated with parenteral nutrition–associated liver disease. We investigated the effects of incorporating ω-3 fish oil in a parenteral nutrition mixture on signs of parenteral nutrition–associated liver disease and explored the mechanism involved in this process. Methods: Seven-day-old New Zealand rabbits were divided into 3 groups of 8, and for 1 week they were infused via the right jugular vein with standard total parenteral nutrition with soybean oil (TPN-soy) or TPN with ω-3 fish oil–based lipid emulsion (TPN-FO), or naturally nursed with rabbit milk (control). Serum and liver tissues were analyzed for serological indicators and pathology, respectively. Reverse-transcriptase polymerase chain reaction was used to evaluate the messenger RNA levels of the endoplasmic reticulum stress chaperone protein glucose-regulated protein 94 (GRP94) in liver tissues and GRP94 protein levels were compared through immunohistochemistry and Western blot assays. Results: TPN-soy animals had significantly higher serum total bilirubin, direct bilirubin, and γ-glutamyl transpeptidase and lower serum albumin than the controls (P < 0.01, each) or the TPN-FO group, which were similar to the controls (P < 0.01 cf. TPN). Damage to liver tissues of the TPN-FO group was much less than that of the TPN-soy group. GRP94 messenger RNA and protein levels in liver tissues of TPN-soy animals were significantly higher than that of the controls or TPN-FO rabbits, which were similar to the controls. Conclusions: Incorporating ω-3 fish oil in parenteral nutrition emulsion greatly prevented liver dysfunction and liver tissue damage in week-old rabbit kits, possibly by preventing endoplasmic reticulum stress.
Parenteral nutrition, or intravenous feeding, provides nutritional support for infants who do not have adequate gastrointestinal function. It can be a lifesaving therapy for newborn patients. Since the first documented report of an infant given parenteral nutrition was published in 1944, >30,000 neonates have survived through its use (1). The treatment is, however, not without its inherent risks, among which parenteral nutrition-associated liver disease (PNALD) is a common complication. Approximately 30% to 60% of the infants who require long-term parenteral nutrition develop PNALD, with abnormalities of liver function and hepatic damage (2). The latter can lead to cholestasis, especially in premature infants, and may result in life-threatening liver cirrhosis (3). Although the pathogenesis of parenteral nutrition-associated cholestasis is not entirely understood, in severe cases bile duct regeneration, portal inflammation, and fibrosis are contributing factors (4).
Soybean oil is composed mainly of ω-6 polyunsaturated fatty acids (PUFAs). Recent evidence suggests that lipid emulsions that consist of soybean oil in parenteral nutrition mixtures may have an essential role in the onset of subsequent liver damage (1,5). On the contrary, fish oil is rich in ω-3 PUFAs, and fish oil-based lipid emulsions may be hepatoprotective and prevent PNALD (5,6). Moreover, the ω-3 PUFAs in fish oil are relatively safe and can be used in neonates (7) and preterm infants (8). Nevertheless, the mechanism associated with putative ω-3 PUFA-mediated hepatoprotection is unclear.
The endoplasmic reticulum (ER) is the site of synthesis and folding of secretory proteins. Disturbances in ER function may cause the unfolded protein response and endoplasmic reticulum stress (ER stress), eventually leading to cell death and many human diseases (9,10). Hepatocytes are secretory cells that are rich in ER, and ER stress in hepatocytes is closely associated with the pathogenesis of liver diseases (11,12). Glucose-regulated protein 94 (GRP94), a member of the heat shock protein 90 family, contributes to the regulation of protein folding in the ER and thus the control of ER stress (13,14).
In the present study, we evaluated the effects of total parenteral nutrition (TPN) containing ω-3 fish oil or ω-6 soybean oil on PNALD, by monitoring GRP94 levels in neonate rabbits.
Experimental Assignment and Establishment of TPN Model
The animal ethics committee of the Children's Hospital Affiliated to Soochow University granted approval for this study. Seven-day-old full-term New Zealand white rabbits (male and female, n = 24, weighing 100-120 g) were obtained from Wuxi Huishan Jiangnan Experimental Animal Centre (animal license number SCXK [Su] 2009-0005), Jiangsu, China. All of the rabbits were nursed by their mother before arrival. During the experimental period, the rabbits were maintained in an incubator at 26°C to 28°C and 40% to 60% humidity, under a 12 hour/12 hour light/dark cycle.
The rabbits were randomly and equally divided into 3 groups of 8, to be sustained for 1 week on total parenteral nutrition with soybean oil (TPN-soy) via infusion, TPN containing fish oil (TPN-FO) via infusion, or naturally nursed with rabbit milk only (control).
In the TPN groups, animals were infused with the nutrient mix; the total daily volume of intravenous nutrient solution for each rabbit in the TPN groups was 240 mL/kg, infused within 24 hours. The TPN regimen was sustained for 7 days, as previously described (15); the components in the mix were purchased from Sino-Swed Pharmaceutical, Beijing, China (Tables 1 and 2). Anesthesia was implemented with intraperitoneal injection of chloral hydrate (0.3 g/kg body weight). The rabbit was then placed in a horizontal dorsal decubitus position on the surgical table, and its legs were fixed to the extremities of the table. Skin sterilization was performed with benzalkonium bromide solution. For injection, the jugular vein was located, and a 10-gauge angiocatheter with a 1.2-mm silica gel tube was inserted approximately 1.5 cm into the superior vena cava. The tail end of the silica gel tube was led out through a 0.5-cm incision in the dorsal scapular area, which was used as a subcutaneous tunnel exit. To avoid detachment, the end of the silica gel tube was connected to a rotating device.
Each 240-mL portion of TPN comprised 210 kcal, and the ratio of sugar to lipid was 1.4:1. Fat in each TPN group was given at 40 mL · kg^−1 · day^−1; components are listed in Table 1.
Serological Evaluation
To evaluate the relevant serological indicators, rabbits were anesthetized by intraperitoneal injection of 10% chloral hydrate. Two milliliters of blood were collected by cardiac puncture into lithium heparin anticoagulant tubes. After centrifuging at 3500 rpm, the serum was carefully separated and stored at −20°C until used. Before analysis, the serum samples were removed from the −20°C refrigerator and incubated overnight at 4°C. The total bilirubin, direct bilirubin, alanine aminotransferase, aspartate aminotransferase, total protein, albumin, γ-glutamyl transpeptidase (γ-GT), alkaline phosphatase (ALP), triglyceride, total cholesterol, and prealbumin levels were examined using a Hitachi 7600 automated chemistry analyzer (Hitachi, Tokyo, Japan).
Pathological Examination
In addition to collecting blood samples (described above), liver tissues were collected after the animals were anesthetized. The abdomen was opened and the liver tissues were carefully removed. Some of the tissues were stored at −80°C for analysis of GRP94 messenger RNA (mRNA) and GRP94 protein (described below), whereas other portions were used for immunohistochemical (below) or histopathological analysis.
After washing with normal saline, tissue samples were fixed in 10% paraformaldehyde, dehydrated through an alcohol series, cleared in xylene, and embedded in paraffin. Paraffin-embedded tissues were sectioned (5-µm thick) with a microtome. For histopathological comparisons, sections were dried, deparaffinized, and stained with hematoxylin and eosin. A pathologist experienced in liver disease reviewed the histology slides.
Reverse-Transcriptase Polymerase Chain Reaction
The mRNA levels of GRP94 were detected via reverse-transcriptase polymerase chain reaction. Liver tissues were removed from the −80°C refrigerator and crushed. Total RNA was extracted using Trizol reagent in accordance with the manufacturer's instructions (Invitrogen, Carlsbad, CA), and the amount and purity were evaluated with an ultraviolet spectrophotometer.
A total of 1 µg RNA was used to synthesize complementary DNA using a Reverse Transcriptase Kit (Promega, Madison, WI). PCR primers were designed using Primer 5.0 software, compared with the GenBank database for identification, and synthesized by Sangon Biotech (Shanghai, China). PCR amplification was performed with the GRP94 gene upstream primer (5′-AGGAAACACTCTGGGACG-3′) and downstream primer (5′-ATTCAGGTACTTAGGCATC-3′), producing an amplified fragment of 583 bp. Amplification with the glyceraldehyde 3-phosphate dehydrogenase (GAPDH) gene upstream primer (5′-GTTTGTGATGGGCGTGAA-3′) and downstream primer (5′-CGAAGGTAGAGGAGTGGGTG-3′) produced an amplified fragment of 497 bp. The PCR reaction included 200 µmol/L dNTP, 2 U Taq DNA polymerase, 0.2 µmol/L of each of the upstream and downstream primers, 5 µL template complementary DNA, and ddH2O to reach a total volume of 50 µL. Reaction conditions were predenaturation at 94°C for 10 minutes; denaturation at 94°C for 45 seconds, annealing at 55°C for 30 seconds, and extension at 72°C for 30 seconds, for a total of 35 cycles; and a final extension at 72°C for 7 minutes.
After electrophoresis in a 1.5% agarose gel, ethidium bromide-stained bands were visualized by ultraviolet transillumination, and the fluorescence intensity was semiquantified using a Bio2239 gel analysis system (Bio-Print, Chicago, IL).
Immunohistochemistry
Some of the paraffin-embedded tissue sections (described above) were used for immunohistochemical analysis. The glass slides used for immunohistochemistry were precoated with poly-L-lysine (Wuhan Boster Biological Technology, Wuhan, China). Immunostaining was carried out with a streptavidin-peroxidase kit obtained from Suzhou Enmaike Bio-Tech (Suzhou, China) in accordance with the manufacturer's instructions. Three nonoverlapping fields were randomly selected under ×400 magnification.
The cells positively stained with anti-GRP94 primary antibody appeared with brown-yellow granules in the cytoplasm of hepatocytes. The intensity of immunohistochemical staining was analyzed using Image-Pro-Plus image analysis software. An average gray value supplied by the software was used to reference the intensity of GRP94 staining (i.e., normalized to an internal control).
Western Blot
Total protein was extracted from cells using lysis buffer. Protein concentrations were measured and equal amounts of protein extracts were resolved using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), which were then transferred to a polyvinylidene fluoride membrane (Millipore, Temecula, CA). Membranes were blocked with blocking buffer for 2 hours, and then incubated with primary antibody against GRP94 or β-actin (1:1000 dilution) at 4°C overnight. After washing, membranes were incubated with ALP-conjugated goat anti-rabbit secondary antibody (1:600 dilution) at room temperature. Immunobands were visualized using an ALP kit (WesternBreeze; Invitrogen). To quantify protein levels, the expression bands of target proteins were analyzed, and the densitometric values were used to conduct statistical analysis. The housekeeping protein β-actin was used as an internal control.
Statistical Analyses
Data were analyzed using SPSS 17.0 software (SPSS Inc, Chicago, IL) and are presented as mean ± standard deviation. Statistical significance was determined using 1-way analysis of variance. P < 0.05 was recognized as significantly different.
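The paper performed one-way ANOVA in SPSS; an equivalent check in Python is sketched below, where the three arrays are placeholder values standing in for one serological indicator measured in the control, TPN-soy, and TPN-FO groups (n = 8 each), not the study's raw data.

```python
from scipy.stats import f_oneway

# Placeholder measurements for the three groups of 8 rabbits (not the study's data).
control = [10.1, 9.8, 11.2, 10.5, 9.9, 10.7, 10.3, 10.0]
tpn_soy = [18.4, 19.1, 17.8, 18.9, 19.5, 18.2, 18.7, 19.0]
tpn_fo  = [10.6, 10.2, 11.0, 10.8, 10.4, 10.9, 10.1, 10.5]

f_stat, p_value = f_oneway(control, tpn_soy, tpn_fo)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
# As in the paper, p < 0.05 is read as a significant between-group difference.
```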
Serological Indicators
No statistical significance was found in the levels of total protein, alanine aminotransferase, ALP, aspartate aminotransferase, triglyceride, total cholesterol, or prealbumin among the 3 groups (P > 0.05, Table 3). Compared with control animals, TPN-soy animals had significantly higher serum total bilirubin, direct bilirubin, and γ-GT levels, but lower albumin (P < 0.01). No statistical differences were detected in the levels of these indicators in TPN-FO animals compared with the controls (P > 0.05, each). Compared with the TPN-soy group, serum total bilirubin, direct bilirubin, and γ-GT were significantly lower in the TPN-FO group (F = 1247.40, 1037.94, 971.09, respectively; P < 0.01 each), and albumin was significantly higher (F = 70.31, P < 0.01).
Liver Pathology
Histological examination of the liver tissues obtained from the 3 experimental groups revealed that those of the control rabbits appeared normal, with intact hepatocytes (Fig. 1A) and without any signs of hepatocyte degeneration, necrosis, inflammatory cell infiltration, cholangiectasis, bile duct epithelial hyperplasia, or cholestasis. In the liver tissues of the TPN-soy group, inflammatory cell infiltration, diffuse hepatic steatosis, and disrupted hepatic cord structure were, however, evident (Fig. 1B), but there was no cholestasis or liver fibrosis, and the hepatic lobule was still visible.
In the TPN-FO group, only mild hepatic steatosis and inflammatory cell infiltration were found (Fig. 1C); the morphology of hepatocytes was normal, and there was no cholangiectasis, bile duct epithelial hyperplasia, or cholestasis.
GRP94 mRNA Levels in Liver Tissues
Hematoxylin and eosin staining of liver tissues revealed that rabbits given TPN containing fish oil sustained only mild hepatic steatosis and inflammatory cell infiltration compared with the animals infused with TPN containing soybean oil. Considering that ER stress is associated with liver pathology and GRP94 participates in the regulation of ER stress, we then assessed the mRNA levels of GRP94 in the liver tissues of the different groups to investigate the molecular mechanism underlying the seeming hepatic protection of TPN-FO against TPN-soy-induced liver damage.
We found that the GRP94 mRNA levels in liver tissues of the TPN-soy group (1.217 ± 0.113, referenced to the gray value standard) were significantly higher than these levels in the control (0.614 ± 0.034, P < 0.01; Fig. 2) and also significantly higher than the GRP94 mRNA levels of the TPN-FO group (0.661 ± 0.117). The GRP94 mRNA levels of the TPN-FO group and the controls were similar.
GRP94 Protein Levels in Liver Tissues
To further our investigation of the mechanism underlying hepatic protection associated with TPN-FO, the protein levels of GRP94 in liver tissues were determined via immunohistochemistry and Western blot assays.
Immunostaining of these tissues showed that GRP94 protein levels in the liver tissues of the TPN-soy group (133.84 ± 13.66, referenced to the gray value standard) were significantly higher than those of the controls (78.14 ± 8.17, P < 0.01; Fig. 3) and also significantly higher than the GRP94 protein levels of the TPN-FO group (80.73 ± 9.36, P < 0.01), whereas the GRP94 protein levels of the TPN-FO group and the controls were similar.
The results obtained by Western blot were in accord with those of the immunostaining (Fig. 4). That is, there was no upregulation of GRP94 protein levels in the TPN-FO group (0.29 ± 0.03, relative optical density) as there was in the TPN-soy-treated animals (0.63 ± 0.04, P < 0.05), and the GRP94 protein levels of the TPN-FO group and controls (0.22 ± 0.01) did not differ significantly (P > 0.05). These data suggest that TPN-FO may prevent liver damage induced by TPN-soy, possibly by suppressing GRP94 upregulation and ER stress.
DISCUSSION
PNALD is a serious complication in patients, especially infants, who require long-term parenteral nutrition therapy. Here, we sustained 7-day-old rabbit kits for 1 week with TPN containing either soybean oil or ω-3 fish oil, to examine the role of ω-3 fish oil in preventing parenteral nutrition-associated liver injury. The effects of both of these were compared with normally nursed controls with regard to signs of PNALD. We found that, compared with the control group, the usual soybean oil parenteral nutrition was associated with significant liver dysfunction, as indicated by higher serum total bilirubin, direct bilirubin, and γ-GT levels and lower serum albumin. (Table 3 legend: ALP = alkaline phosphatase; ALT = alanine aminotransferase; AST = aspartate aminotransferase; γ-GT = γ-glutamyl transpeptidase; TPN = total parenteral nutrition; TPN-FO = TPN with ω-3 fish oil; TPN-soy = TPN with soybean oil. *P < 0.01 compared with TPN-soy. †P < 0.01 compared with the control.) These effects were not observed in the TPN-FO group, which was similar to the control group. Moreover, histological examination of liver tissues revealed hepatic damage in the TPN-soy group not seen in the TPN-FO group, including inflammatory cell infiltration, diffuse hepatic steatosis, and disrupted hepatic cord structure. These observations are consistent with previous reports (15). Rats given TPN were found to have higher levels of the ER stress marker protein CHOP (C/EBP homologous protein), and also higher levels of molecules that are proapoptotic under ER stress, c-Jun NH2-terminal kinase (JNK1/2), and p38 MAPK (16). This suggests that ER stress is induced by TPN therapy. GRP94 is also a marker for ER stress, and in the present study we detected higher levels of GRP94 mRNA and GRP94 protein in the liver tissues of the rabbits given TPN-soy. Therefore, ER stress may participate in TPN-mediated liver damage in both rats and rabbits. Furthermore, it was reported that in normal liver L02 cells cultured in vitro, ER stress contributed to the progression of PNALD (17). In that report, ER stress was induced with palmitate, which led to the upregulation of tribbles homolog 3 (TRB3), a pseudokinase that is known to be involved in the pathogenesis of PNALD. Therefore, ER stress seems to be an important contributing factor in the pathogenesis and progression of PNALD.
Although the etiology of PNALD is poorly understood, the soybean or combined soybean and safflower oils that are included in TPN are accepted as contributing factors (18). Both of these oils are rich in ω-6 fatty acids. It has been reported that ω-6 PUFAs generate proinflammatory mediators, which may contribute to the onset of liver diseases, whereas mediators derived from ω-3 PUFAs are largely anti-inflammatory (19). A randomized controlled trial conducted by Puder et al (20) showed that fish oil-based intravenous lipid emulsion was safe for infants with PNALD, and could reduce mortality and organ transplantation rates in children with short bowel syndrome. Consistent with their observations, Diamond et al (21) reported that ω-3 fatty acids may prevent PNALD by improving bile flow, inhibiting steatosis, and exerting immunomodulatory effects, although the molecular mechanism involved in this process remains unclear. In the present study, we found that substitution of ω-3 fish oil for soybean fat emulsion was associated with prevention of liver dysfunction, indicated by serology results, and of liver tissue damage, observed through histology. This implies that ω-3 fish oil may protect against PNALD. Moreover, levels of GRP94 mRNA and GRP94 protein in the kits given TPN with ω-3 fish oil were comparable with those of the control rabbits, and significantly lower than in those given TPN with soybean fat emulsion. These data indicate that TPN-induced liver injury was reduced in those given ω-3 fish oil, unlike those given soybean fat emulsion, and the mechanism may be associated with a reduction in ER stress through reduced GRP94 expression relative to soybean fat emulsion. Our findings are consistent with a previous report that glycyrrhizin, an active component of licorice root that has been used to treat chronic hepatitis, represses TPN-associated acute liver injury in rats by suppressing ER stress (16). There was a difference in the amount of α-tocopherol between the 2 TPN solutions. α-Tocopherol is a well-known lipophilic antioxidant that has the ability to scavenge peroxyl radicals (22). Nandivada et al (19) reported that the risk factors of cholestasis and hepatic injury observed in PNALD included elevated serum concentrations of phytosterols, an abundance of ω-6 PUFAs, and a relative paucity of α-tocopherol. Moreover, previous research indicated that α-tocopherol protected against CCl4-induced liver damage (22). Unfortunately, we did not investigate whether α-tocopherol plays a role in the prevention of PNALD in the TPN-FO group.
In summary, our present study showed that substitution of ω-3 fish oil for soybean fat emulsion in TPN greatly prevented liver dysfunction and liver tissue damage in week-old rabbit kits, possibly by preventing ER stress. Our study may provide valuable evidence for the use of ω-3 fish oil for preventing PNALD in infants. | 2016-05-15T12:39:42.399Z | 2014-11-24T00:00:00.000 | {
"year": 2014,
"sha1": "e43d34716e8fd59e99b5033752f725bfc1f267f3",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc4255760?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "e43d34716e8fd59e99b5033752f725bfc1f267f3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234358730 | pes2o/s2orc | v3-fos-license | Smooth rational projective varieties with non-finitely generated discrete automorphism group and infinitely many real forms
We show, among other things, that for each integer $n \ge 3$, there is a smooth complex projective rational variety of dimension $n$, with discrete non-finitely generated automorphism group and with infinitely many mutually non-isomorphic real forms. Our result is inspired by the work of Lesieutre and the work of Dinh and Oguiso.
Introduction
It is quite recent that negative answers were given to the following long-standing natural questions (see e.g. [BS64], [DIK00], [Kh02], [CF20] for positive directions): Question 1.1. Let V be a smooth complex projective variety of dimension ≥ 2.
(1) Is the automorphism group Aut(V ) finitely generated if Aut(V ) is discrete?
(2) Are real forms of V , i.e., systems of homogeneous equations with real coefficients defining V , finite up to isomorphisms over R?
The first negative answers to these questions are given by Lesieutre [Le18]. He constructs a smooth complex projective variety V of dimension 6 with Kodaira dimension κ(V ) = −∞ denying both (1) and (2). This variety is not rationally connected. Expanding his idea, Dinh and Oguiso ( [DO19]) construct a smooth complex projective variety V of any dimension ≥ 2 with κ(V ) ≥ 0 again denying both (1) and (2). In somewhat different directions, Dubouloz, Freudenburg, Moser-Jauslin construct smooth affine rational varieties for any dimension ≥ 4 with infinitely many real forms ( [DFMJ21]). However, it is still completely open if there are counterexamples among smooth complex projective rational varieties, the most basic varieties in birational algebraic geometry.
The aim of this paper is to construct a smooth complex projective rational variety V of any dimension ≥ 3 denying both (1) and (2) (Theorem 1.3 below).
Before stating our main results, we recall precise definitions of crucial notions relevant to Question 1.1 and our main results.
(1) A variety of dimension n is called rational if it is birational to the projective space P^n over the base field.
(2) An R-scheme W → Spec R is called a real form of a C-scheme V → Spec C if W ×_{Spec R} Spec C → Spec C is isomorphic to V → Spec C over Spec C. Two real forms W_i → Spec R (i = 1, 2) are isomorphic if they are isomorphic over Spec R.
2010 Mathematics Subject Classification. 14J50, 14P05. The first named author is supported by the Tier 2 grant MOE-T2EP20120-0010. The second named author is supported by JSPS Grant-in-Aid (S) 15H05738, JSPS Grant-in-Aid 20H00111, 20H01809, and by NCTS Scholar Program. The third named author is supported by NSFC (No.12071337, No. 11701413, and No. 11831013).
By abuse of language, we sometimes say that a C-scheme V is defined over R when a real form W of V is understood from the context. (See [Se02] and [CF20] for more details about real forms.) (3) Let V → Spec C be a complex projective variety. Then the automorphism group Aut(V) := Aut(V/Spec C) of V over Spec C has a natural locally algebraic group structure with at most countably many connected components, via the Hilbert scheme of V × V. We denote by Aut^0(V) the identity component of Aut(V). It is of dimension dim H^0(V, T_V) when V is smooth. Here, T_V denotes the tangent bundle of V. So it is natural to ask if the group Aut(V)/Aut^0(V) is always finitely generated or not. We say that Aut(V) is discrete if Aut^0(V) is trivial. (See [Br19] for more details.) (4) We denote by κ(V) the Kodaira dimension of a smooth complex projective variety V. Then κ(V) ∈ {−∞, 0, 1, ..., n−1, n}, where n = dim V. The Kodaira dimension is a birational invariant in the sense that κ(V) = κ(V′) if V and V′ are smooth birational projective varieties. (See e.g. [Ue75] for more details.) The following is our main theorem: Theorem 1.3. (1) For each integer n ≥ 3, there is a smooth complex projective rational variety V of dimension n, with discrete, not finitely generated Aut(V), and with infinitely many mutually non-isomorphic real forms.
(2) Let V be a smooth complex projective variety of dimension n ≥ 3.
Our proof of Theorem 1.3 (1) and (3) is explicit and is based on the surfaces constructed in [Le18] and [DO19]. As in [Le18] and [DO19], the most crucial part of the construction is a realization of some non-finitely generated discrete subgroup G of Aut(S) of some special surface S as a finite index subgroup of the automorphism group Aut(V) of another variety V via taking some products and suitable blow-ups, so that V keeps the group G as automorphisms but kills almost all of Aut(S) \ G and at the same time produces essentially no new automorphisms. This process is, in general, hardest for rational varieties compared with other varieties, especially because of the last requirement that "V produces essentially no new automorphisms" (cf. [Le18, Page 198, Rem. 4]).
We are primarily interested in smooth complex projective varieties. However, concerning the base field of the non-finite generation part of Theorem 1.3, it might be worth mentioning the following: Remark 1.4. Let p be a prime number and k be an algebraically closed field containing the rational function field F_p(t). In the proof of Theorem 1.3 (1), we will use a special rational surface S defined over R constructed by Lesieutre [Le18]. (See Section 3.) Replacing S by a rational surface defined over F_p(t) in [Le18, Page 203], we find that for each n ≥ 3 and for each prime number p ≥ 3, there is a smooth projective rational variety V of dimension n defined over k, with discrete, not finitely generated Aut(V). Indeed, the construction and proof of Section 3 remain valid if we replace both R and C by k. Question 1.5. (1) Is there a smooth complex projective rational surface V with discrete, not finitely generated Aut(V)?
(2) Is there a smooth complex projective rational surface V with infinitely many mutually non-isomorphic real forms?
Unfortunately, our method is not available to answer Question 1.5. See [Be16], [Be17] for some constraint from complex dynamics.
As in [Le18] and [DO19], throughout this paper, we use the following three general facts frequently. Throughout this paper, we denote by c the complex conjugate map. Then c is the generator of the Galois group Gal(C/R) and Gal(C/R) = {id, c}.
Theorem 1.8. Let V be a smooth projective complex variety defined over R. Suppose that there is a finite index subgroup G of Aut(V) such that Gal(C/R) = {id, c} acts on G as the identity via g → c ∘ g ∘ c and G has infinitely many conjugacy classes of involutions. Then V has infinitely many mutually non-isomorphic real forms.
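The mechanism behind Theorem 1.8 is the standard description of real forms by Galois cohomology; the following sketch is recorded here only for the reader's convenience and is not part of this paper's argument.

\[
\{\text{real forms of } V\}/\!\cong \;\longleftrightarrow\; H^1\bigl(\operatorname{Gal}(\mathbb{C}/\mathbb{R}),\, \operatorname{Aut}(V)\bigr),
\]
where a cocycle is an element \(a \in \operatorname{Aut}(V)\) with \(a \cdot c(a) = \operatorname{id}\). If \(\operatorname{Gal}(\mathbb{C}/\mathbb{R})\) acts on a subgroup \(G \le \operatorname{Aut}(V)\) as the identity, then for \(a \in G\) the cocycle condition reads \(a^2 = \operatorname{id}\), and two cocycles \(a, a'\) in \(G\) are cohomologous precisely when \(a' = b^{-1}\, a\, c(b) = b^{-1} a b\), i.e. when they are conjugate. Hence infinitely many conjugacy classes of involutions in \(G\) produce infinitely many classes in \(H^1\), and the finite-index hypothesis allows one to pass from \(G\) to \(\operatorname{Aut}(V)\), yielding infinitely many mutually non-isomorphic real forms.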
For a complex projective variety X and non-empty closed algebraic subsets Y_i (i ∈ I) of X, we define Aut(X, Y_i (i ∈ I)) := {f ∈ Aut(X) | f(Y_i) = Y_i for all i ∈ I}. This is a subgroup of Aut(X) and f|Y_i ∈ Aut(Y_i) (i ∈ I) if f ∈ Aut(X, Y_i (i ∈ I)). For simplicity, we denote the group Aut(X, {P}) by Aut(X, P) if P is a closed point of X.
Whenever we consider a complex variety V with a natural real form (which will be understood from the construction in our case), we denote it by V_R. By abuse of notation, we denote the set of real points V_R(R) of V_R simply by V(R) and regard it as a subset of the set of closed points of V.
Lesieutre's surface
In this section, we recall from [Le18] the core rational surface, which we call Lesieutre's surface. Lesieutre's surface will play a crucial role in our proof of Theorem 1.3 (1).
Let L′_i (0 ≤ i ≤ 5) be six lines defined over R in P^2 such that the intersection points P_ij := L′_i ∩ L′_j are mutually distinct and the points P_ij, P_kl, P_mn are not collinear for any partition {i, j}, {k, l}, {m, n} of the six indices {0, 1, ..., 5}. We choose such six lines so that P_10 = 0, P_20 = 1, P_30 = 2, P_40 = 3, P_50 = ∞ under a fixed affine coordinate x of L′_0 = P^1. Let S → P^2 be the blow-up of P^2 at the 15 points P_ij.
We denote by E_ij ⊂ S the exceptional curve over P_ij and by L_i ⊂ S the proper transform of L′_i; we set C := L_0. Under the identification C = L′_0 via S → P^2, we may use the same affine coordinate x for C = P^1 as for L′_0. Then P_1 = 0, P_2 = 1, P_3 = 2, P_4 = 3, P_5 = ∞ with respect to the coordinate x.
Definition 2.1. We call this surface S Lesieutre's surface. By construction, S is defined over R, i.e., S = S_R ×_{Spec R} Spec C, where S_R is the blow-up of P^2_R := Proj R[x_0, x_1, x_2] at the R-rational points P_ij ∈ P^2_R(R). In order to distinguish it from other real forms, we call this S_R the natural real form of S.
By definition, Lesieutre's surface is a smooth projective rational surface defined over R.
(4) Every element of Aut(S) is defined over R with respect to the natural real form S R .
In particular, the Galois group Gal(C/R) acts on Aut(S) as identity.
Proof. The assertion (1) follows from the adjunction formula and (L_i, L_i) = −4 < 0. The assertion (2) is proved by [Le18, Thm. 3 (1)]. Note that Aut(S) preserves the divisor ∑_{i=0}^5 L_i by (1). Then the assertion (3) is clear, because C = L_0 is the unique irreducible component of ∑_{i=0}^5 L_i containing P_5. The first part of the assertion (4) is already explained. The second assertion of (4) is proved in the course of the proof of [Le18, Lem. 19]. We shall reproduce the proof here for the convenience of the readers. Since the curves E_ij and L_i are defined over R and their classes generate Pic(S) = NS(S), it follows that Gal(C/R) acts on Pic(S) as the identity. Thus Gal(C/R) acts on Aut(S) as the identity by (2). Note that the representation in (2) is equivariant under the Galois action.
By Proposition 2.2 (3), we have a representation r_C : Aut(S) → Aut(C), f → f|_C. Proposition 2.4. The group G satisfies: (1) Im(r_C) (resp. r_C(G)) contains the following elements; (2) G is not finitely generated.
(3) G has infinitely many conjugacy classes of involutions.
Proof. The fact that . This proves the assertion (1).
We show the assertion (2). The group G^+ is a subgroup of index two of G. So, by Theorem 1.6, it suffices to show that G^+ is not finitely generated.
Observe that r_C(G^+) is an abelian group, and it contains a subgroup which is not finitely generated. Since every subgroup of a finitely generated abelian group is finitely generated, the abelian group r_C(G^+) is not finitely generated, either. Hence G^+ is not finitely generated.
We show the assertion (3). As in [Le18, Page 204], we consider the subgroup G_ev of G defined there. It is shown by [Le18, Cor. 18] that G_ev contains infinitely many conjugacy classes of involutions. Then G_ev has infinitely many classes of involutions under the conjugation action of G on G_ev, as G_ev is a finite index normal subgroup of G. Hence G has infinitely many conjugacy classes of involutions as well.
Definition 2.5. Let S be Lesieutre's surface. We choose and fix τ_S ∈ G such that r_C(τ_S) = f_3, that is, r_C(τ_S)(x) = −x on C = P^1.
Proof of Theorem 1.3 (1)
We shall prove Theorem 1.3 (1). Construction 3.5 and Proposition 3.6 below will complete the proof of Theorem 1.3 (1). We employ the same notations for Lesieutre's surface as in Section 2. In the rest, the following elementary lemmas will be used frequently.
Lemma 3.1. Let Y and Z be complex projective varieties and let G be a subgroup of Aut(Y × Z). Assume that Aut(Y ) is discrete and the projection Y × Z → Z is equivariant with respect to G. Then G ⊂ Aut(Y ) × Aut(Z).
Proof. Let f ∈ G. By the second assumption, f is of the form f(y, z) = (f_z(y), f_Z(z)), where f_Z ∈ Aut(Z) and f_z ∈ Aut(Y). Then we have the morphism Z → Aut(Y), z → f_z. Since Aut(Y) is discrete by the first assumption, it follows that f_z does not depend on z. This implies the result. Proof. The group G of all automorphisms which fix each point of A is a finite-index subgroup of Aut(P^m, A). It is enough to show that G is trivial. This is true because if f is such an automorphism then it is given by a square matrix of size m + 1, which has at most m + 1 linearly independent eigenvectors.
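To spell out the eigenvector argument just used (a routine verification added here for convenience): suppose \(f \in \operatorname{PGL}_{m+1}(\mathbb{C})\) fixes \(m+2\) points of \(\mathbb{P}^m\) in general position.

\[
f = [M], \qquad M v_i = \lambda_i v_i \quad (1 \le i \le m+2),
\]
where \([v_1], \ldots, [v_{m+2}]\) are the fixed points. By general position, \(v_1, \ldots, v_{m+1}\) form a basis and \(v_{m+2} = \sum_{i=1}^{m+1} a_i v_i\) with all \(a_i \neq 0\). Then
\[
\lambda_{m+2} \sum_{i=1}^{m+1} a_i v_i \;=\; M v_{m+2} \;=\; \sum_{i=1}^{m+1} a_i \lambda_i v_i
\]
forces \(\lambda_i = \lambda_{m+2}\) for every \(i\), so \(M\) is a scalar matrix and \(f = \operatorname{id}\).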
The following generalization has its own interest. We will also apply it to abelian varieties in Section 5.
Lemma 3.4. Let X be any compact Kähler manifold of dimension n. There is a number N such that if A is a finite subset of X containing N points in general position, then Aut(X, A) is finite. In particular, when all morphisms P^{n−1} → X are constant (e.g. X is a complex torus), then Aut(X̃) is finite, where X̃ is the blow-up of X at the points in A.
Proof. The second assertion is a consequence of the first one because for such an X we have Aut(X̃) = Aut(X̃, E_A) ≅ Aut(X, A), where E_A is the set of exceptional divisors of X̃ → X.
The first assertion is a consequence of Fujiki–Lieberman's theorem ([Fu78, Thm. 4.8], [Li78]). Indeed, since Aut(X) is a complex Lie group of finite dimension and Aut^0(X) is associated to holomorphic vector fields of X, if P_1 ∈ X is a general point, then Aut(X, P_1) has dimension smaller than that of Aut(X). By induction, there exists N such that for general P_1, ..., P_{N−1}, the group Aut(X, P_1, ..., P_{N−1}) is discrete. It follows that the set of points which are fixed by some non-trivial element of this group is a countable union of proper analytic subsets of X. Choose P_N ∈ X outside this set. Then Aut(X, P_1, ..., P_{N−1}, P_N) is finite. Hence so is Aut(X, {P_1, ..., P_{N−1}, P_N}), because [Aut(X, {P_1, ..., P_{N−1}, P_N}) : Aut(X, P_1, ..., P_{N−1}, P_N)] ≤ N!.
Construction 3.5. Note that ι is defined over R. Let us choose a finite set R = {R_i | 0 ≤ i ≤ 2(m + 1)} ⊂ P^m(R) such that the R_i (0 ≤ i ≤ m + 1) are in general position in the sense that no m + 1 points of them are contained in a hyperplane of P^m. Then R is invariant under ι and ι ∈ Aut(P^m, R_0, R). Set X_0 := S × P^m, where S is Lesieutre's surface. Then X_0 is a smooth projective variety of dimension n = m + 2 defined over R with the natural real form X_{0,R} = S_R × P^m_R. We will use the same notation for the points and curves on S as in Section 2. Let π_1 : X_1 → X_0 be the blow-up of X_0 at the points in {P_5} × R ⊂ X_0(R). (Once again, see the end of the Introduction for the precise meaning of X_0(R).) We denote by T_{(P_5,R_0)}X_0 the tangent space of X_0 at (P_5, R_0). Denote also by E_0 = P(T_{(P_5,R_0)}X_0) = P^{m+1} ⊂ X_1 the exceptional divisor corresponding to the point (P_5, R_0) ∈ X_0 and by E_i (1 ≤ i ≤ 2(m + 1)) the remaining 2(m + 1) exceptional divisors. Then X_1 and E_0 are defined over R with natural real forms X_{1,R} and E_{0,R}. We choose a point [(v, w)] ∈ E_0(R), represented by some 0 ≠ (v, w) ∈ T_{(P_5,R_0)}X_0. Let π_2 : X_2 → X_1 be the blow-up at the point [(v, w)] in X_1(R). Then X_2 is defined over R with a natural real form X_{2,R} induced by X_{1,R}. We denote the exceptional divisor of π_2 by F.
Proposition 3.6. Let X_2 be as in Construction 3.5. Then: (1) X_2 is a smooth complex projective rational variety of dimension n = m + 2 ≥ 3 defined over R. (2) Aut(X_2) is discrete and not finitely generated. (3) X_2 has infinitely many mutually non-isomorphic real forms.
Proof. Set X = X 2 . We shall employ the same notation as in Construction 3.5.
The assertion (1) is clear by the construction. We show the assertions (2) and (3) by dividing the argument into several steps.
Claim 3.7. Aut(X_0) = Aut(S) × Aut(P^m). Proof. Recall that X_0 = S × P^m and H^0(X_0, −2K_{X_0}) = H^0(S, −2K_S) ⊗ H^0(P^m, −2K_{P^m}) by the Künneth formula. Since the linear system |−2K_S| consists of a single element by Proposition 2.2 (1), while −2K_{P^m} is very ample, the anti-bicanonical map coincides with the second projection p_2 : X_0 → P^m. Since the linear system |−2K_{X_0}| is preserved by Aut(X_0), it follows that the second projection p_2 : X_0 → P^m is Aut(X_0)-equivariant. Since Aut(S) is discrete by Proposition 2.2, the result follows from Lemma 3.1.
Claim 3.8. (1) Every morphism ϕ : P^{m+1} → X_0 is constant. (2) If ϕ : P^{m+1} → X_1 is a non-constant morphism, then ϕ(P^{m+1}) is one of the exceptional divisors E_i (0 ≤ i ≤ 2(m + 1)) of π_1. (3) Let ϕ : P^{m+1} → X_2 be a non-constant morphism. Then ϕ(P^{m+1}) is one of the following divisors: the proper transform E′_0 of E_0, the exceptional divisors E_i (1 ≤ i ≤ 2(m + 1)), or F. Proof. We show the assertion (1). Note that m + 1 ≥ 2. Since the Picard number ρ(S) ≥ 2, there is no surjective morphism P^{m+1} → S if m + 1 = 2. Therefore there is no non-constant morphism P^{m+1} → S or P^{m+1} → P^m by Lemma 3.2. Hence the morphism p_i ∘ ϕ is constant for the projections p_i (i = 1, 2) from X_0 = S × P^m to the i-th factor. Hence ϕ is constant.
Since π_1 ∘ ϕ is constant by (1), the assertion (2) follows. We show the assertion (3). Recall that the proper transform E′_0 of E_0 is the blow-up of E_0 ≅ P^{m+1} at the point [(v, w)]. Hence there is no surjective morphism P^{m+1} → E′_0. Therefore, by Lemma 3.2, E′_0 admits no non-constant morphism from P^{m+1}. This together with the assertion (2) implies the assertion (3) exactly for the same reason as in the proof of (2).
Let ε ∈ {0, 1}. We define the group H of pairs (ϕ, ι^ε). Here ι is the involution defined in Construction 3.5 and d(ϕ|_C)_{P_5} is the differential map of ϕ|_C : C → C at P_5. By definition, the index ε in (ϕ, ι^ε) ∈ H is uniquely determined by ϕ.
Proof.
Here we recall that C(v, w) is the 1-dimensional linear space in T_{(P_5,R_0)}X_0 spanned by (v, w) and the action of (ϕ, g) on C(v, w) is nothing but the differential map. Then, by Claim 3.9, we have [Aut(P^m, {R_0, R_1, ..., R_{2(m+1)}}) : Aut(P^m, R_0, R_1, ..., R_{2(m+1)})] < ∞. In particular, the number of g's in the definition of H′ is at most finite. Thus [H′ : H] < ∞.
The last assertion is clear by the definitions of G and H with the remark before Claim 3.10. This proves the claim.
Claim 3.11. (1) H′ is a finite index subgroup of Aut(X). (2) H is a finite index subgroup of Aut(X).
Proof. By Claim 3.8 (3), every f ∈ Aut(X) permutes the divisors {E′_0, E_i (1 ≤ i ≤ 2(m + 1)), F}; since this is a finite family, this implies the assertion (1). The assertion (2) follows from (1) and Claim 3.10. Now we are ready to complete the proof of Proposition 3.6 (2), (3). By Claims 3.10 and 3.11 (2), H ≅ G is a finite index subgroup of Aut(X). Since G is not finitely generated by Proposition 2.4 (2), Aut(X) is not finitely generated as well by Theorem 1.6. This proves Proposition 3.6 (2).
By the construction, X is defined over R. By Proposition 2.2 (4) and by the construction, the Galois group Gal(C/R) acts trivially on H. Since G has infinitely many conjugacy classes of involutions by Proposition 2.4 (3), the same holds for H because H ≃ G. Since H is a finite index subgroup of Aut(X), it follows from Theorem 1.8 ( [Le18,Lem.13]) that X has infinitely many mutually non-isomorphic real forms. This proves Proposition 3.6 (3).
Proof of Theorem 1.3 (2)
In this section, we prove Theorem 1.3 (2). Let V be a smooth complex projective variety of dimension n.
Consider the case where κ(V) = n. Then the pluricanonical map Φ_{|mK_V|} for large and divisible m is a birational map onto its image. Thus Aut(V) is a finite group by Theorem 1.7.
Next, consider the case where κ(V) = n − 1 ≥ 1. Then the geometric generic fibre V_η of the pluricanonical map is one-dimensional. This proves Theorem 1.3 (2).
From now, we consider the case where κ(V ) ≥ 0. For this, instead of Lesieutre's surface, we use the surface S 2 constructed by [DO19,Sect.4] to construct the desired varieties.
In the rest, we denote M := S_2. The surface M is constructed from a Kummer K3 surface of product type in [DO19, Sect. 4]. Since we will not use the explicit form of M, we do not repeat the detailed construction and just summarize the basic properties of the surface M that we will use. (1) is clearly satisfied. The first part of (2) follows from [DO19, Thm. 2.8, Lem. 4.6] and Theorem 1.6. For the last part of (2), we may choose one of the two points P′ or P″ in [DO19, Def. 2.7] as P. Let T be a complex abelian variety of dimension l, defined over R as an abstract variety. Let A be a finite subset of T such that Aut(T, A) is finite and c(A) = A under the complex conjugate map c of T with respect to T_R. Such a subset A exists. Indeed, by Lemma 3.4, there is a finite subset A′ ⊂ T such that Aut(T, A′) is finite, and we may take A = A′ ∪ c(A′), which still satisfies the hypothesis of Lemma 3.4. Let π : X_l → M × T be the blow-up at the points in {P} × A. Let E_i ≅ P^{l+1} (1 ≤ i ≤ |A|) be the exceptional divisors of π.
Claim 5.2. (1) X_l is a smooth complex projective variety defined over R with dim X_l = l + 2 and κ(X_l) = 0.
(2) Aut(X_l) is discrete and not finitely generated. Moreover, X_l has infinitely many mutually non-isomorphic real forms.
Proof. The assertion (1) is clear from the construction. We show the assertion (2). If l = 0, then the result follows from Proposition 5.1. From now, we assume that l ≥ 1. Let f ∈ Aut(X l ). Since T has no rational curve, it follows that π(f (E i )) ⊂ M × A.
Since f(E_i) ≅ P^{l+1} with l + 1 ≥ 2 and M is not covered by rational curves, as κ(M) = 0, it follows that π(f(E_i)) is a point. Since Aut(T, A) is finite, Aut(M, P) × {id_T} is a finite index subgroup of Aut(X_l). Hence H × {id_T} ≅ H, where H is the group in Proposition 5.1, is also a finite index subgroup of Aut(X_l). Hence Aut(X_l) is discrete and is not finitely generated by Proposition 5.1 (2) and Theorem 1.6. Then X_l has infinitely many mutually non-isomorphic real forms by Proposition 5.1 (3), (4) and Theorem 1.8.
Let Z m ⊂ P m+1 (m ≥ 1) be a smooth complex hypersurface of degree m + 3 defined over R. Set Y l+m := X l × Z m .
Claim 5.3. (1) Y_{l+m} is a smooth complex projective variety defined over R with dim Y_{l+m} = 2 + l + m and κ(Y_{l+m}) = κ(Z_m) = m.
(2) Aut(Y_{l+m}) is discrete and not finitely generated. Moreover, Y_{l+m} has infinitely many mutually non-isomorphic real forms.
Proof. Again, the assertion (1) is clear from the construction. We show the assertion (2).
Since |K_{X_l}| consists of a single element and K_{Z_m} is very ample, the canonical map Φ_{|K_{Y_{l+m}}|} : Y_{l+m} = X_l × Z_m → Z_m coincides with the second projection p_2 : Y_{l+m} → Z_m for the same reason as in the proof of Claim 3.7. In particular, the second projection p_2 is Aut(Y_{l+m})-equivariant. Since Aut(X_l) is discrete by Claim 5.2, it follows from Lemma 3.1 that Aut(Y_{l+m}) = Aut(X_l) × Aut(Z_m).
Since Aut(Z_m) is finite by Theorem 1.7, as before, the group H × {id_{Z_m}} ≅ H, where H is the group in Proposition 5.1, is a finite index subgroup of Aut(Y_{l+m}) by Claim 5.2. The result now follows for the same reason as in the last part of the proof of Claim 5.2. Theorem 1.3 (3) now follows from Claim 5.2 with l ≥ 1 and Claim 5.3 with l ≥ 0 and m ≥ 1. | 2020-02-13T02:00:58.010Z | 2020-02-11T00:00:00.000 | {
"year": 2020,
"sha1": "e479124b94975e4a603397fa738f22ac56a235b1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2002.04737",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "26a8281bd7299fbef685c08dad89852a71c61d7d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
219287995 | pes2o/s2orc | v3-fos-license | Preparation of a Ceramic Matrix Composite Made of Hydroxyapatite Nanoparticles and Polylactic Acid by Consolidation of Composite Granules
Composites made of a biodegradable polymer, e.g., polylactic acid (PLA) and hydroxyapatite nanoparticles (HAP NPs) are promising orthopedic materials. There is a particular need for biodegradable hybrid nanocomposites with strong mechanical properties. However, obtaining such composites is challenging, since nanoparticles tend to agglomerate, and it is difficult to achieve good bonding between the hydrophilic ceramic and the hydrophobic polymer. This paper describes a two-step technology for obtaining a ceramic matrix composite. The first step is the preparation of composite granules. The granules are obtained by infiltration of porous granules of HAP NPs with PLA through high-pressure infiltration. The homogeneous ceramic-polymer granules are 80 μm in diameter, and the composite granules are 80 wt% HAP NPs. The second step is consolidation of the granules using high pressure. This is performed in three variants: Uniaxial pressing with the pressure of up to 1000 MPa at room temperature, warm isostatic compaction (75 MPa at 155 °C), and a combination of the two methods. The combined methods result in the highest densification (99%) and strongest mechanical properties; the compressive strength is 374 MPa. The structure of the ceramic matrix composite is homogeneous. Good adhesion between the inorganic and the organic component is observable using scanning electron microscopy.
Introduction
The orthopedic market experiences a continuous interest in bioresorbable materials, such as phosphates (based on calcium phosphate and calcium carbonate) and polymers. The former group includes hydroxyapatite (HAP) and beta-tricalcium phosphate (β-TCP), which are biocompatible materials whose chemical composition is very similar to the natural apatite found in bone tissue. The calcium phosphate ceramic can improve the bioactivity, osteoconductivity, and resorbability of composite biomaterials [1][2][3]. The market is interested in bioresorbable materials with high mechanical strength, and many studies have been conducted that seek to increase their mechanical properties. The calcium phosphate ceramic is used for cements, coatings, three-dimensional (3D) printed scaffolds and drug delivery. Due to its brittleness, it cannot be used for implants that must withstand high loads. Additional reinforcement comes from the use of stoichiometric HAP NPs. The nanostructure is preserved during processing.
Production of a material for a high-strength and bioresorbable implant is a complex problem. The composition of the composite is essential, but attention should also be drawn to its microstructure [24][25][26]. There are known methods for the preparation of composites with a high proportion of micrometric ceramics in the polymer matrix. Composites with a high ceramic phase content can be obtained by the infiltration of a ceramic matrix by a polymer, the mechanical grinding of components, or chemical methods (polymer dissolution and addition of ceramics) and extrusion [32][33][34][35][36][37][38]. Composite-forming methods can be axial or isostatic pressing. Russias et al. tested various proportions of a composite of polylactide and micrometric HAP particles. They proved that composites with the content of 70-80% of microHAP were characterized by mechanical properties corresponding to the human cortical bone. They also noticed that composites with such a composition were less homogeneous than those with less than 50% of HAP particles, i.e., more polymer. The SEM images presented in that publication disclose large HAP agglomerates. Russias et al. stated that the low polymer content in the composite and its non-uniform distribution did not achieve a high densification of the samples (for 80% of HAP they achieved the densification of 86%), which translated into unfavorable mechanical properties and degradation of the material [27]. It is a challenge to obtain a homogeneous material with good adhesion between the polymer and the ceramic. Such microstructures are a prerequisite for good mechanical properties. The greater the amount of ceramics, the more difficult the task. Gay et al. [38] describe the preparation of a composite of HAP NPs and PLLA by HAP NPs deagglomeration using wet attrition milling and further dispersion in chloroform. This procedure resulted in a homogeneous composite with an HAP NPs content of 25-50 wt%, which is characterized by a density of up to 97% and a maximum compressive strength of 100 MPa. Wolff et al. [24] describes obtaining a composite with 78 vol% of a ceramic using the spray granulation technology and uniaxial warm pressing. The obtained porosity is below 2 vol%. A previous article by Pietrzykowska et al. described methods of obtaining and forming the bioresorbable composite. The presented methods led to a composite with a preserved nanostructure and the compressive strength of 110 MPa. The presented work also showed problems with agglomeration of particles and their formation due to the amount of surface water in the nanopowder [39].
Moreover, composite filaments are created from phosphates and thermoplastic polymers. In their review paper, Fallon et al. underline the problem related to the increase in the quantity of the filler (i.e., ceramic particles) in thermosetting polymers per filament for 3D printing. They report that in line with the increase in the filler content, usually the decrease in homogeneity and dispersion of the composite, as well as the increase in viscosity are observed. This ultimately has an adverse impact on processability, the quality and the mechanical properties of the printed sample [40]. Dubinenko et al. describe a composite for 3D printing with the maximum HAP content of 50 wt%. The composite was obtained by dissolving PLA in chloroform, mixing with HAP in a ball mill, and subsequently after drying, it was processed in an extruder to obtain a filament for 3D printing. It was shown that the composite filament was characterized by a homogeneous structure. For 50%, a substantial increase of E to 8000 MPa was achieved, while pure PLLA has 2468 MPa [41].
Despite earlier studies on bioresorbable materials, there has been little research on composites with a bioactive ceramic matrix, combined with biodegradable polymers.
In this research, we focus on creating a composite characterized by a high degree of homogeneity, which can be formed from bioresorbable components. The base component of the composite is HAP NPs, which provides biological properties and strength (osteoinduction, Young's modulus, and compressive strength) [29,42]. A polymer providing the implant with resistance to brittle cracking is the other component. Such a material could be used, for instance, for bone grafts. The application of such a biodegradable material in bone damage treatment reduces the risk of complications and the costs of treatment. The developed composite is characterized by a high homogeneity of the structure. A ceramic additive increases the strength of the thermoplastic polymer-polylactide (PLA) [43][44][45][46][47][48].
Materials Preparation
In this study, HAP NPs were prepared by the wet chemical precipitation method from calcium oxide (Fluka, Munich, Germany), orthophosphoric acid (Sigma-Aldrich, Steinheim, Germany) solution, and deionized water. The HAP NPs granules were obtained by the spray drying method using a spray dryer (Mini Spray Dryer B-290, Buchi, Flawil, Switzerland) with a fitted nozzle with the diameter of 1.4 mm. The inlet and outlet temperatures of the nozzle were adjusted to 220 °C and 96 °C, respectively. Aqueous dispersions of HAP NPs, with the HAP NPs concentration of 10 wt%, were prepared using the homogenizing device. The specific surface area of HAP NPs was 77 m²/g and the density was 2.83 g/cm³. The Ca/P ratio was 1.73. The HAP average particle size, calculated from the specific surface area and skeletal density, was 28 nm, and the average crystallite size was 29 nm. The average size of HAP NPs was equal to the average crystallite size, which means that the HAP NPs were built of single crystals. Figure 1 shows the results of the powder X-ray diffraction (XRD) measurements of the HAP NPs.
PLA is a biodegradable and biocompatible synthetic polymer with the tensile yield strength of 62 MPa, tensile elongation of 3.5%, density of 1.24 g/cm 3 , and relative viscosity of 3.3. The polylactide used in this research is a commercial material with the trade name Ingeo biopolymer 3052D, produced by NatureWorks LLC, Minnetonka, MN, USA. The material takes the form of transparent granules, which are sized approximately 0.5 mm.
Materials Preparation
Composite granules (PLA/HAP NPs) of HAP NPs and PLA with a high content (up to 80 wt%) of calcium phosphate phase were prepared by a solvent evaporation method. First, the determined amounts of PLA were dissolved in dichloromethane to achieve a composite with the total organic fraction of 20 wt%. Next, HAP NPs granules were suspended in the polymer solution and subjected to magnetic stirring for two hours at room temperature. Subsequently, we conducted a high-pressure impregnation of PLA solution in HAP NPs granules. The suspensions were cast into Petri dishes. The dishes were left at room temperature for 12 h in order to allow full evaporation of the dichloromethane. A powder of the composite granules was prepared using ball-milling (FRITSCH Pulverisette 5, Weimar, Germany) and cryo-milling (IKA ® A11 basic analytical mill, Staufen, Germany).
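As a small illustration of the formulation arithmetic behind this 80:20 target composition, the sketch below computes the PLA mass required for a given mass of HAP NPs granules; the 10 g batch size is a hypothetical example, not a value taken from this work.

# Formulation helper for a HAP NPs/PLA composite with a target polymer
# weight fraction. The 10 g batch below is a hypothetical example.
def pla_mass_for_target(hap_mass_g, pla_wt_frac=0.20):
    """Mass of PLA (g) so that PLA makes up pla_wt_frac of the composite."""
    return hap_mass_g * pla_wt_frac / (1.0 - pla_wt_frac)

hap = 10.0                      # g of HAP NPs granules (example value)
pla = pla_mass_for_target(hap)  # 2.5 g of PLA gives an 80:20 composite
print(f"{pla:.2f} g PLA per {hap:.1f} g HAP -> {pla / (pla + hap):.0%} PLA by weight")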
Variant 1 involved cold pressing at a pressure of up to 1 GPa. The method involved filling a steel mold with the above-described granules and applying axial pressure at room temperature. Samples were kept under pressure for 10 s. The compaction density was investigated as a function of pressure. Five samples at 2 g each were pressed. A cuboid was formed with the dimensions of 4 × 4 × 35 mm. The mechanical strength was measured for three samples produced at a given pressure.
The second variant was warm isostatic pressing, with the consolidation temperature of up to 200 °C and the pressure of 75 MPa. Five samples at 2 g each of composite granules were placed in an elastic mold with the diameter of 8 mm and the height of 15 mm. These were pressed using isostatic pressure at 165 °C. The pressure vessel chamber was filled with methyl silicone oil. The optimum temperature was selected experimentally.
Variant 3, which involved combining the above methods in two stages, used the following two conditions: first, axial pressing at 1 GPa; then, warm isostatic consolidation at the temperature of 165 °C and 75 MPa. The mechanical properties and densification were investigated.
Materials Characterization
The density measurements were performed with a helium pycnometer (model AccuPyc 1330, Micromeritics, Norcross, GA, USA) using an in-house procedure.
The specific surface area (SSA) of the powders was measured by the Brunauer-Emmett-Teller (BET) method (model Gemini 2360, V 2.01, Micromeritics, Norcross, GA, USA). The average diameter of the particles was calculated based on the specific surface area and density, assuming that all of the particles were spherical and identical [49].
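The calculation just described reduces to the equivalent-sphere relation d = 6/(ρ·SSA). A minimal sketch, using the SSA and skeletal density reported for the HAP NPs in this study:

# Equivalent BET diameter of identical dense spheres: d = 6 / (rho * SSA).
def bet_diameter_nm(ssa_m2_per_g, density_g_per_cm3):
    ssa_m2_per_kg = ssa_m2_per_g * 1e3            # m^2/kg
    rho_kg_per_m3 = density_g_per_cm3 * 1e3       # kg/m^3
    return 6.0 / (rho_kg_per_m3 * ssa_m2_per_kg) * 1e9  # nanometres

# SSA = 77 m^2/g and density = 2.83 g/cm^3, as reported for the HAP NPs:
print(f"{bet_diameter_nm(77.0, 2.83):.1f} nm")  # ~27.5 nm, i.e. the reported 28 nm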
Chemical composition of the powders: The chemical composition analysis of the powders was examined by inductively coupled plasma optical emission spectrometry (ICP-OES) with induction in argon plasma (Thermo Scientific, iCAP 6000 series, Cambridge, United Kingdom). The samples analyzed using ICP-OES were prepared as follows: 5 mg of powder was weighed in a 110 mL Teflon ® vessel, and 15 mL of deionized water (HLP 20 UV, Hydrolab, Straszyn, Poland) was added. Then, 6 mL of HNO 3 was added, and the solution was subjected to one microwave heating cycle in the microwave reactor (Magnum II, Ertec, Wroclaw, Poland). After cooling, the sample volume was replenished to 50 mL with deionized water.
The densification of the material was investigated as a function of pressure. The densification and porosity of the consolidated material were checked at a pressure between 100 MPa and 1000 MPa. The compressions of HAP NPs and PLA/HAP NPs composite as a function of pressure were compared.
The thermogravimetry analysis (TG) was carried out using an STA 449 F1 Jupiter by Netzsch (Selb, Germany). The analysis was performed with a heating rate of 10 °C/min; the top temperature was 200 °C.
The phase composition of the reaction products was analyzed by powder X-ray diffraction (Panalytical X'Pert PRO diffractometer, Cu Kα1, Panalytical, Almelo, The Netherlands). The patterns were collected at room temperature in the two-theta range of 10–100° with a step of 0.03°. The crystallite size was determined using the Scherrer equation.
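As an illustration of the Scherrer estimate mentioned above, here is a minimal sketch. The wavelength is that of Cu Kα1; the shape factor, peak position, and peak width are illustrative placeholders rather than values reported in this paper (the chosen width happens to reproduce a crystallite size near the reported 29 nm):

import math

# Scherrer equation: D = K * lambda / (beta * cos(theta)), with beta the
# peak width (FWHM) in radians and theta the Bragg angle.
def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical 0.30 deg-wide peak at 2-theta = 31.8 deg (the strong HAP (211) line):
print(f"{scherrer_size_nm(0.30, 31.8):.0f} nm")  # ~28 nm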
SEM: The materials' structure was examined by scanning electron microscopy (SEM) using the Ultra Plus microscope (ZEISS, Oberkochen, Germany).
Uniaxial compression tests were carried out at room temperature using an MTS 858 (Eden Prairie, MN, USA) dynamic testing machine equipped with a ±15 kN transducer. The tests were conducted under the displacement control mode. The crosshead velocity was 20 µm/min. Strain was measured based on crosshead displacement. Samples with the length of 15 mm and the diameter of 8 mm were used for each test. Based on the load displacement data, the yield stress (YS) and ultimate compressive strength (UCS) were estimated.
Three-point bending tests were carried out at room temperature using an MTS QTest/10 (Eden Prairie, MN, USA) universal testing machine equipped with a ±15 kN transducer. The tests were conducted under the displacement control mode. The crosshead velocity was 0.5 mm/min. Samples with the length of 20 mm and the cross-section of 4 mm × 4 mm were used for each test. The span was 12 mm. Based on the stress-strain curve, the ultimate bending strength was calculated.
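The strengths reported from these tests follow from the standard stress formulas sketched below; the sample dimensions are those stated in the text, while the failure loads used in the examples are illustrative placeholders, not measured values:

import math

# Uniaxial compression: sigma = F / A for a cylindrical sample.
def compressive_stress_mpa(force_n, diameter_mm=8.0):
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return force_n / area_mm2                 # N/mm^2 == MPa

# Three-point bending: sigma = 3 F L / (2 b d^2) for a rectangular bar.
def bending_strength_mpa(force_n, span_mm=12.0, width_mm=4.0, height_mm=4.0):
    return 3.0 * force_n * span_mm / (2.0 * width_mm * height_mm ** 2)

print(f"{compressive_stress_mpa(10000.0):.0f} MPa")  # a 10 kN load -> ~199 MPa
print(f"{bending_strength_mpa(100.0):.1f} MPa")      # a 100 N load -> ~28.1 MPa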
The Vickers hardness measurement was performed using a hardness tester with a 100 g load. Microcomputed tomography (micro-CT) was performed using a SkyScan 1172 (Bruker, Kontich, Belgium). The X-ray source settings were 400 kV and 10 W, with a 5 µm pixel size.
Results and Discussion
The average size of granules obtained using the spray drying and infiltration technology was 80 µm and their density was 2.26 g/cm³. The ratio of HAP NPs to PLA, as measured by TG, was 20% PLA and 80% HAP NPs by weight. The phase composition of the composite granules was investigated by XRD, which showed that the HAP NPs structure was preserved (Figure 1). The morphology of the granule samples was examined by SEM. The granules were round and homogeneous. PLA was well dispersed, and the boundary between the particle and the polymer was invisible (large areas of PLA were not observed). Figure 3a shows the size of the composite granules, and Figure 3b,c shows the spherical structure, density, nanoporosity, and size of HAP NPs. TG studies confirmed the proportion of HAP NPs and PLA as 80:20 by weight. The melting point of PLA was 175 °C. Nanoparticles tend to form agglomerates, which is the main obstacle in the formation of a homogeneous composite [50][51][52][53][54]. In our case, the obtained granulate is a controlled agglomerate of HAP NPs infiltrated with PLA. The results of the forming using axial pressures of up to 1000 MPa (at room temperature) are as follows. Figure 4 shows a densification graph of PLA/HAP NPs composite granules. The materials were compared with pure HAP NPs as a function of pressure. The densification increased in line with the forming pressure. We see the fastest density increase in the range of up to 200 MPa. After that, the density increased slowly. Between 800 MPa and 1000 MPa, the densification was only a few percent. The densification of composite granules was higher than that for pure HAP NPs. For HAP NPs, we obtained a maximum densification of 82%, whereas for the composite granules it was 90%. An even better degree of densification was achieved for the composite granulate because round granules filled the steel mold more precisely, which is proven by the rapid increase in the densification (Figure 4). We are convinced that PLA in the composite fulfilled the role of a lubricant for HAP NPs, which ultimately enabled the achievement of a higher degree of densification. These results are similar to those achieved by Rakovsky et al. [34].
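The measured granule density can be cross-checked against the inverse rule of mixtures using the component densities reported above; a minimal sketch (the bulk density in the last line is a hypothetical value, used only to show how relative densification follows):

# Inverse rule of mixtures for the theoretical composite density,
# using the reported weight fractions and component densities.
def mixture_density(w_hap=0.80, rho_hap=2.83, w_pla=0.20, rho_pla=1.24):
    return 1.0 / (w_hap / rho_hap + w_pla / rho_pla)   # g/cm^3

rho_theory = mixture_density()
print(f"theoretical density: {rho_theory:.2f} g/cm^3")  # ~2.25, near the measured 2.26

# Relative densification of a pressed body (the bulk density is hypothetical):
print(f"densification: {2.03 / rho_theory:.0%}")        # ~90%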
The results of compaction using the warm isostatic technology are as follows. The optimal compaction temperature for the composite of 80% HAP NPs and 20% PLA was 155 °C at a pressure of 75 MPa. The results of the warm isostatic technology at a temperature below the melting point (glass transformation: 60 °C at normal pressure) were checked. We managed to obtain solid moldings with a densification of 30% and a compressive strength of 100 MPa. When comparing this variant of forming with variant I, we noticed that, in order to achieve the densification of 70% in variant I (i.e., without temperature), the necessary pressure required is 300 MPa. In variant II, the applied temperature of 155 °C is the melting point of PLA, which contributes, together with pressure, to the viscosity of the polymer, thereby increasing the slip in the composite. Due to the high dispersion of the polymer and transition to the plastic state of PLA, the densification of 70% is achieved at 75 MPa and 165 °C.

After comparing the results of both variants, being axial pressure pressing and warm isostatic pressing, we selected and applied the best parameters for forming the composites in the third variant. The characteristic properties of compaction, compressive strength, and bending were compared. First, dense composite samples were obtained by axial pressing at 1000 MPa at room temperature. Then, the filled mold was placed in a steel chamber for isostatic pressing. The isostatic pressing process was carried out at the temperature of 155 °C and pressure of 75 MPa. The samples were kept in these conditions for 12 minutes. Then, they were removed from the vessel and cooled in air. The densification difference was observed after high-pressure densification and warm isostatic consolidation at 155 °C. During this experiment, the pressure was optimized for maximum densification of the composite material. The green body of the composites was condensed up to 85%.
The densification, porosity, and mechanical properties were investigated. The results are shown in Table 1 and Figure 5. The highest densification displayed by the samples that were pressed using the two steps was 99%, whereas that displayed by the cold-pressed samples was only 80% (Figure 6). In Figure 5, we observed the increase in mechanical properties along with the rise in the degree of compaction. The best results were obtained for the third variant (the two-stage forming), in which the cold-pressed material at 1000 MPa was then compressed at 165 °C at a lower pressure. This process was carried out at the softening temperature of the polymer, which allowed the polymer to combine, and resulted in an increase in bending strength. The two-stage pressing was based on the maximum ceramic densification in the first step and on infiltration of the polymer in the second step. The combination method caused the removal of material stresses. It is not simple to remove the porosity when the densification degree is above 90% for ceramic nanocomposites [27]. The combination of two pressing methods permitted the application of variant I of the granulate, which improved mold bulk density and filling. In variant II, the polymer was used as a lubricant and adhesive.
Structural investigations showed a very homogeneous structure of the composites. The fracture surfaces of the samples after the bending tests showed that the structure of the composite was very homogeneous and brittle (Figure 6). Gay et al. [38] described that "At higher mineral contents, the rupture mechanism of the composite changes from ductile to brittle". High-pressure pressing led to a high compaction of the composites. We assumed that the PLA polymer present in the composite granules acted as a lubricant during pressing. In the course of warm isostatic pressing, which took place at the softening temperature of the polymer, both compaction and infiltration occurred. We observed that the polymer in the ceramic matrix at the softening temperature combined. The effect was to increase the compressive and bending strength of the samples.
The micro-computed tomography studies of the composition and porosity of the composite formed by the third variant showed the porosity of up to 1 vol%. The PLA volume fraction was 21%. A three-dimensional reconstruction showed that the structure was homogeneous. Figure 7a shows a top view of the sample: Violet color pores are disclosed against a yellow background, their share is 1% of the volume. The micro tomography tests did not disclose a two-phase nature of the obtained composite. At the resolution level of 5 μm of the device, the material is homogeneous. Figure 7b is a side view and consequently it shows a distribution of porosity and homogeneity of the material in the cross-section. The mechanical properties of the composites were compared with the mechanical properties of polylactide and the sintered ceramic body with HAP NPs from the literature. The highest compressive strength for a sintered HAP NPs composite was 208.6 MPa [53]. Dan Wu et al. obtained compressive strength below 80 MPa for PLA and its composites [54]. The result is unique in terms of the achieved high homogeneity of HAP NPs and of the polymer in the obtained granulate. Ultimately, a solid sample, with a hybrid structure and good mechanical properties, was obtained.
As the bending strength did not change, we did not observe changes related to the blocking of cracks as we did with a decrease in porosity. As widely known, the decrease in porosity and internal stresses increase the strength. We monitored the hardness of the formed composites. The coldpressed samples had the highest repeatability of results. The warm-pressed samples, probably The mechanical properties of the composites were compared with the mechanical properties of polylactide and the sintered ceramic body with HAP NPs from the literature. The highest compressive strength for a sintered HAP NPs composite was 208.6 MPa [53]. Dan Wu et al. obtained compressive strength below 80 MPa for PLA and its composites [54]. The result is unique in terms of the achieved high homogeneity of HAP NPs and of the polymer in the obtained granulate. Ultimately, a solid sample, with a hybrid structure and good mechanical properties, was obtained.
As the bending strength did not change, we did not observe changes related to the blocking of cracks, as we did with a decrease in porosity. As is widely known, decreases in porosity and internal stresses increase the strength. We monitored the hardness of the formed composites. The cold-pressed samples had the highest repeatability of results. The warm-pressed samples, probably because of porosity, had the smallest hardness HV1. The mixed method allowed us to obtain materials with a good repeatability of results, and HV1 was 52 ± 5.
In summary, the two-step method allowed us to achieve very high densification and mechanical properties. This was possible because a tight bonding between PLA and HAP NPs was obtained within the pores of the infiltrated granules, and because the granules presented excellent properties for the isostatic pressing technology.
A highly dense and homogeneous composite was formed as a result of the use of pressing technologies. The results are promising because a composite, characterized by a high dispersion of nanoparticles, was obtained, at the same time preserving their phase structure. The next step should include a degradation test under in vitro conditions. The composites obtained by our method have stronger mechanical properties than similar materials previously discussed in the literature. In the literature, Wolff obtained a composite with 78 vol% of a ceramic, using the spray granulation technology and uniaxial warm pressing. The obtained porosity was below 2 vol% and the mechanical strength was 100 MPa [39]. These results prove a novelty: the achievement of a highly homogeneous material with a high quantity of ceramic particles. The high degree of homogeneity of the composite granules contributed substantially to the compressibility of the material and its final properties.
Conclusions
The results of this paper respond to the insufficient mechanical strength of the bioresorbable composites obtained so far, consisting of hydroxyapatite nanoparticles in a polymer matrix (e.g., polylactic acid (PLA)), for bone substitution in bone regeneration. The aim and novelty of the research was to obtain a homogeneous hybrid composite, characterized by a high degree of densification, with a HAP NPs content of up to 80 wt% and without agglomerates of NPs.
We have developed a two-step method for a bioresorbable composite with a very high content of ceramic in a polymer matrix. The composite has potential for orthopedic applications, as it achieved a compressive strength of 375 MPa, a Young's modulus of 7 GPa, and a densification of 99%. The first step was to prepare composite granules from porous HAP NPs granules and PLA by high-pressure infiltration. The second step was the consolidation of the composite granules.
Three consolidation variants for the forming were tested. The best results were obtained by compaction in the third variant: uniaxial cold pressing followed by isostatic pressing at 155 °C. We successfully formed a composite consisting of 80 wt% HAP NPs with a bioresorbable polymer (PLA) as the balance. The composite had a homogeneous structure, and we observed good adhesion between the polymer and the ceramic. The obtained composite had an average HAP NPs particle size of 28 nm. This approach permitted us to consolidate the composite without nanoparticle growth or degradation of the polymer.
The technology can be used to form a thermosetting composite with a high content of a ceramic phase at a relatively low temperature of up to 200 °C. The manufactured composites are promising materials for applications such as implants in bone regeneration. | 2020-06-04T09:04:47.773Z | 2020-05-30T00:00:00.000 | {
"year": 2020,
"sha1": "268431b9cba1aa9ff9c65bd368004eb182c6cf08",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/10/6/1060/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a89d6190a757f3e3778cf0850c640b408b75ee7",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
19652619 | pes2o/s2orc | v3-fos-license | Relation between Lineal Energy Distribution and Relative Biological Effectiveness for Photon Beams according to the Microdosimetric Kinetic Model
Keywords: RBE/Photon/Microdosimetry/Proportional counter/Monte Carlo simulation. Our cell survival data showed a clear dependence of RBE on photon energy: the RBE value for 200 kV X-rays was approximately 10% greater than those for mega-voltage photon beams. In radiation therapy using mega-voltage photon beams, the photon energy distribution outside the field differs from that in the radiation field because of the large number of low-energy scattered photons. Hence, the RBE values outside the field become greater. To evaluate this increase in RBE, a method of deriving the RBE using the Microdosimetric Kinetic model (MK model) is proposed in this study. The MK model has two kinds of parameters: tissue-specific parameters and the dose-mean lineal energy derived from the lineal energy distributions measured with a Tissue-Equivalent Proportional Counter (TEPC). The lineal energy distributions for 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays, in the same geometries as the cell irradiations, were obtained with the TEPC and the Monte Carlo code GEANT4. The measured lineal energy distribution for 200 kV X-rays was quite different from those for mega-voltage photon beams. The dose-mean lineal energy of 200 kV X-rays showed the greatest value, 4.51 keV/μm, compared with 2.34 and 2.36 keV/μm for 60Co γ-rays and 6 MV X-rays, respectively. By using the results of the TEPC measurements and the cell irradiations, the tissue-specific parameters in the MK model were determined. As a result, the RBE of photon beams (yD: 2~5 keV/μm) under arbitrary conditions can be derived from measurements or calculations of the dose-mean lineal energy alone.
INTRODUCTION
In radiotherapy using photon beams, the Relative Biological Effectiveness (RBE) has conventionally been regarded as 1.0 over the entire energy range. 1) Only the physical dose has been used to design treatment plans in conventional photon radiotherapy. However, the photon energy distribution outside the field differs from that in the field because of the large number of low-energy scattered photons. 2) Consequently, the RBE outside the field becomes greater. 3) In microdosimetry, the lineal energy distribution is related to the RBE and describes the distribution of energy deposited by ionizing radiation in a microscopic region, 4,5) which causes lethal DNA lesions resulting in radiation-induced cell death. Thus, lineal energy is an important quantity for the evaluation of radiation quality, 6-9) and has been measured with a Tissue-Equivalent Proportional Counter (TEPC) and calculated with Monte Carlo simulations. 10-13) The Microdosimetric Kinetic model (MK model) is a biophysical model of cell survival after irradiation. 6,7) In the MK model, the increase in RBE can be explained as an increase in the energy deposited in a microdosimetric site, defined as a microscopic subunit referred to as a "domain". Furthermore, it is assumed that the mean number of lethal lesions in a domain can be described by a linear-quadratic function of the specific energy. The survival fraction of tumor cells irradiated with low-LET radiation was derived mathematically by considering the distribution of lethal lesions according to Poisson statistics. Moreover, this model requires the determination of the dose-mean lineal energy and the tissue-specific parameters to calculate RBE values. In this study, the dose-mean lineal energies for 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays were derived from lineal energy distributions measured with a TEPC. Furthermore, the measured lineal energy distributions were compared with calculations using the Monte Carlo code GEANT4. 14) The tissue-specific parameters were determined from the experimental cell survival fractions after irradiation with 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays.
We assume that the biological response of Human Salivary Gland (HSG) tumor cells can be expressed according to the MK model. Our purpose is to propose a method of deriving the RBE for photon beams from a physical approach instead of performing cell irradiations. By using the obtained tissue-specific parameters, the RBE of photon beams (yD: 2~5 keV/μm) under arbitrary conditions can be derived from measurements or calculations of the lineal energy distributions alone.
Derivation of RBE according to the MK model
According to the original formulation, 6,7) a domain is spherical in shape and a cell nucleus is assumed to be filled with domains, although this cannot be physically realized. The purpose of the domains is to define a restricted region within the cell nucleus, because a lethal lesion is produced by a pair of sub-lethal lesions created close to each other. The MK model assumes that the mean number of lethal lesions L in a domain can be described by a linear-quadratic function of the specific energy z:

L = Az + Bz²     (1)

The average number of lethal lesions Ln in a cell nucleus and the survival fraction S can then be described using expected values (angle brackets), assuming a Poisson distribution for the number of hits (ion pairs) delivered to a domain by charged particles:

−ln S = Ln = N⟨L⟩ = [α0 + (β/(ρπrd²)) yD] D + βD²     (2)

where N, ⟨L⟩, and yD denote the number of domains in a cell nucleus, the average number of lethal lesions in a domain, and the single-event dose-mean lineal energy, 4,5) respectively. The other parameters, rd and ρ, represent the radius and the density of a domain, respectively. The tissue-specific parameters rd, α0, and β were obtained from the experimental cell survival fractions after irradiation with 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays in this study; that is, they were determined as free parameters fitted to the experiments. The α0 and β have replaced NA and NB, respectively, so the determination of N is not necessary in the process. The domain density ρ is assumed to be 1.0 g/cm³. Note that only α depends on radiation quality, whereas β is tissue-specific and constant irrespective of radiation quality. By using the lineal energy distribution obtained from a TEPC or a GEANT4 simulation, the lineal energy and the dose-mean lineal energy yD can be calculated as

y = ε/l     (3)

yD = ∫ y d(y) dy = ∫ y² f(y) dy / ∫ y f(y) dy     (4)

where ε, l, y, f(y), and d(y) denote the energy deposited in a domain, the mean chord length, the lineal energy, the probability density of the lineal energy, and the dose distribution of the lineal energy, respectively. The mean chord length l of a sphere can be expressed as 2/3 × diameter (2/3 × 1 μm water-equivalent), assuming uniform irradiation from all directions. 4) In the calculation of the RBE, we defined 200 kV X-rays as the reference radiation,

RBE = D200kV/D = [√(α200kV² + 4βS*) − α200kV] / [√(α² + 4βS*) − α]     (5)

where S* is the negative of the natural log of the survival fraction S, and D and D200kV are the doses at which the test beam and 200 kV X-rays, respectively, yield the survival fraction S. The RBE value was calculated at the 10% survival level, i.e., S* = −ln(0.1). In the MK model, β200kV is equal to β.
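As a numerical illustration of the RBE derivation just described, the sketch below evaluates Eqs. 2 and 5 in Python, plugging in the tissue-specific parameters reported later in this paper (α0 = 0.0885 Gy−1, β = 0.036 Gy−2, slope β/(ρπrd²) ≈ 0.0338); the function names and the example yD value are ours and serve illustration only.

```python
import math

ALPHA0 = 0.0885   # Gy^-1, tissue-specific intercept (from the paper's fit)
BETA = 0.036      # Gy^-2, tissue-specific beta (from the paper's fit)
SLOPE = 0.0338    # Gy^-1 per (keV/um), i.e. beta / (rho * pi * rd^2)

def alpha(y_d: float) -> float:
    """Linear-quadratic alpha as a linear function of dose-mean lineal energy (Eq. 2)."""
    return ALPHA0 + SLOPE * y_d

def dose_at_survival(y_d: float, survival: float = 0.1) -> float:
    """Dose giving the requested survival from -ln S = alpha*D + beta*D^2."""
    s_star = -math.log(survival)
    a = alpha(y_d)
    return (math.sqrt(a * a + 4.0 * BETA * s_star) - a) / (2.0 * BETA)

def rbe(y_d: float, y_d_ref: float = 4.51) -> float:
    """RBE at 10% survival with 200 kV X-rays (yD = 4.51 keV/um) as reference (Eq. 5)."""
    return dose_at_survival(y_d_ref) / dose_at_survival(y_d)

print(rbe(2.34))  # 60Co gamma-rays: ~0.89, i.e. ~10% lower RBE than 200 kV X-rays
```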
Measurements of lineal energy distributions with a TEPC
A TEPC (LET-1/2, Far West Technology Inc.) was used for measurements of lineal energy distributions at the microdosimetric scale for a 200 kV X-ray diagnostic apparatus (MG226/4.5, YXLON; half-value layer with 0.5 mm Cu and 0.5 mm Al filters = 11.9 mmAl), a 60Co irradiation unit (custom-made), and a 6 MV clinical accelerator (Varian 21EX, Varian Medical Systems). The geometrical setup of the TEPC and other conditions were similar to those of the cell survival experiments shown in Fig. 1. The sensitive volume of the TEPC is a sphere of 1.27 cm in diameter filled with a tissue-equivalent gas consisting of C3H8 (55.0%), CO2 (39.6%), and N2 (5.4%) at a low pressure of approximately 33 Torr (1 μm water-equivalent size). A positive voltage of 640 V was applied to the anode wire of the TEPC, and the signals were sent to a pre-amplifier (142PC, ORTEC) and a main amplifier (671, ORTEC). Energy calibration was performed with α particles from a 244Cm source. To prevent distortion of the spectra by pile-up, the TEPC should be used at dose rates of up to a few tens of μGy/min. However, the lowest nominal dose rate of the 6 MV clinical accelerator is 1 Gy/min. Various techniques to reduce the dose rate of clinical accelerators down to a few tens of μGy/min were reported by Amols and Zellmer. 10,11) In this study, the gun grid voltage was decreased. In adjusting the gun grid voltage, a 600 ml ionization chamber (C-110, Oyogiken) was used to monitor the dose rate before the TEPC measurements. Consequently, the dose rate could be successfully reduced to approximately 30 μGy/min.
Monte Carlo simulation with GEANT4
Monte Carlo simulation is regarded as the most accurate method for calculating not only dose distributions but also microdosimetric quantities such as lineal energy distributions. 15,16) In order to calculate the lineal energy distributions, simulations using the code GEANT4 (version 4.8.2.p01) were performed on a 16-CPU Linux cluster for the 200 kV X-ray diagnostic apparatus, the 60Co irradiation unit, and the 6 MV clinical accelerator. GEANT4 provides a number of user-selectable physics lists for the calculations. In this study, the "ElectroMagnetic (EM) standard physics" list with a cut value of 0.01 μm was used to calculate the lineal energy distributions. The geometrical setup and other conditions of the simulations were the same as in the cell survival experiments (Fig. 1).
For the 200 kV X-ray diagnostic apparatus, the energy spectrum behind the tungsten target can be approximated using Birch's formula (Eq. 6), 17) where NA, ρ, A, T, Q, C, θ, and μν are Avogadro's number, the density and atomic weight of the tungsten target, the electron energy, the X-ray energy intensity per unit energy interval per incident electron flux per atom, the Thomson-Whiddington constant, the target angle, and the attenuation coefficient, respectively. By using this formula in the GEANT4 simulation, the photon energy spectrum of the 200 kV X-ray diagnostic apparatus is easily determined. In addition, the filters and collimator were modelled in the GEANT4 simulation: a Cu and an Al filter of 0.5 mm thickness were placed behind the tungsten target, and the field size was 30 cm in diameter at a source-to-surface distance of 57 cm.
The 60Co irradiation unit was modelled with a field of 33 cm diameter at a distance of 80 cm from the 60Co source, which emits γ-rays of 1.17 and 1.33 MeV.
For the 6 MV clinical accelerator, the initial electron beam parameters, i.e., the mean energy, the radial intensity distribution, and the spread of the mean energy, were the same as those proposed by Sheikh-Bagheri. 18) In order to verify the validity of these electron beam parameters for our medical linear accelerator, the calculated data were compared with measured data for the depth-dose curve of a 10 × 10 cm² field and the dose profile of a 40 × 40 cm² field. The dose distributions from the GEANT4 simulation agreed with the measured dose distributions with average differences of 1.0% (1 SD of 1.6%) and 0.9% (1 SD of 1.9%) for the depth dose and the dose profile, respectively. By using these electron beam parameters, the position, kinetic energy, and charge of all particles created in the treatment head were stored as a phase-space file above the field-defining jaws. The stored phase-space file was used repeatedly to calculate the lineal energy distributions for different conditions. This technique can effectively reduce the calculation time. 19)

Fig. 1. Schematic of the irradiation geometry for 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays. The depths from the phantom surface to the HSG tumor cells were 1, 6, and 100 mm water-equivalent depth, respectively.
Cell culture and irradiations
Human Salivary Gland tumor cells (HSG, JCRB1070: HSGc-C5) were used for the measurement of the survival curves in this study. 20,21) The HSG line is a standard reference cell line for the inter-comparison of RBE among proton facilities in Japan, Korea, etc. 22,23) Eagle's minimum essential medium (M4655, Sigma) supplemented with 10% fetal bovine serum and antibiotics (100 U/ml penicillin and 100 μg/ml streptomycin) was used for cell culture. Harvested cells were seeded in T25 flasks at about 2.0 × 10⁵ cells/flask with 5 ml of the medium and incubated in a 5% CO2 incubator at 37°C for 2 days prior to irradiation. The differences in biological response for different radiation qualities were investigated in this study. Hence, to mask the oxygen enhancement ratio (OER) effect, the HSG cells were cultured as a monolayer (hypoxic fraction of 0%), although this is not the same as the condition of in-situ tumor cells. The flasks were filled with additional medium one day before the experiment and then returned to the incubator. The irradiated cells were rinsed twice with PBS−, soaked once with 0.05% trypsin with 0.02% EDTA, and kept at 37°C with a small amount of remaining trypsin for 4 minutes to harvest the cells. The cells were collected with 5 ml of fresh medium. The concentration of cells in suspension was measured with a particle analyzer (Coulter Z1). The suspensions were diluted with medium, seeded in three 6 cm culture dishes (Falcon 3002) at densities expected to yield approximately 100 surviving cells per dish, and incubated for 13 days. The dishes were rinsed with PBS−, fixed with a 10% formalin solution in PBS− for 10 minutes, rinsed with tap water, stained with a 1% methylene blue solution for 10 minutes, rinsed again with tap water, and dried in air. Colonies consisting of more than 50 cells were counted under a stereomicroscope as viable cells. Considering the dose-rate dependence of cell inactivation, the dose rate at the cell position was fixed at 0.8 Gy/min for all cases by adjusting the distance from the source to the cells. The depths from the phantom surface to the cells for 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays were 1, 6, and 100 mm water-equivalent depth, respectively (Fig. 1). The irradiation doses were measured with a thimble chamber according to the TRS 277 protocol 24) (in-air method) for 200 kV X-rays and the Japanese standard dosimetry protocol 01 25) for 60Co γ-rays and 6 MV X-rays. The chamber setup has an uncertainty of 1-2 mm (corresponding to a dose-rate change of approximately 1%).
Parameter derivation from cell survival curves
In the MK model, the β value is assumed to be constant irrespective of radiation quality, i.e., βMK = β200kV = β in Eq. 5. However, when a linear-quadratic function with two free parameters (α, β) is used to fit the survival fractions, β takes different values for 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays because of experimental uncertainties in the survival fraction in the high-dose region. Consequently, the β value was obtained by the following method in this study. First, the experimental survival curves were fitted by linear-quadratic functions with two free parameters (α, β). Second, the βMK value was taken as the average of the three β values from the different photon beams. Third, the survival curves were re-fitted by linear-quadratic functions using the averaged β value (βMK) with α alone as a free parameter. Finally, the resulting α values were fitted by a linear function of the yD measured with the TEPC to determine the tissue-specific parameters rd and α0.
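A minimal numerical sketch of this two-stage fitting procedure is given below; the −ln(S) data are invented, the measured yD values are used only as illustrative inputs, and the scipy-based fitting calls are our choice, not the authors' software.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_ln_s(dose, a, b):
    """Linear-quadratic model for -ln(S)."""
    return a * dose + b * dose ** 2

doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
curves = {  # hypothetical -ln(S) values, one row per beam
    "200kV": np.array([0.00, 0.28, 0.63, 1.56, 2.80, 4.35]),
    "60Co":  np.array([0.00, 0.20, 0.48, 1.28, 2.40, 3.84]),
    "6MV":   np.array([0.00, 0.21, 0.49, 1.30, 2.43, 3.88]),
}

# Stage 1: free (alpha, beta) fits, then average beta over the three beams.
fits = {k: curve_fit(neg_ln_s, doses, v)[0] for k, v in curves.items()}
beta_mk = np.mean([fit[1] for fit in fits.values()])

# Stage 2: re-fit each curve with beta fixed at beta_mk, alpha free.
alphas = [curve_fit(lambda d, a: neg_ln_s(d, a, beta_mk), doses, v)[0][0]
          for v in curves.values()]

# Stage 3: linear regression of alpha on the measured yD values (keV/um).
y_d = np.array([4.51, 2.34, 2.36])
slope, intercept = np.polyfit(y_d, np.array(alphas), 1)
print(f"alpha = {slope:.4f} * yD + {intercept:.4f}, beta_MK = {beta_mk:.3f}")
```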
Microdosimetric distributions y-yd(y)
The energy deposited in a domain under irradiation was experimentally simulated with the TEPC and calculated using the code GEANT4. The energy deposit ε was converted into lineal energy y using Eq. 3. Furthermore, the lineal energy distributions y-f(y) were transformed into the standard representation of microdosimetric distributions, y-yd(y). Figure 2 shows the y-yd(y) distributions from the TEPC and the GEANT4 simulation. The y-yd(y) distribution for 200 kV X-rays was quite different from those of the mega-voltage photon beams, whereas the y-yd(y) distribution for 60Co γ-rays was similar to that for 6 MV X-rays. Table 1 shows the dose-mean lineal energies yD obtained from the lineal energy distributions using Eq. 4. The y-yd(y) distribution for 200 kV X-rays showed the greatest yD value: 4.51 and 4.41 keV/μm for the TEPC and the GEANT4 simulation, respectively. The yD value for 60Co γ-rays agreed with that for 6 MV X-rays in both the TEPC measurements and the GEANT4 simulation. Furthermore, the yD values for 200 kV X-rays and 60Co γ-rays were close to published data. 26,27)
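The conversion from a tabulated f(y) to the dose-mean lineal energy of Eq. 4 can be sketched numerically as below; the spectrum used is an invented log-normal stand-in, not a measured TEPC distribution.

```python
import numpy as np

y = np.logspace(-1, 2, 400)                 # lineal energy grid, keV/um
f_y = np.exp(-0.5 * ((np.log(y) - 0.5) / 0.8) ** 2) / y  # hypothetical f(y)
f_y /= np.trapz(f_y, y)                     # normalize the probability density

y_f = np.trapz(y * f_y, y)                  # frequency-mean lineal energy
d_y = y * f_y / y_f                         # dose distribution d(y)
y_d = np.trapz(y * d_y, y)                  # dose-mean lineal energy (Eq. 4)
print(f"yD = {y_d:.2f} keV/um")
```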
Cell irradiations
Figure 3 shows the experimental survival curves of the HSG tumor cells irradiated with 200 kV X-rays, 60Co γ-rays, and 6 MV X-rays. All curves are linear-quadratic functions fitted to the experimental survival fractions by the method of least squares. As shown in Fig. 3, 200 kV X-rays gave the lowest survival fraction among the three photon beams at the same irradiation dose. This result shows that the RBE depends on photon energy.
Determination of the rd and α0 values in the MK model
The second and third columns of Table 2 show the results of fitting the experimental survival fractions of the HSG tumor cells with a linear-quadratic function (α, β). The βMK value is assumed to be constant irrespective of radiation quality, and the average (0.036 Gy−2) of the three experimental β values from the different photon beams was used. The resulting α values are shown in the fourth column of Table 2.
Equation 2 was used to determine the tissue-specific parameters rd and α0. The resulting α values from the linear-quadratic fits with fixed βMK were fitted with a linear function of the yD obtained with the TEPC using the method of least squares (Fig. 4). The coefficient of determination R² is 0.936, reflecting the accuracy of the fit. The value of rd can be obtained from the slope of the linear function, as β/(ρπrd²) = 0.03384, i.e., rd = 0.23 ± 0.03 μm, where the domain density ρ was assumed to be 1.0 g/cm³. Furthermore, the α0 value was derived from the y-intercept of the linear function, i.e., α0 = 0.088 ± 0.023 Gy−1.
DISCUSSION
Table 1. The yD values obtained from the lineal energy distributions in the TEPC measurements and the GEANT4 simulations using Eq. 4.
Table 2. The fitting of the survival fractions of the HSG tumor cells with a linear-quadratic function with two free parameters (α, β) and with one free parameter (α alone).

However, the published data show different results, because the yD value strongly depends on measurement conditions such as the volume size and geometry of the detector. For instance, it was reported that the yD value for 200 kV X-rays measured by a walled spherical proportional counter with a simulated diameter of 0.97 μm was 4.2 keV/μm, whereas one with a simulated diameter of 2.06 μm gave 3.3 keV/μm. 26) In this study, with a simulated diameter of 1.0 μm, we obtained yD values of 4.51 ± 0.05 (TEPC) and 4.41 ± 0.03 keV/μm (GEANT4), which were close to the yD value for the 0.97 μm diameter. For 60Co γ-rays, we obtained yD values of 2.34 ± 0.03 (TEPC) and 2.24 ± 0.01 keV/μm (GEANT4) with a simulated diameter of 1.0 μm, which were close to the published value of 2.34 keV/μm for a simulated diameter of 0.95 μm. 27)

In the spectrum shape (Fig. 2), there is a marked difference between 200 kV X-rays and the mega-voltage photon beams, and little difference between 60Co γ-rays and 6 MV X-rays, in both the TEPC measurements and the GEANT4 simulation. These results can be understood by considering the stopping power of the recoil electrons produced by Compton scattering. Figure 5 shows the collision mass stopping power of electrons in water as a function of kinetic energy. 29) As can be seen in the figure, the stopping power of electrons increases drastically below approximately 200 keV. For example, the average photon energy of 200 kV X-rays is approximately 80 keV. In such a low-energy region, the energy deposited in a microscopic region becomes greater. Consequently, the probability of inducing lethal lesions in DNA is expected to be higher, which results in an increase in the RBE. On the other hand, there is little difference in either the spectrum shape or the yD values between 60Co γ-rays and 6 MV X-rays, because the collision stopping power is almost constant in the mega-voltage kinetic energy range. For 6 MV X-rays, the average photon energy at a depth of 10 cm in water is approximately 1.7 MeV.
In the determination of the α value in the MK model, the experimental survival fractions were fitted by a linear-quadratic function with one free parameter (α alone). However, the difference between fitting with one free parameter and with two should be evaluated. Figure 6 shows the comparison between the two methods for 200 kV X-rays. A small difference can be seen at high irradiation doses (> 8 Gy). However, this is not serious within the scope of our purpose, because a dose of 2 Gy per fraction is commonly used in conventional radiation therapy with photon beams. The coefficients of determination R² were 0.997 and 0.998 for one parameter and two, respectively. The results were similar for 6 MV X-rays and 60Co γ-rays. Thus, using the linear-quadratic function with the fixed βMK value is acceptable in terms of fitting accuracy.
In the final analysis, we obtained the tissue-specific parameters rd = 0.23 ± 0.03 μm, α0 = 0.088 ± 0.023 Gy−1, and β = 0.036 Gy−2. These values differ slightly from those in a previously reported paper. 30) In that paper, the tissue-specific parameters were rd = 0.42 μm, α0 = 0.13 Gy−1, and β = 0.05 Gy−2, derived from the experimental survival fractions of the same kind of cells irradiated with carbon beams. When those parameters are used, the MK model disagrees with our experimental RBE for photon beams. This shows that the MK model cannot reproduce the experimental survival fractions of the HSG tumor cells for both low-LET radiations such as photon beams and high-LET radiations such as carbon-ion beams with the same tissue-specific parameters. The reason is not clear.
In conclusion, since we obtained the tissue-specific parameters rd, α0, and β of the HSG tumor cells from the experimental survival fractions and the experimental yD values, the α value of the HSG tumor cells can be expressed as a linear function of yD: α = 0.0338 × yD + 0.0885. Finally, we obtained the RBE of the HSG tumor cells at the 10% survival level using Eq. 5, as

RBE = [√(α200kV² + 4βS*) − α200kV] / [√(α² + 4βS*) − α]     (7)

where α = 0.0338 × yD + 0.0885 and S* = −ln(0.1). At present, the yD values outside the field for 6 MV X-rays are being measured with the TEPC, and the increase in RBE outside the field is being investigated by using the above equation.

Fig. 5. Collision mass stopping power of electrons in water as a function of kinetic energy. 29)

Fig. 6. Comparison between the two methods of fitting the survival fraction of the HSG tumor cells for 200 kV X-rays.
Fig. 3. Survival curves of the HSG tumor cells for 200 kV X-rays (open circles), 60Co γ-rays (open triangles), and 6 MV X-rays (open squares). All curves are linear-quadratic functions with two free parameters (α, β) fitted by the method of least squares. Error bars show ± 1 SE of each measurement.
Fig. 4. The resulting α values in the linear-quadratic function with a fixed βMK were fitted by a linear function of the yD obtained with the TEPC. The rd and α0 were derived from the slope and the y-intercept of the linear function, respectively. | 2017-09-15T07:43:28.396Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "651d990f452c7f5d075b448fd40c21a759852e76",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/jrr/article-pdf/52/1/75/6177209/jrr-52-75.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "651d990f452c7f5d075b448fd40c21a759852e76",
"s2fieldsofstudy": [
"Physics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
14513149 | pes2o/s2orc | v3-fos-license | The Relationship between Population Structure and Aluminum Tolerance in Cultivated Sorghum
Background: Acid soils comprise up to 50% of the world's arable lands, and in these areas aluminum (Al) toxicity impairs root growth, strongly limiting crop yield. Food security is thereby compromised in many developing countries located in tropical and subtropical regions worldwide. In sorghum, SbMATE, an Al-activated citrate transporter, underlies the AltSB locus on chromosome 3 and confers Al tolerance via Al-activated root citrate release. Methodology: Population structure was studied in 254 sorghum accessions representative of the diversity present in cultivated sorghums. Al tolerance was assessed as the degree of root growth inhibition in nutrient solution containing Al. A genetic analysis based on markers flanking AltSB and on SbMATE expression was undertaken to assess a possible role for AltSB in Al tolerant accessions. In addition, the mode of gene action was estimated for the Al tolerance trait. Comparisons between models that include population structure were applied to assess the importance of each subpopulation to Al tolerance. Conclusion/Significance: Six subpopulations were revealed featuring specific racial and geographic origins. Al tolerance was found to be rather rare and present primarily in guinea and, to a lesser extent, in caudatum subpopulations. AltSB was found to play a role in Al tolerance in most of the Al tolerant accessions. A striking variation was observed in the mode of gene action for the Al tolerance trait, which ranged from almost complete recessivity to near complete dominance, with a higher frequency of partially recessive sources of Al tolerance. A possible interpretation of our results concerning the origin and evolution of Al tolerance in cultivated sorghum is discussed. This study demonstrates the importance of deeply exploring the crop diversity reservoir, both for a comprehensive view of the dynamics underlying the distribution and function of Al tolerance genes and to design efficient molecular breeding strategies aimed at enhancing Al tolerance.
Introduction
Aluminum (Al) tolerance has been deemed one of the main breeding targets in acid soil regions [1] and is of particular importance in sorghum in view of its primary role as a staple food and fodder crop in tropical and subtropical African countries [2]. At soil pH values below 5.0, rhizotoxic ionic forms of Al are solubilized into the soil solution, damaging sensitive root systems and reducing root growth [3], ultimately resulting in severe yield losses. Fortunately, genetic variation for Al tolerance can be exploited in breeding programs to improve sustainable production on acid soils. In sorghum, the AltSB locus, located on chromosome 3, was first identified as a major determinant of Al tolerance in the sorghum line SC283, explaining 80% of the phenotypic variation in a SC283-derived mapping population [4].
Root organic acid release into the rhizosphere, resulting in the formation of stable, non-toxic complexes with Al, has long been hypothesized as a major physiological mechanism of tolerance via root Al exclusion in plants [5]. More recently, genes encoding root malate and citrate transporters belonging to the ALMT and MATE families, respectively, have been cloned in wheat (ALMT1, [6]), sorghum (SbMATE, [7]) and barley (HvMATE, [8]). In sorghum, SbMATE has been shown to underlie the major Al tolerance locus, AltSB [7]. ALMT and/or MATE homologs with a likely role in Al tolerance have also been found in many other species, such as maize [9], Arabidopsis [10,11], wheat [12], rape [13] and rye [14,15]. Recent studies have also indicated the importance of other genes, acting both within and outside the organic acid release pathway, that influence the ability of plants to deal with Al toxicity. The C2H2-type zinc finger transcription factor, STOP1 [16], has been shown to regulate both AtMATE and AtALMT expression [11,16], and ART1, a rice homolog of Arabidopsis STOP1, was shown to regulate the expression of several genes with possible roles in rice Al tolerance [17], a response that was also observed for STOP1 in Arabidopsis [18]. Nevertheless, mechanisms of Al tolerance different from Al-induced organic acid release have been suggested to be mediated by other genes, such as those encoding ATP binding cassette (ABC) transporters with a possible involvement in Arabidopsis [19][20][21] as well as in rice Al tolerance [22]. While the specific role of ABC transporters in Al tolerance is yet to be elucidated, these proteins have been hypothesized to mediate Al redistribution from sensitive sites [19], Al sequestration into vacuoles [20] or cell wall modifications [22]. In addition, the rice Nramp metal transporter, Nrat1, has recently been proposed to mediate Al uptake as part of a tolerance mechanism based on sequestration of Al away from the cell wall into the symplasm of root cells [23].
Sorghum Moench is a heterogeneous genus divided into 5 subgenera, within which many species have been described, including both rhizomatous and annual types [24]. Accordingly, the subgenus Sorghum includes S. halepense and S. propinquum, two rhizomatous species, and S. bicolor, which comprises all annual taxa. Three subspecies have been recognized within S. bicolor, reflecting cultivated taxa, wild types and stabilized weedy derivatives. Sorghum bicolor subsp. bicolor contains all of the cultivated sorghums [25], which are distributed among 28 grain sorghum types previously classified as species according to the original Snowden system [24]. Cultivated grain sorghums were ultimately classified into five basic botanical races defined on the basis of panicle and spikelet morphological differences [26]. These races are bicolor, caudatum, durra, guinea and kafir, with an additional ten intermediates derived from intercrossing among members of the five main races. Sorghum domestication possibly occurred in the northeastern region of Africa at least 5000 years ago [27], giving rise to the early bicolor race [24,27]. The origin of the guinea race probably took place in tropical West Africa, resulting from selection for adaptation to a wet habitat [24,28]. From this region, the guinea race spread to Malawi and later to southern Africa along the mountains of eastern Africa [24], being subsequently transported to Asia [28]. Guinea sorghums account for more than 70% of the sorghum cultivated in West and Central Africa and may account for more than 50% of all sorghum produced in Africa [29]. The caudatum race probably originated in the area of original domestication of the species [24], from where it spread to West and South Africa. There is evidence suggesting a southern African origin for the kafir sorghums, and the origin of the durra race might have taken place in northeastern Africa or Asia [28].
Considering that Al tolerance is a rare event [1], efforts to broaden our still incipient understanding of the diversity of Al tolerance mechanisms in plants parallel the 'needle in a haystack' scenario in germplasm banks [30], where breeders are challenged with skimming through thousands of accessions in search of novel allelic variants at loci underlying desirable traits. To guide these efforts, better knowledge of the relationship between population structure and Al tolerance in cultivated sorghums is sorely needed.
Population substructure reflects the evolutionary history of a species [31] and can be understood as the presence of genetically differentiated subgroups in the original population [32]. A myriad of factors can lead to genetic divergence within a population including local adaptation, selection and genetic drift [33], and these factors may result in non-random distribution of important agronomic traits. In cultivated sorghum, genetic diversity patterns are influenced by both racial and geographical origins [34], resulting in well defined subgroups that can be studied for a possible relationship between Al tolerance and population structure.
In the present study, the cultivated sorghum collection described by Deu and colleagues [34] was combined with a sorghum panel that is representative of the lines currently used by the Embrapa acid soil breeding program [35]. The combined cultivated sorghum panel was then subjected to a population structure analysis. Our analysis strongly indicates that Al tolerance is a rare trait that is not randomly distributed considering the diversity patterns observed in cultivated sorghums. In addition, a wide range of diversity was observed for dominance behavior related to the Al tolerance trait, which ranged from almost complete recessivity to almost complete dominance. Finally, our population structure analysis allowed us to make inferences with regards to the origin and evolution of Al tolerance mutations in light of the domestication history leading to cultivated sorghums.
Al tolerance variation
At {27} μM Al3+ (Table S1), 80% of the sorghum accessions were sensitive to Al (RNRG5d < 30%), 14% were intermediately tolerant (30% < RNRG5d < 80%), and only 6%, or 16 sorghum accessions, showed RNRG5d > 80% and were thus classified as highly Al tolerant (only these lines are designated Al tolerant in this paper). Accounting for lines that are breeding derivatives of known Al tolerant sources (e.g., the sorghum line 9929034 is derived from SC566, and CMS226 and CMS227 are derived from SC283), only 5% of the whole panel was found to be highly tolerant to Al. Sorghum Al tolerance is inducible over time, increasing significantly after two to three days of Al exposure [7]. Here we used an Induction of Root Growth (IRG) index, generated by dividing the daily rate of root growth between the 3rd and 5th days of Al exposure by that between the 1st and 3rd days. Differences in the magnitude of the induction response were also observed across the panel, with most accessions showing root growth inhibition to varying degrees (IRG < 1) and only 26 accessions showing induction of root growth between days three and five of Al exposure compared to days one to three (IRG > 1). The induction response varied substantially among these 26 accessions, from nearly 1 (i.e., almost constant growth rates) to a 100% increase in the rate of root growth between days three and five in Al.
At {39} and {60} μM Al3+, the sorghum accession IS14351 did not group with any other accession and showed the highest relative net root growth, indicating that it is the most Al tolerant accession in the panel.
Principal Component Analysis identified the first and second principal components as responsible for 98.7% of the total Al tolerance variation (Figure 1 and Table S2). The first principal component (PC1), whose linear combination has positive eigenvector coefficients for all variables, can thus be interpreted as a general Al tolerance index, whereas the second principal component (PC2), explaining 12% of the variation, contrasts relative root growth assessed at 3 and 5 days of Al exposure with the induction response (Table S2). The majority of the sorghum accessions in the diversity panel showed low scores for both PC1 and PC2 (Figure 1), reflecting the high frequency of Al sensitive accessions in the panel. A significant spread in PC2 scores was observed with increasing tolerance (PC1 > 0), with maximum amplitude reached at PC1 near 2. Highly Al tolerant accessions (PC1 > 3.5) showed PC2 scores in general between approximately +1.5 (IS26554) and −1.7 (IS29691). The relative importance of the induction response to RNRG varied, being substantial for accessions such as IS26554, similar in IS26457/CMS225, and smaller in IS29691 (Figure 1). The highly Al tolerant accession IS14351 showed the lowest PC2 score, indicating a relatively lower importance of the induction response in this highly Al tolerant line.
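A minimal sketch of this kind of principal component analysis, run on an invented matrix of Al tolerance variables rather than the authors' dataset, is shown below.

```python
import numpy as np

rng = np.random.default_rng(2)
tol = rng.uniform(0, 1, 254)                       # latent tolerance level
data = np.column_stack([
    100 * tol + rng.normal(0, 5, 254),             # RNRG at 3 d (%)
    100 * tol + rng.normal(0, 5, 254),             # RNRG at 5 d (%)
    0.5 + tol + rng.normal(0, 0.2, 254),           # IRG index
])

# Standardize, then diagonalize the correlation matrix.
z = (data - data.mean(axis=0)) / data.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
order = np.argsort(eigvals)[::-1]                  # PC1 first
explained = eigvals[order] / eigvals.sum()
scores = z @ eigvecs[:, order]                     # PC scores per accession
print("variance explained:", np.round(explained, 3))
```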
Genetic and expression analysis of Al tolerance
We undertook linkage analysis between Al tolerance and markers flanking AltSB to assess the role of the Al tolerance locus in the donor accessions. Because 80% of the accessions in the panel were Al sensitive, linkage analysis focused primarily on populations derived from the Al tolerant accessions. Populations derived from two intermediate accessions, IS21849 and IS23645, were also included in this analysis. Linkage analysis revealed that Al tolerance in IS14351, IS21519, IS21849, IS23645 and IS26554 can be attributed to the AltSB locus, whereas significant marker-trait associations were not found for BC families derived from IS23142, IS26457 and IS29691 (Table S3). However, analysis of Al tolerance for parents and derived F1 hybrids indicated additive gene action (−0.3 ≤ d/a ≤ +0.3) for 4 Al tolerance donors, whereas Al tolerance in 11 out of the 17 sorghum accessions was either a recessive (d/a ≤ −0.7) or partially recessive trait (−0.7 < d/a < −0.3) (Figure 2 and Table S4). The sorghum accessions CMS225 and SC283 showed the highest degree of dominance, and strict complete dominance (d/a = 1) was never observed in this study. It should be noted that the power to detect genetic linkage in a backcross population decreases as gene action approaches complete recessivity, although this extreme situation was never observed in our dataset. Considering the rather recessive behavior of Al tolerance in IS23142 and IS26457, even with the lack of linkage with markers flanking AltSB, we cannot rule out the possibility that Al tolerance in these accessions is due to partially recessive AltSB alleles. However, this possibility seems less likely for IS29691 in view of its rather additive mode of gene action for Al tolerance.
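The degree-of-dominance statistic used above can be sketched as follows, where d/a is the deviation of the F1 from the midparent value scaled by half the parental difference; the RRG values in the example are invented for illustration.

```python
def degree_of_dominance(p_tol: float, p_sens: float, f1: float) -> float:
    """d/a ranges from -1 (complete recessivity) to +1 (complete dominance)."""
    midparent = (p_tol + p_sens) / 2.0
    a = abs(p_tol - p_sens) / 2.0          # additive effect
    return (f1 - midparent) / a

# Hypothetical RRG (%) values for a tolerant parent, BR007, and their F1.
print(degree_of_dominance(p_tol=90.0, p_sens=10.0, f1=25.0))  # -0.625: partially recessive
```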
We then studied the expression of SbMATE, which underlies AltSB, in 7 Al tolerant accessions, including IS14351, IS21519 and IS26554, whose Al tolerance was found to be due to AltSB according to our genetic analysis, and IS23142, IS29691 and IS26457, for which non-significant marker-trait associations were observed (Table S3). Two known sources of Al tolerance due to AltSB, SC283 and SC566, in addition to the Al sensitive standards, BR007 and BR012 (Table S1 and [35]), were also included as controls. All Al tolerant accessions except for IS29691 exhibited SbMATE expression levels significantly higher than those in the Al sensitive standards, BR012 and BR007 (Figure 3). Expression in the tolerant lines ranged from ~10- to ~80-fold higher than that observed in BR012. This is consistent with a strong role of AltSB in Al tolerance for these accessions despite the recessive mode of gene action observed in some sources. In agreement with our expectations, given its extremely low level of SbMATE expression, only IS29691 is likely to rely strongly on Al tolerance loci distinct from AltSB.
Distribution of Al tolerance with respect to racial classification
The five basic morphological races were represented in the diversity panel, with a larger and similar representation of the guinea and caudatum races (Figure 4A). Aluminum sensitive accessions tended to be randomly distributed across the major sorghum races, with a slightly higher frequency in caudatum sorghums (Figure 4B). Nonetheless, the racial distribution of intermediate and Al tolerant accessions was strikingly different. The vast majority of the intermediate accessions, 19, were members of the guinea race, with an additional 8 and 4 accessions belonging to the guinea margaritiferums and the caudatum race, respectively. The remaining intermediate accessions were evenly distributed at lower frequency among guinea-caudatums and bicolors or were uncharacterized for racial origin (Figure 4C). Seven of the sixteen Al tolerant accessions were guinea sorghums and two were caudatums, with one accession, IS23142, morphologically classified as a durra type. The six remaining Al tolerant accessions were breeding derivatives (Figure 4D). Because the sorghum panel is unbalanced with respect to racial representation, we undertook a Chi-square test of independence based on a 6 (number of accessions in each of the five basic sorghum races and guinea margaritiferum) × 2 (Al tolerant + intermediate vs. Al sensitive) contingency table. The results (χ² = 50.8, P[χ² > 50.8] = 1.3E−14) indicated that the distribution of Al tolerance cannot be explained solely by differences in racial representation in the diversity panel, thus departing significantly from a random pattern; a sketch of this test is given below.

Figure 3. Expression analysis of SbMATE. SbMATE relative expression was determined using quantitative real-time PCR with expression in the Al sensitive line, BR012, as a reference. 18S ribosomal RNA was used as an internal control. The first centimeter of root apices cut from roots of intact plants exposed to {27} μM Al3+ in nutrient solution at pH 4.0 for 5 days was harvested for total RNA isolation. Twenty-eight apices per experimental unit (genotype) were collected, and the bars indicate standard deviations based on 3 technical reps. doi:10.1371/journal.pone.0020830.g003

Sorghum accessions in the diversity panel were then represented geographically on a soil map of Africa depicting the distribution of Al saturation classes (Figure 5). Except for two accessions from Sudan and Chad, intermediate accessions were more frequent in West and East Africa. The distribution of Al tolerant accessions coincided in general with that of the intermediate accessions, but the former were geographically more tightly clustered in West Africa compared to a broader distribution in East Africa, across Ethiopia and Tanzania, and in South/East Africa, in Malawi and Zimbabwe. At the level of resolution of the soil map, the Al tolerant accessions from South/East Africa appear to originate in areas particularly prone to Al toxicity, with soil Al saturation above 25%.
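The Chi-square test of independence described above can be reproduced along the following lines; the contingency counts shown are hypothetical placeholders with roughly the panel's shape, not the actual per-race tallies.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: bicolor, caudatum, durra, guinea, guinea margaritiferum, kafir.
# Columns: (Al tolerant + intermediate), Al sensitive. Counts are invented.
counts = np.array([
    [ 2, 20],
    [ 6, 70],
    [ 1, 40],
    [26, 50],
    [ 8, 10],
    [ 0, 21],
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2e}")
```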
Analysis of population structure based on SSR markers
A total of 501 alleles were revealed by the 38 SSR loci genotyped in the 254 sorghum accessions. Of these, 399 showed minor allele frequencies under 10%. The average number of alleles per locus was 13.2, ranging from 2 for the marker locus Xtxp136 to 29 for Xgap206. The polymorphism information content (PIC) value over the 38 SSR markers averaged 0.65, ranging from 0.19 for marker mSbCIR246 to 0.93 for markers Xgap206 and Xtxp321 (Table S5).
Upon population structure analysis, the Ln(k) vs. k curve showed a steep increase in model likelihood up to k = 4, although additional but apparently slighter increments occurred between 5 and 12 subpopulations (Figure S1). Using the ΔK criterion, the most evident level of differentiation was observed at k = 4, but additional, much less evident peaks were also detected at k = 6 and k = 12 (Figure S2). Nevertheless, the largest proportions of individuals assigned to a specific cluster with a cluster membership probability higher than 0.8 were obtained with k = 4 and k = 6, at 81 and 71%, respectively, contrasting with approximately 60% for k = 12. However, particularly in sorghum, where hybridization between sorghum races is a common event, cluster membership should not be adopted as the sole criterion to define the most likely number of subpopulations. Thus, we subsequently analyzed in detail the nature of the clusters obtained by setting k at 4 and 6 subpopulations. The corresponding clusters for k = 4 (Figure S3A) were composed of guinea accessions from western Africa and guinea margaritiferum (k4Q1); durra accessions from central-eastern Africa and from Asia, and bicolor and caudatum accessions from Asia (k4Q2); caudatum accessions from Africa, a group of transplanted caudatum and durra accessions from the Lake Chad region, and lines from the Embrapa collection and the US (k4Q3); and kafir and guinea accessions from southern Africa (k4Q4). At k = 6, the former k4Q3 group was separated into k6Q2, which included the caudatum accessions from Africa and the group of transplanted caudatum and durra accessions from the Lake Chad region, and k6Q3, with the lines from the Embrapa collection and the US. In addition, the k4Q4 group was separated into k6Q4, which included the kafir accessions from southern Africa, and k6Q6, which included the guinea accessions from southern Africa (Figure S3B, Table S6). Based on these results, we believe that six subpopulations provide a meaningful representation of the genetic diversity patterns underlying this panel, which led us to define k = 6 as the starting point for examining the distribution of Al tolerance in sorghum.
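For reference, the ΔK criterion used above can be computed from the STRUCTURE log-likelihoods as sketched below, where ΔK(k) = |L(k+1) − 2L(k) + L(k−1)| / sd[L(k)]; the likelihood table is invented for illustration.

```python
import numpy as np

ks = np.arange(1, 9)
# mean_lnp[i], sd_lnp[i]: mean and sd of Ln P(D) over replicate runs at ks[i].
mean_lnp = np.array([-9000, -8500, -8100, -7800, -7750, -7690, -7660, -7640], float)
sd_lnp = np.array([5, 8, 10, 12, 30, 25, 28, 30], float)

# The second difference L''(k) is defined for interior k values only.
second_diff = mean_lnp[2:] - 2 * mean_lnp[1:-1] + mean_lnp[:-2]
delta_k = np.abs(second_diff) / sd_lnp[1:-1]
for k, dk in zip(ks[1:-1], delta_k):
    print(f"k = {k}: Delta-K = {dk:.1f}")
```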
Population structure and Al tolerance in sorghum
The distribution of Al tolerance within each of the six subpopulations defined with STRUCTURE (Figure 6A) is shown in Figure 6B. The distributions were in general asymmetric and skewed towards Al sensitivity (RNRG5d < 50%). Intermediate accessions were predominantly clustered in Q1 (guinea accessions from western Africa and guinea margaritiferum accessions), Q3 (lines from the Embrapa collection and the US) and Q6 (guinea accessions from southern Africa and Asia), resulting in greater interquartile ranges for Q1, Q3 and Q6 than for the other subpopulations. Al sensitive accessions were mainly clustered in Q2 (caudatum accessions from Africa and the group of transplanted caudatum and durra accessions from the Lake Chad region), Q4 (kafir accessions from southern Africa) and Q5 (durra, bicolor and caudatum accessions from eastern Africa and Asia). Al tolerant accessions appear as outliers in Figure 6B and were again predominantly present in Q1, Q3 and Q6, but were also present in Q2 (caudatum types).
Due to the different population sizes and unequal variances within subpopulations for the Al tolerance traits, the Kruskal-Wallis test was applied as suggested by Lin and collaborators [36], confirming that there are differences among subpopulations for all traits related to Al tolerance. The non-parametric LSD test indicated that subpopulations Q1, Q3 and Q6 were in general superior in terms of Al tolerance traits (Table 1).
Finally, we undertook a series of model selection steps in an attempt to isolate more formally the individual contributions of the different subpopulations to Al tolerance. Our rationale is that removing from the model a subpopulation that captures a significant proportion of the Al tolerance variation should result in a decrease in model likelihood. The complete model showed the lowest Bayesian Information Criterion (BIC) value, indicating that all subpopulations are important to explain the observed variation in Al tolerance (Figure 7). However, excluding subpopulations Q1, Q3 and Q6 resulted in a stronger reduction in model performance than excluding the remaining subpopulations. Moreover, based on the increments in the BIC estimates, the guineas from southern Africa and Asia (Q6) appear to be the most important in capturing Al tolerance variation, followed by the guinea accessions from western Africa and guinea margaritiferum (Q1) and the lines from the Embrapa collection and the US (Q3). The contributions of the remaining subpopulations, Q2 and Q4, although significant, were lower than those of Q6, Q1 and Q3. This is expected, considering the lower representation of intermediate accessions in Q2 and Q4, whose removal caused nearly equal increases in the BIC estimates. We also used the PROC STEPWISE procedure implemented in SAS with the MAXR option to obtain the proportion of the variance explained by population structure alone, which was approximately 16%. This indicates that although the incorporation of population structure covariates is important to control for false positives in association analysis for Al tolerance, a substantial fraction of the phenotypic variance should still persist and can potentially be assigned to Al tolerance QTL.
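A minimal sketch of this BIC-based model comparison, using an invented membership matrix and phenotype rather than the actual Q-matrix and RNRG data, is given below; the Gaussian BIC formula n·ln(RSS/n) + p·ln(n) stands in for the authors' SAS-based computation.

```python
import numpy as np

def ols_bic(y: np.ndarray, X: np.ndarray) -> float:
    """Gaussian BIC for an ordinary least-squares fit: n*ln(RSS/n) + p*ln(n)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, p = X.shape
    return n * np.log(rss / n) + p * np.log(n)

rng = np.random.default_rng(1)
n = 254
Q = rng.dirichlet(np.ones(6), size=n)           # stand-in membership matrix
rnrg = 20 + 40 * Q[:, 0] + 35 * Q[:, 5] + rng.normal(0, 10, n)  # toy phenotype

# Rows of Q sum to 1, so drop one column to keep the design non-singular.
full = np.column_stack([np.ones(n), Q[:, :-1]])
bic_full = ols_bic(rnrg, full)
for i in range(5):
    reduced = np.delete(full, i + 1, axis=1)    # model without subpopulation i+1
    print(f"drop Q{i + 1}: BIC increase = {ols_bic(rnrg, reduced) - bic_full:+.1f}")
```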
Discussion
The sorghum panel used in this study was assembled to represent the genetic diversity present in cultivated sorghums [34], thus allowing us to look in detail into a possible relationship between population structure and Al tolerance with a focus on sorghum production on acid soils. A similar low frequency for Al tolerance was also observed by Reddy and collaborators [37] based on field screening and is consistent with Al tolerance being a derived state [1], with a possible relatively recent origin of Al tolerance mutations in sorghum.
Our previous [35] and present genetic analyses indicate that 11 out of the 16 highly Al tolerant accessions rely on the AltSB locus to express their tolerance. Because of the rather recessive nature of some AltSB alleles, which could lead to false negatives in the genetic analysis, we also studied expression of SbMATE, which underlies AltSB, in a subset including 4 of the 5 remaining Al tolerant accessions in addition to other Al tolerant accessions for which our genetic analysis confirmed a role for AltSB in Al tolerance control. All Al tolerant accessions except for one showed expression levels significantly higher than those in the Al sensitive standards, BR007 and BR012. Considering that Al tolerance and SbMATE expression are highly correlated [7], our data support a role for AltSB in providing tolerance to the vast majority of the Al tolerant accessions in the diversity panel. In addition, tolerant lines were found to vary in SbMATE expression levels, suggesting contrasting allelic effects. This expands on our previous findings indicating substantial diversity for Al tolerance control at the AltSB locus, which in a small, 12-member panel was largely due to an allelic series at AltSB encoding highly variable Al tolerance phenotypes [35].
Population structure analysis revealed that Al tolerance is by no means randomly distributed across the diversity continuum but instead is rather specific to certain genetically differentiated subgroups featuring specific racial and geographical origins. In particular, the guinea subpopulations Q1 and Q6 are important repositories of Al tolerance in sorghum. Although the caudatum subpopulation, Q2, appears to be relatively less important than Q1, Q3 and Q6 in explaining the variation for Al tolerance in the panel according to our Q+K model, this subpopulation included eight lines with RNRG values between 40 and 100. This indicates that sorghum subpopulations containing caudatum types may also be useful for the identification of Al tolerance donors. Interestingly, the only non-guinea/caudatum accession found to be Al tolerant, IS23142, morphologically classified as a durra type, showed high membership coefficients to the guinea subpopulations Q1 and Q6, suggesting a guinea-durra transfer of Al tolerance. The high level of tolerance observed within Q3, a subpopulation with a predominance of lines from the Embrapa collection and the US, reflects the fact that some of those lines have been purposely selected for breeding Al tolerant sorghums for the Brazilian acid soils, also reflecting the presence of Al tolerant breeding derivatives [35].

Figure 5. Geographical and racial distribution of the sorghum accessions and Al toxicity in Africa. Accessions were plotted on the map based on the latitude and longitude coordinates found in http://www.icrisat.org/sorghum/Project1/pfirst.asp when available. Accessions lacking those coordinates were plotted randomly within the known country of origin. Outer circles indicate the classes of Al tolerance whereas inner circles indicate racial classification [26]. Racial classifications can be found in [34] and http://www.ars-grin.gov. The soil data set is based on the Fertility Capability Classification (FCC) [60].
Correlation between population structure and variation for a phenotypic trait has been reported and may result from adaptation and/or genetic drift [38,39]. In maize, a deletion allele of the D8idp gene, which is associated with flowering time, was found at high frequency among Northern Flint material and at low frequency among tropical material, likely resulting from diversifying selection for flowering time [39]. Interestingly, the guinea race is the main race of sorghum cultivated in West Africa due to its adaptation to a range of stresses commonly found there, including poor soil fertility and low soil pH [40]. This suggests that the strong relationship between population structure and Al tolerance in sorghum is not solely caused by genetic drift and may be the result of local adaptation to acid, Al toxic soils. Considering that those soils can be distributed in rather localized regions, thus escaping the resolution level of our soil map, a more detailed soil characterization in West Africa with regard to Al toxicity is needed to gain additional insights into this hypothesis. The local adaptation hypothesis is reinforced by the fact that Al toxicity has indeed been documented to impair sorghum production in West Africa [41,42].
The fact that the vast majority of the Al tolerant accessions in the diversity panel were either guinea types or were genetically closely related to guinea sorghums from West and South/East Africa leads us to speculate that Al tolerance mutations originated after the initial migration from the original area of sorghum domestication between Sudan and Ethiopia [2,27], arising in West Africa after the guinea race differentiated from the primordial bicolor types. Supporting this hypothesis is the presence of Al tolerant accessions in the Malawi region, thought to be a secondary center where guinea sorghums occur [24,26,43]. Interestingly, one of the most Al tolerant accessions in the diversity panel, SC566, a caudatum type from Nigeria, clustered with guinea sorghums, reinforcing the possibility of a single racial origin with subsequent interracial spread of Al tolerance genes in sorghum. In fact, the guinea race is known to be sympatric with all four of the other basic races of sorghum, and interracial hybrids among them are occasionally observed, being commonly encountered in drier areas from Nigeria to Uganda [28].
The accessions in the diversity panel showed strikingly different modes of gene action for Al tolerance, with the vast majority showing recessive gene action to varying degrees. Interestingly, the dominance level of newly adaptive genes for insecticide resistance has been shown to be extremely plastic, varying from almost recessive to almost complete dominance [44]. Considering the plausible possibility that the rather new Al tolerance mutations were originally recessive in nature, we are then presented with the question of how dominance arose for Al tolerance genes. Although it has been the subject of great debate (reviewed by Bourget [45]), the hypothesis of dominance arising from evolutionary change has been proposed [46]. More recently, the degree of dominance for QTL controlling differences in plant and inflorescence architecture between maize and teosinte was found to be greater in the maize background [47]. This observation led the authors to hypothesize that changes in gene action could possibly result from selection during the domestication process for modifier loci that enhance the expression of the trait in the heterozygote. The strong correlation between Al tolerance and SbMATE expression and the highly monomorphic nature of the SbMATE coding region suggest an important role for regulatory polymorphisms in Al tolerance controlled by the AltSB locus [7]. Along those lines, the fact that MATE genes have been found to be modulated by transcription factors such as STOP1 [11] leads us to raise the hypothesis that dominance in the case of Al tolerance is an acquired state, with a possible origin at modifier loci interacting with Al tolerance genes. One possible precedent for this is the acetylcholinesterase gene conferring insecticide resistance, which showed extremely plastic dominance behavior [48]. Clearly, in the case of Al tolerance, a more specific study on background effects modulating the expression of Al tolerance genes is needed to gain further insights into this hypothesis. In addition, our experimental design allowed us to assess the degree of dominance for the Al tolerance trait as a whole, whereas the dominance behavior of AltSB was not individualized. Although our data strongly suggest a pivotal role for AltSB in conferring Al tolerance, evidence for other Al tolerance genes has been found both here for IS29691 and in our previous studies [35]. The strong relationship observed in the present study between Al tolerance and population structure, together with the significant plasticity in dominance behavior, indicates that the dynamics involving the distribution and function of major Al tolerance genes are much more complex than initially suggested by the simple inheritance outcomes of the pioneering genetic studies with a few parental genotypes. A more comprehensive and detailed view of plant Al tolerance enabling powerful molecular breeding strategies will require a detailed understanding of the evolutionary history leading to Al tolerance loci in each species.
Genetic stocks
Two-hundred and nine accessions from the landrace collection described in [34] and forty-five inbred lines that are frequently used in breeding programs in the US and Brazil formed a combined panel that was used in this study.
Seventeen F1 hybrids were generated by crossing different accessions, which ranged from intermediate to high Al tolerance, to the Al sensitive line BR007, to investigate the mode of gene action for Al tolerance. For a genetic analysis of Al tolerance based on the Alt SB locus, 8 F1 hybrids derived from 2 moderately and 6 highly Al tolerant accessions were backcrossed to BR007 to generate backcross one F1 (BC1F1) populations.
Assessment of Al tolerance in nutrient solution
Analysis of Al tolerance was conducted in nutrient solutions containing either 0 or 148 μM Al, which correspond to free Al3+ activities of {0} and {27} μM Al3+ (values inside brackets indicate Al3+ activity estimated with the speciation software GEOCHEM-PC [49]). A subset of 27 accessions, including all Al tolerant accessions determined at {27} μM Al3+, were re-screened at 0, 222 and 360 μM Al, which correspond to free Al3+ activities of {0}, {39} and {60} μM Al3+. The highly Al tolerant line, SC566, and the Al sensitive line, BR007, which had been previously identified as such by [35], were included as controls. The experiments consisted of a completely randomized design with two replications and seven plants per replication. Hydroponic analyses of Al tolerance were undertaken as described in [35]. Briefly, seeds of each genotype were germinated for four days and seedlings were transferred to containers with nutrient solution lacking Al (pH 4.0) placed in a growth chamber with 27°C day and 20°C night temperatures, a light intensity of 330 μmol photons m−2 s−1 and a 12-h photoperiod. After 24 h of acclimation, the initial length of each seedling's root growing in control solution (ilc) was measured.
For the genetic analysis of Al tolerance control at Alt SB, 8 BC1F1 families were phenotyped for Al tolerance at {27} μM Al3+. An independent control lacking Al cannot be employed in families segregating for Al tolerance due to the genetically dissimilar nature of individual plants. Thus, Al tolerance was assessed on an individual plant basis as described in detail in [35], by estimating the degree of root growth inhibition caused by Al over a five-day exposure period relative to the control root growth. Relative root growth (%) was calculated as RRG = [(flAl − flc)5d / ((flc − ilc)1d × 5)] × 100, where flAl is the final root length after Al exposure, flc is the root length at the end of the control growth period, and the subscripts indicate the exposure period (d) measured in days.
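For concreteness, the RRG calculation can be expressed in a few lines of code; this is a minimal sketch of the formula above, and the function name and example measurements are illustrative only.

```python
def relative_root_growth(fl_al, fl_c, il_c, al_days=5):
    """Relative root growth (%), RRG = [(flAl - flc)_5d / ((flc - ilc)_1d * 5)] * 100.

    fl_al: final root length after `al_days` of Al exposure (cm)
    fl_c : root length at the end of the 1-day control growth period (cm)
    il_c : initial root length measured after acclimation (cm)
    """
    net_growth_in_al = fl_al - fl_c              # growth during Al exposure
    projected_control = (fl_c - il_c) * al_days  # 1-day control growth scaled to al_days
    return 100.0 * net_growth_in_al / projected_control

# Hypothetical plant: 2.1 cm of growth in Al vs 1.2 cm/day in control solution
print(relative_root_growth(fl_al=7.3, fl_c=5.2, il_c=4.0))  # ~35.0
```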
To investigate the mode of gene action for Al tolerance, 17 F1 hybrids having BR007 as the common Al sensitive parent were evaluated at {27} μM Al3+, including the parents of each cross as controls, and RRG was estimated. The experiments consisted of completely randomized designs with at least 7 plants per genotype.
The sorghum accessions were also inspected for root damage after five days of Al exposure and a Visual Root Damage (VRD) scale ranging from 1 (root apices heavily damaged) to 5 (root apices undamaged) was applied. Three independent evaluations were carried out for estimating VRD means.
Al tolerance in sorghum has been reported to be inducible over time, significantly increasing after two to three days of Al exposure [7]. In the current study, an Induction of Root Growth (IRG) index was estimated by dividing the daily rate of root growth calculated between the 3rd and 5th days of Al exposure by that obtained between the 1st and the 3rd days. IRG values less than one indicate that the rates of root growth recorded between days 3 and 5 of Al exposure were smaller than those between days 1 and 3, and values equal to one indicate constant root growth rates, whereas induction of root growth results in IRG > 1, reflecting higher rates of root growth between days 3 and 5 relative to those between days 1 and 3 of Al exposure.
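The IRG index admits an equally short sketch (the root lengths, in cm, measured on days 1, 3 and 5 of Al exposure are hypothetical):

```python
def induction_of_root_growth(len_d1, len_d3, len_d5):
    """IRG = (daily root growth rate, days 3-5) / (daily root growth rate, days 1-3)."""
    rate_d3_to_d5 = (len_d5 - len_d3) / 2.0
    rate_d1_to_d3 = (len_d3 - len_d1) / 2.0
    return rate_d3_to_d5 / rate_d1_to_d3

# IRG > 1 reflects induction of root growth after the first days of Al exposure
print(induction_of_root_growth(len_d1=4.0, len_d3=4.6, len_d5=5.8))  # -> 2.0
```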
Analysis of SbMATE expression via Quantitative Real-Time Reverse Transcription (RT) PCR
Sorghum seedlings were grown following the same methods used for assessment of Al tolerance in nutrient solution containing {27} μM Al3+, in a growth chamber under controlled environmental conditions. Each experimental unit (genotype) consisted of the first centimeter of root apices collected from 28 intact plants, 5 days after Al treatment imposition. These 28 plants per genotype were divided into 4 sets (7 plants per set) and each set was randomized inside the growth chamber.
Total RNA was extracted from tissue samples using the RNeasy Plant Mini Kit (Qiagen, Valencia, CA), and 10 U of DNase I (RNase free) from the same manufacturer were added to each sample, followed by incubation at room temperature for 15 min. First-strand cDNA was synthesized from 2 μg of total RNA with the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA).
SbMATE transcripts were quantified using the TaqMan Gene Expression kit on the ABI Prism 7500 Real Time PCR System (Applied Biosystems, Foster City, CA). A series of cDNA dilutions was used to make standard curves both for SbMATE transcripts and for 18S RNA, which was used as the internal reference. The selected dilutions of the cDNA samples (10 ng for SbMATE transcripts and 0.01 ng for 18S RNA) were then used as real-time PCR templates to quantify relative transcript levels following the conditions recommended by the manufacturer. The forward (F) and reverse (R) primers, as well as the probe sequence, are F: 5′-CAGCCATTGCCCATGTTCTTT-3′, R: 5′-ACCAGCTTGCTCAGCATTATCA-3′ and Probe: 6FAM-CCCAGTACCTGATAACGC-TAMRA.
Levels of expression for endogenous 18S RNA were determined using TaqMan Ribosomal RNA Control Reagents (Applied Biosystems, Foster City, CA). Distilled water or products of room temperature reactions without reverse transcriptase were used as negative controls. The levels of the SbMATE transcripts were normalized to endogenous 18S RNA, and SbMATE expression relative to that in the Al sensitive accession, BR012, was calculated. Three technical replicates were used, and the experiment was repeated twice with similar results.
DNA isolation and PCR amplification
Leaf tissue from three plants of each accession, and leaves from individual seedlings for the segregating families, were used for DNA isolation according to Saghai-Maroof and colleagues [50]. The markers CTG29 (CTG29F: HEX-ATGCAGTATCTGCAGTATCATTT; CTG29R: AATCCGTCAGGTCAGCAATC), S17 (S17F: GGCTGCCCGTCCCTTTCTCTGTCT; S17R: CCGGGGCGCTGGGCTTCCTT) and S73 (S73F: AAGCGCTGGCCCAAATGAAATGA; S73R: GAGCCAACACGGGGAGAACAAGTC) were used to determine whether Al tolerance in the tolerant sources was due to allelic variation at Alt SB. CTG29 is a sequence tagged site (STS) marker that is linked to Alt SB at 0.2 cM (estimate obtained in 2085 F2 individuals derived from a cross between SC283 and BR007, [7]). Upon positional cloning of Alt SB [7], another two STS markers, S17 and S73, were developed from sequences in the same bacterial artificial chromosome (BAC) that harbors Alt SB, with S17 located 32.1 kb from Alt SB on the same side as CTG29, whereas S73 is located 22.4 kb from Alt SB on the opposite side. Due to the tight physical and genetic linkage of these marker loci to Alt SB, the odds of a double recombination event in BC families of the size used in this study are extremely low, making these markers diagnostic for Alt SB.
PCR reactions with CTG29 were performed as described in Caniato et al. [35]. For S17 and S73, amplifications were carried out in a reaction volume of 20 μL containing 30 ng of genomic DNA, 10× polymerase chain reaction buffer, 0.5 mM dNTPs, 3 mM MgCl2, 4 pmol of each primer, 5% dimethyl sulfoxide (DMSO) and 0.5 U of Taq DNA polymerase (Phoneutria, Belo Horizonte, MG). Amplification proceeded with an initial denaturation step of 95°C for 1 min, followed by 30 cycles at 94°C for 1 min, 62°C for 1 min and 72°C for 1 min, and a final extension step at 72°C for 10 min. Electrophoresis was carried out in 1% (w/v) agarose gel at 100 V in 1× TAE buffer, revealing scorable polymorphisms between the parental lines.
Thirty-eight SSR markers from a sorghum SSR kit (http://sat.cirad.fr/sat/sorghum_SSR_kit/) developed within the Generation Challenge Programme (GCP), which are evenly distributed across the sorghum genome, were used for genetic diversity and population structure analyses. The fragment sizes obtained for the Deu et al. [34] collection were provided by the GCP, and the 45 lines from the Embrapa collection were genotyped with the same SSR markers. Because differences in allele sizes for the same alleles are expected between labs, a set of 10 highly diverse sorghum lines with a wide range of allelic variation (http://sat.cirad.fr/sat/sorghum_SSR_kit/data/control_comp.html) was used as a control. DNA from these 10 lines was used for PCR amplification with the same SSR markers along with the 45 lines from the Embrapa collection, to normalize differences in fragment sizes based on what was obtained under the conditions employed by GCP and Embrapa. PCR reactions were carried out as described for STS amplifications, but without DMSO and using 2.5 pmol of each SSR primer. Amplification proceeded with a touchdown protocol including an initial denaturation step at 94°C for 4 min; 9 cycles at 94°C for 45 s, 60°C for 1 min with a reduction rate of 0.5°C per additional cycle, and 72°C for 1 min 15 s; 24 cycles at 94°C for 45 s, 55°C for 1 min and 72°C for 1 min 15 s; and a final extension step of 5 min at 72°C. Three microliters of 200-fold diluted amplification products and 6.9 μL of Hi-Di formamide (Applied Biosystems, Foster City, CA) were mixed with 0.1 μL of the GS500 ROX (Applied Biosystems, Foster City, CA) internal size standard and denatured at 95°C for 5 min. The fragments were assayed on an ABI 3100 sequencer (Applied Biosystems, Foster City, CA). Fragment sizes were determined based on migration relative to the internal size standard using the GeneMapper 3.5 software. Allele sizes obtained for each control line were compared to the expected allele sizes posted at http://sat.cirad.fr/sat/sorghum_SSR_kit, and a correction factor for each marker was imposed to normalize allele sizes between the GCP and Embrapa datasets so that the two panels could be genetically merged.
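The between-lab allele-size normalization lends itself to a simple sketch; the data layout, function names and the use of a median offset below are assumptions for illustration, not the exact procedure used.

```python
import pandas as pd

def correction_factors(control_obs: pd.DataFrame, control_ref: pd.DataFrame) -> pd.Series:
    """Per-marker offset between locally observed and reference allele sizes.

    Both frames are indexed by control line, with one column per SSR marker,
    holding fragment sizes in base pairs; the median offset across the 10
    control lines serves as the marker-specific correction factor.
    """
    return (control_ref - control_obs).median(axis=0)

def apply_correction(panel: pd.DataFrame, factors: pd.Series) -> pd.DataFrame:
    """Shift every marker in a genotyping panel by its correction factor."""
    return panel.add(factors, axis=1)

# Hypothetical usage, with panels exported from GeneMapper:
# factors = correction_factors(embrapa_controls, gcp_reference)
# merged = pd.concat([gcp_panel, apply_correction(embrapa_panel, factors)])
```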
Statistical analysis of Al tolerance data
One-way analyses of variance for RNRG 3d, RNRG 5d, VRD and IRG data at each Al activity, followed by the Scott-Knott test [51], were initially undertaken to cluster the accessions into homogeneous groups for the response variables. RNRG 3d, RNRG 5d and IRG data obtained at {27} μM Al3+ were also subjected to Principal Component Analysis (PCA, [52]) based on standardized variables.
Genetic analysis of Al tolerance control at the Alt SB locus
Simple interval mapping was undertaken when two markers flanking the Alt SB locus were available whereas single marker analysis was applied in the remaining cases. Significant associations with Al tolerance were declared at a logarithm-of-odds (LOD) equal to or higher than three.
Gene action estimates for Al tolerance
The degree of dominance for the Al tolerance trait was estimated as the ratio between dominance (d) and additive (a) effects, d/a, where d = Tt − [(TT + tt)/2] and a = (TT − tt)/2. TT denotes the RRG mean for the tolerant parent, tt is the RRG mean for the sensitive parent, BR007, which was common to all crosses, and Tt denotes the RRG mean for each F1 hybrid. Therefore, a d/a value of −1 indicates that the phenotypic mean of the F1 hybrid (Tt) equals that of the homozygous sensitive (tt) parent, d/a = +1 means that the F1 hybrid is as tolerant as the homozygous tolerant (TT) parent, and d/a = 0 indicates that Al tolerance in the F1 hybrid equals the average of the RRG means estimated for the two parents.
In the present study we adopted the following convention for assigning modes of gene action related to Al tolerance: recessive (d/a ≤ −0.7), partially recessive (−0.7 < d/a ≤ −0.3), additive (−0.3 < d/a < +0.3), partially dominant (+0.3 ≤ d/a < +0.7) and dominant (d/a ≥ +0.7).
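The d/a estimate and the classification convention combine into a short sketch (the cutoffs for the intermediate classes follow the convention as reconstructed above, and the example cross is hypothetical):

```python
def degree_of_dominance(rrg_tolerant, rrg_sensitive, rrg_f1):
    """d/a ratio from the RRG means of the two parents and their F1 hybrid."""
    d = rrg_f1 - (rrg_tolerant + rrg_sensitive) / 2.0
    a = (rrg_tolerant - rrg_sensitive) / 2.0
    return d / a

def gene_action(da):
    if da <= -0.7:
        return "recessive"
    if da <= -0.3:
        return "partially recessive"
    if da < 0.3:
        return "additive"
    if da < 0.7:
        return "partially dominant"
    return "dominant"

# Hypothetical cross: tolerant parent RRG 80%, BR007 RRG 20%, F1 RRG 35%
da = degree_of_dominance(80.0, 20.0, 35.0)  # -> -0.5
print(gene_action(da))                       # -> partially recessive
```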
Genetic diversity analysis
Total and per locus numbers of alleles and the Polymorphism Information Content, PIC = 1 − Σ pi² (where pi² is the squared frequency of the ith allele), were calculated with PowerMarker version 3.25 [53].
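In code, the simplified PIC given above (equivalent to expected heterozygosity; PowerMarker also implements the fuller Botstein formula) is:

```python
from collections import Counter

def pic(alleles):
    """Polymorphism Information Content for one locus, PIC = 1 - sum(p_i^2)."""
    counts = Counter(alleles)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(pic(["A1", "A1", "A2", "A3"]))  # 1 - (0.5**2 + 0.25**2 + 0.25**2) = 0.625
```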
Analysis of population structure
A Bayesian cluster analysis as implemented in the software STRUCTURE [54,55] was used to estimate the number of subpopulations (k) based on the SSR data set. The admixture model with correlated allele frequencies was adopted, with a burn-in length of 100,000 and a run length of 1,000,000, with five independent runs for each k set to range from 1 to 13. It has been reported that in some instances the log probability of the data may not provide a correct estimate of the number of clusters [56]. Thus, we also calculated Δk, the second-order rate of change of the log probability of the data, Ln(k), divided by its standard deviation [56], and the change in Δk between successive k values was adopted as an auxiliary criterion to identify the most likely number of subpopulations.
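The Δk criterion of Evanno et al. [56] reduces to a second difference of the run-averaged Ln(k) values; a minimal sketch, assuming the run results are arranged as a (number of k values) × (number of runs) array:

```python
import numpy as np

def evanno_delta_k(lnk_runs):
    """Delta-k = |L(k+1) - 2 L(k) + L(k-1)| / sd(k), returned for k = 2..k_max-1.

    lnk_runs: array of shape (k_max, n_runs) holding Ln P(D) for k = 1..k_max,
    one column per independent STRUCTURE run.
    """
    lnk_runs = np.asarray(lnk_runs, dtype=float)
    mean = lnk_runs.mean(axis=1)
    sd = lnk_runs.std(axis=1, ddof=1)
    second_diff = np.abs(mean[2:] - 2.0 * mean[1:-1] + mean[:-2])
    return second_diff / sd[1:-1]
```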
Analysis of Al tolerance with respect to subpopulations defined by STRUCTURE
The non-parametric Kruskal-Wallis test was initially used to test whether the defined subpopulations differed for the Al tolerance response variables assessed at {27} μM Al3+. Statistical significance for all pairwise differences among subpopulations for each variable was estimated by calculating the least significant difference (lsd) between subpopulations as lsd = z(α/[k(k−1)]) × √{[N(N+1)/12] × (1/ni + 1/nj)}, where z(α/[k(k−1)]) is the upper limit of the standard normal distribution at the adjusted significance level, k is the number of subpopulations, ni and nj are the numbers of individuals within subpopulations i and j, respectively, and N is the total number of individuals. Subsequently, a linear mixed model accounting for population structure and familial relatedness or kinship [57] was fit to the data in order to clarify a possible relationship between population structure and the distribution of Al tolerance in sorghum. Our model was y = Qν + Zu + e, where y is a vector of phenotypic observations, ν is a vector of fixed effects related to population structure and u is a vector of random effects related to familial relatedness. Z is an incidence matrix of 0s and 1s relating u to y, and Q is the population membership assignment matrix obtained with STRUCTURE. The variances for the random effects are Var(u) = 2K·Vg and Var(e) = R·VR, where K is a 254 × 254 kinship matrix based on the proportion-of-shared-alleles values [58], obtained with PowerMarker [53], R is a 254 × 254 matrix with the off-diagonal elements being zero and the diagonal elements being the reciprocal of the number of observations for which each phenotypic data point was obtained, Vg is the genetic variance, and VR is the residual variance. Analyses were performed in SAS with the code available at http://www.maizegenetics.net/unifiedmixed-model.
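The lsd threshold, in the form reconstructed above, is straightforward to compute; a minimal sketch with hypothetical subpopulation sizes:

```python
from math import sqrt
from scipy.stats import norm

def kruskal_wallis_lsd(n_i, n_j, n_total, k, alpha=0.05):
    """Least significant difference between mean ranks of two subpopulations
    after a Kruskal-Wallis test (Dunn-type, adjusted over k(k-1) comparisons)."""
    z = norm.ppf(1.0 - alpha / (k * (k - 1)))
    return z * sqrt((n_total * (n_total + 1) / 12.0) * (1.0 / n_i + 1.0 / n_j))

# e.g. two subpopulations of 40 and 35 accessions out of N = 254, with k = 6
print(kruskal_wallis_lsd(40, 35, 254, 6))
```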
Our complete model included the subpopulations Q1, Q2, Q3, Q4 and Q6. Q5 was found to comprise basically Al sensitive genotypes and was thus excluded from the model to remove dependency. Each of the subpopulations Q1, Q2, Q3, Q4 and Q6 was then sequentially removed following model selection based on the Bayesian Information Criterion (BIC, [59]).
Supporting Information
Figure S1 Posterior probability of data, Ln(D), for each number of subpopulations (k). Simulations were carried out with k ranging from 1 to 13. Ln(k) values are means of five independent runs for each k. (DOC)
Figure S2 Second-order rate of change of the probability of data (Δk, [56]) for different subpopulation numbers (k). (DOC)
Figure S3 Membership of individual sorghum accessions to subpopulations (Q). (A) k4Q1, guinea accessions from western Africa and guinea margaritiferum; k4Q2, durra accessions from central eastern Africa and from Asia, and bicolor and caudatum accessions from Asia; k4Q3, caudatum accessions from Africa, a group of transplanted caudatum and durra accessions from the Lake Chad region, and lines from the Embrapa collection and the USA; and k4Q4, kafir and guinea accessions from southern Africa. (B) k6Q1, guinea accessions from western Africa and guinea margaritiferum; k6Q2, caudatum accessions from Africa and a group of transplanted caudatum and durra accessions from the Lake Chad region; k6Q3, lines from the Embrapa collection and the US; k6Q4, kafir accessions from southern Africa; k6Q5, durra accessions from central eastern Africa and from Asia, and bicolor and caudatum accessions from Asia; k6Q6, guinea accessions from southern Africa and Asia. Membership coefficients for each subpopulation are shown in Table S6. Arrows indicate hierarchical subpopulation splits from k = 4 to k = 6. (DOC)
Table S1 Sorghum accessions evaluated in this study. Country of origin and racial classification according to [26], performed at ICRISAT and CIRAD as found in [34], are shown, followed by RNRG 3d, RNRG 5d, VRD and IRG evaluated at {27} μM Al3+ and at {39} and {60} μM Al3+ (except VRD). Values are means of two replications (7 plants per replication). Means followed by the same lower-case letters constitute homogeneous groups by the Scott-Knott test (P < 0.05). Al tolerant accessions are indicated, followed by each subpopulation (Q) defined with the software STRUCTURE. (XLS) | 2017-04-27T08:56:59.088Z | 2011-06-14T00:00:00.000 | {
"year": 2011,
"sha1": "3473fdd44e6e81d5d481bd2c65482a86e6a23294",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0020830&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2df2b12c72c61c369139a5447ab3b28b303bb6b5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
269625080 | pes2o/s2orc | v3-fos-license | Combined effects of border irrigation and super-absorbent polymers on enzyme activity and microbial diversity of poplar rhizosphere soil
Fast-growing poplar plantations are considered a great benefit to timber production, but water availability is a key factor limiting their growth and development, especially in arid and semi-arid ecosystems. Super-absorbent polymers facilitate more water retention in soil after rain or irrigation, and they are able to release water gradually during plant growth. This study aimed to examine the effects of reduced irrigation (60% and 30% of conventional border irrigation) co-applied with super-absorbent polymers (0 or 40 kg/ha) on root exudates, enzyme activities, and microbial functional diversity in rhizosphere soil, and on volume increments in poplar (Populus × euramericana cv. 'Neva'). The results showed that 60% border irrigation co-applied with super-absorbent polymers significantly increased the content of organic acids, amino acids and total sugars in the root exudates, and the activities of invertase, urease, dehydrogenase, and catalase in the rhizosphere soil, in comparison to conventional border irrigation without super-absorbent polymers. Meanwhile, this treatment also enhanced the average well-color development, Shannon index, and McIntosh index, but decreased the Simpson index. Additionally, the average volume growth rate and relative water content of leaves reached their maximum using 60% irrigation with super-absorbent polymers, which was significantly higher than in the other treatments. However, 30% irrigation with super-absorbent polymers had a smaller effect on rhizosphere soil and volume growth than 60% irrigation with super-absorbent polymers. Therefore, an appropriate water-saving irrigation measure (60% conventional border irrigation with super-absorbent polymers) can help to improve enzyme activities and microbial diversity in the rhizosphere soil while promoting the growth of poplar trees.
Introduction
Poplar, which is currently the major afforestation tree species of fast-growing and high-yield forests in China, plays an important role in wood production, urban greening, bioenergy, and desertification control [1,2]. Currently, flood irrigation is the main strategy in the water management of poplar plantations. Flood irrigation not only causes various environmental problems, such as soil compaction and poor permeability, but it also leads to substantial water waste, soil erosion, and nutritional losses, thus posing a contamination threat to underground water. Moreover, water resource shortages are a serious problem in the northern regions of China. The north contains 64% of the land area, but possesses only 20% of the country's water resources [3]. This situation is gradually becoming a major problem and is limiting agricultural and forestry development [4,5]. Furthermore, the problem will be amplified in the coming decades because of climate change, which may lead to reduced rainfall, rising temperatures, and increasing evapotranspiration [6,7]. Therefore, it is essential to improve soil moisture conservation, develop water-saving technologies, and increase water use efficiency to sustainably develop the Chinese agroforestry sector in the future. To do this, the traditional flood irrigation system urgently needs to be replaced by a water-saving strategy in poplar plantations via appropriate irrigation measures or new water retention materials.
Border irrigation is an irrigation measure that is low-cost, simple to operate, and easily popularized [8]. Our previous studies have indicated that border irrigation maintains yield but requires only 20% of the water provided by traditional flood irrigation, which is prone to poor soil permeability, soil water-fertilizer-air-heat disharmony, and serious water waste [9]. Lv et al. [10] found that border irrigation with a border width of 2.8 m and a border length of 60 m resulted in higher water use efficiency as well as lower nitrate-nitrogen content in the deep soil. Other research showed [11] that excessive border length and width lead to lower irrigation water use efficiency and may affect the quality of irrigation water in border irrigation. Applying soil additives to improve water retention is one proven, simple, and effective approach for further water savings [12]. During the last several decades, super-absorbent polymers have been extensively studied and shown to have special attributes due to their three-dimensional structure. When super-absorbent polymers are integrated into soil, they retain a large amount of water and nutrients, which are subsequently released according to the requirements of the growing plants. Consequently, super-absorbent polymers have been widely used in agroforestry water conservation and ecological restoration efforts [3,7].
Most studies on super-absorbent polymers in agroforestry have only evaluated and compared physicochemical characteristics [12] and the effects on soil characteristics and plant growth [5]. This research has indicated that the application of super-absorbent polymers can reduce soil bulk density and soil water permeability, and help to protect soil organic matter [7]. Additionally, the combined use of irrigation and super-absorbent polymers has been reported for corn and Chinese cabbage cultivation [4,13]. However, less information is available on the effect of border irrigation with super-absorbent polymer addition on the soil biological characteristics of poplar plantations, especially with respect to enzyme activity and microbial diversity in poplar rhizosphere soil. The rhizosphere is the soil compartment profoundly modified by the release of root exudates, consisting of low molecular weight organic acids, sugars, and more complex chemical molecules [14]. Root exudates enhance plant nutrient uptake, sustain a larger and more active microbial community, and influence the composition of the rhizosphere microbial communities [14]. Soil enzymatic activities are sensitive biomarkers of any natural and anthropogenic disturbance [15,16]. Soil enzymes are vital to the catalysis of several crucial reactions that are necessary for soil microorganisms, as well as for the stabilization of organic matter formation, organic waste decomposition, and nutrient cycling [17-19]. At the same time, soil microbial community functional diversity is one of the most important microbial parameters in soil and has been regarded as a possible indicator of soil quality [6]. Root exudates have an obvious influence on soil enzyme activities and microbial diversity [20]. Previous findings clearly showed that enzyme activities in the rhizosphere soil may well serve as indicators of microbial diversity [21]. Xu et al. [22] reported that the Shannon and Simpson indices of treatments with a water retention agent were significantly higher than those of CK (no application of water retention agents) at the anthesis and harvest stages, indicating that the application of a water retention agent could improve the soil microbial diversity of winter wheat in saline-alkali land. Yu et al. [23] found that soil conditioners made of water retention agents, organic fertilizers and microbial inoculants could effectively increase the activities of soil catalase, cellulase, sucrase, urease and alkaline phosphatase, and enhance the abundance and diversity of soil bacterial communities. Therefore, soil enzyme activities and microbial diversity can represent important indexes for evaluating the sustainable development of forest land, and can be frequently measured for the purpose of providing immediate and accurate information about small changes in soils [24].
In our study, we explored the effect of border irrigation co-applied with super-absorbent polymers on the root exudates, enzyme activity, and microbial community functional diversity of rhizosphere soil, and on poplar growth in northern China. We hypothesized that border irrigation co-applied with super-absorbent polymers would increase enzyme activities and microbial diversity in rhizosphere soil and result in increased tree growth. The purpose was to determine the feasibility of implementing border irrigation and super-absorbent polymers for increasing the growth of poplar, with the aim of providing a better way to foster water saving and high yield in poplar plantations.
Ethics statement
This research did not involve human or other animal subjects. For soil sample collections, we collected the minimum number of specimens necessary to ensure that appropriate vouchers were obtained. The field studies did not involve endangered or protected species. Permission to work in a poplar plantation located in Jinan City was obtained through a cooperative agreement between the Shandong Academy of Forestry and the Jinan State-owned Nursery.
Site description and plant material
A field experiment was carried out in a poplar plantation located in a state-owned nursery in the northern suburb of the city of Jinan, Shandong province, north China (36°40′ N, 117°00′ E). The site has a warm temperate continental monsoon climate with four distinct seasons; the average temperature is 14°C and the average rainfall is 650-700 mm. The main physicochemical characteristics of the 0-40 cm soil layer at the research site are shown in Table 1.
Super-absorbent polymers, a cross-linked copolymer of acrylamide and potassium acrylate, were provided by Beijing Hanlisorb Polywater Hi-Tech Co., Ltd. Their characteristics are shown in Table 2.
Regular amounts of urea, superphosphate, and potassium sulfate were applied: 197.3 kg/ha of N, 67.7 kg/ha of P2O5, and 50.5 kg/ha of K2O, as first applied in 2021; these fertilizer dosages increased by 10% each year thereafter as the stand age increased. The poplar 'I-107' (Populus × euramericana cv. 'Neva') had been planted five years earlier using a distance of 5 m between rows and 2.5 m within rows. The experimental trees were uniform, and the average (± standard deviation, SD) tree height and stem DBH (diameter at breast height of 1.3 m) were 12.39 ± 0.46 m and 11.92 ± 0.43 cm, respectively. These poplar trees are managed carefully and grown on a short rotation (7-8 years), mainly for pulpwood production.
Experimental design and irrigation treatment
The experiment used a randomized complete block design, with five treatments and three replications. Fifteen plots were established, and every replication for each treatment included a plot with 30 trees arrayed in five rows. The innermost 12 trees, which were identified as representative of the plot mean, were used for detailed measurements. Immediately after the leaves had fully unfolded, five treatments were applied to the poplar trees at the start of the growing season on April 10, 2021. The treatments were as follows: (1) CK (conventional border irrigation), based on the horizontal distribution of the poplar root system [25]; specifically, the border width for irrigation was set at 1.0 m and the irrigation quota was 720 m³/ha, with 120 m³/ha per month from April to September; (2) BI60, which amounted to 60% of the conventional border irrigation quantity; (3) BI30, which amounted to 30% of the conventional border irrigation quantity; (4) BI60+SAP, which was BI60 applied together with super-absorbent polymers at 40 kg/ha; and (5) BI30+SAP, which was BI30 applied together with super-absorbent polymers at 40 kg/ha. At the first irrigation, a circular ditch with a depth of 40 cm was prepared 60 cm away from the tree trunk. The super-absorbent polymers (50 g/tree) were mixed with the soil at 1:10 (v/v), put into the soil layer at a depth of 20-40 cm and then covered with surface soil. The irrigation time was determined based on conventional border irrigation methods. For each irrigation, the reduced irrigation treatments were controlled by their corresponding proportion of irrigation time, and the irrigation amount was measured to the nearest 0.001 m³ with a water meter. In addition, the super-absorbent polymers were only applied once, in 2021, and the amount and method of irrigation in 2022 and 2023 were the same as those applied in 2021.
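To make the water budget concrete, the per-hectare quotas above convert to per-tree amounts as sketched below; the 800 trees/ha figure is inferred from the 5 m × 2.5 m spacing rather than stated explicitly in the design.

```python
TREES_PER_HA = 10_000 / (5.0 * 2.5)  # 5 m between rows, 2.5 m within rows -> 800

def per_tree_monthly_litres(quota_m3_per_ha=720.0, months=6, fraction=1.0):
    """Monthly irrigation per tree (litres) for a given fraction of the CK quota."""
    return quota_m3_per_ha * fraction * 1000.0 / months / TREES_PER_HA

for label, f in [("CK", 1.0), ("BI60", 0.6), ("BI30", 0.3)]:
    print(label, per_tree_monthly_litres(fraction=f), "L/tree/month")
# CK 150.0, BI60 90.0, BI30 45.0 L/tree/month
```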
Rhizosphere soil sampling and analysis
During late October 2023, rhizosphere soil at a distance of 60 cm from the trunk was collected following the procedures described by Wang and Zabowski [26]. Fifteen soil samples were collected from the innermost 12 trees in every plot, then mixed evenly as a composite soil sample with three replications. Root exudates in the rhizosphere soil were measured according to the method described by Klein et al. [27]. Briefly, the soil was shaken off the roots at a distance of 60 cm from the trunk, and the roots were then carefully washed free of fritted clay with 400 ml of distilled water four separate times. The wash solution was filtered using a 0.45 μm membrane filter, and the filtered exudates were dried by rotary evaporation at 40°C and dissolved in 30 ml of distilled water and 30 μL of chloroform. The concentrated filtrates were separated into organic acid, sugar and amino acid fractions. Eight ml of the concentrate were passed successively through cation and anion exchange resins. The amino acids were eluted from a Dowex cation exchange column with 2 N NH4OH, the carboxylic acids were eluted from a Dowex 1 × 8 anion exchange column with 5 N formic acid, and the neutral solution contained the sugars. Soil enzymatic activities were determined in triplicate air-dried samples according to the method developed by Guan [15]. Briefly, invertase activity was measured using 8% sucrose as a substrate, incubated at 37°C for 24 h, with the produced glucose determined using a colorimetric method. Urease activity was measured using 10% urea solution as substrate, and the soil mixture was incubated at 37°C for 24 h; the produced NH3-N was determined with the colorimetric method. Dehydrogenase activity was determined using 1% triphenyltetrazolium chloride (TTC) as a substrate, incubated in the dark at 37°C for 24 h, with the produced triphenylformazan (TPF) measured by spectrophotometry. Catalase activity was determined using 0.3% H2O2 as a substrate, shaken for 20 min, and the filtrate was titrated with 0.1 M KMnO4. Soil microbial communities are spatially dependent and highly responsive to environmental changes [28], and the diversity of the soil microbial community is closely related to the function and structure of the ecosystem [29]. Functional diversity of the soil microbial community was determined using the method of Garland and Mills [30]. The McIntosh index, Simpson index, and Shannon index were used to describe soil microbial diversity; in the standard forms used with Biolog data, these are H = −Σ Pi ln Pi (Shannon), D = 1 − Σ Pi² (Simpson), and U = √(Σ Si²) (McIntosh), where Pi is the absorbance of each reaction well minus the absorbance value of the control well, divided by the summed color absorbance of all 31 wells, and Si is the ratio of the activity on each substrate (ODi) divided by the sum of activities on all substrates (ΣODi) [31].
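A minimal sketch of the diversity calculations from one Biolog plate reading, following the in-text definitions (formulas as reconstructed above; conventions for the McIntosh index vary across papers):

```python
import numpy as np

def biolog_diversity(od_wells, od_control):
    """AWCD, Shannon, Simpson and McIntosh indices from one EcoPlate reading.

    od_wells  : absorbances of the 31 substrate wells
    od_control: absorbance of the water control well
    """
    net = np.clip(np.asarray(od_wells, dtype=float) - od_control, 0.0, None)
    awcd = net.mean()         # average well-color development
    p = net / net.sum()       # P_i (= S_i) as defined in the text
    p_nz = p[p > 0]
    return {
        "AWCD": float(awcd),
        "Shannon": float(-np.sum(p_nz * np.log(p_nz))),
        "Simpson": float(1.0 - np.sum(p ** 2)),
        "McIntosh": float(np.sqrt(np.sum(p ** 2))),
    }
```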
Relative water content and volume growth rate
During mid-September 2023, the relative water content in leaves was measured according to Eneji et al. [32], using samples selected from mature leaves on the sunny side of every tree. Individual tree height was measured by the tangent method, and tree DBH with a ruler with 0.5-mm accuracy, at the start of the experiment (April 10, 2021) and at the end of the short rotation period (October 27, 2023). Tree volume was calculated according to Eq (4) [25], where h and d stand for tree height (m) and DBH (cm), respectively. Then, the average volume growth rate was calculated using Eq (5) [33], where Pv stands for the average growth rate (%) in tree volume, n stands for the interval in years between two measurements, and V1 and V2 stand for the tree volumes before and after n years (m³), respectively.
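As an illustration of the growth-rate calculation, the sketch below assumes Pressler's formula for Pv, a form widely used in poplar volume studies; this is an assumption for illustration and not necessarily the authors' exact Eq (5).

```python
def average_volume_growth_rate(v1, v2, n_years):
    """Assumed Pressler form: P_v = (V2 - V1) / (V2 + V1) * 200 / n  (% per year)."""
    return (v2 - v1) / (v2 + v1) * 200.0 / n_years

# e.g. a stem growing from 0.06 m^3 to 0.12 m^3 over 2.5 years
print(average_volume_growth_rate(0.06, 0.12, 2.5))  # ~26.7 %/year
```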
Statistical analysis
The data were analyzed as a completely randomized design. An analysis of variance (ANOVA) evaluated the effects of the five irrigation treatments on the root exudates, enzyme activities, and microbial functional diversity in rhizosphere soil, and on poplar growth. When the ANOVA revealed a significant difference among the treatments, the least significant difference test was used to detect differences between the individual treatment-level means. All statistical analyses were performed at a significance level of P < 0.05. ANOVA and multiple comparisons were performed using SPSS software (version 23.0; SPSS Inc., Chicago, Illinois, USA).
Root exudates
With the application of super-absorbent polymers, the organic acid content clearly increased, reaching a maximum in the BI60+SAP treatment and increasing by 13.40%, 42.26%, 63.10%, and 18.15% in comparison with the CK, BI60, BI30 and BI30+SAP treatments, respectively (Table 3). Total sugar content in the BI60+SAP treatment significantly increased compared to the CK, BI60, BI30 and BI30+SAP treatments. Additionally, the amino acid content of BI60+SAP was also markedly higher than that of the other treatments. The results showed that the application of super-absorbent polymers significantly increased organic acids, total sugars and amino acids in the exudates of poplar rhizosphere soil, with the 60% conventional irrigation co-applied with super-absorbent polymers being the most remarkable.
Enzyme activities in the rhizosphere soil
The activities of invertase, urease, dehydrogenase, and catalase in the BI60 and BI30 treatments were markedly lower than those of CK, whereas the corresponding enzyme activities in the BI60+SAP and BI30+SAP treatments were increased when super-absorbent polymers were co-applied (Table 4). Furthermore, the activities of all four enzymes in the BI60+SAP treatment were evidently higher than those in the other treatments, exhibiting increases of 21.52%, 7.65%, 18.07%, and 20.87% in the activities of invertase, urease, dehydrogenase and catalase compared with those of CK, respectively. Thus, reduced irrigation resulted in decreased enzyme activities in the rhizosphere soil of the poplar plantation; however, the 60% irrigation coupled with super-absorbent polymer application significantly increased soil enzyme activities.
Microbial community functional diversity
The microbial diversity in the rhizosphere soil of poplar under the different irrigation treatments was expressed by functional diversity indices. The McIntosh index of the various treatments exhibited a variation pattern of BI60+SAP > CK > BI30+SAP > BI60 > BI30, and the pairwise differences among the treatments were all significant (Fig 1). Moreover, the Simpson index showed a different pattern from both the Shannon and McIntosh indexes, such that the Simpson index of BI60+SAP was evidently decreased in comparison to the other treatments. These results indicated that the 60% conventional irrigation co-applied with super-absorbent polymers greatly increased the Shannon and McIntosh indexes of poplar rhizosphere soil, but significantly reduced the Simpson index value.
Volume growth rate and relative water content
The rank order of average volume growth rate in the five treatments was BI60+SAP > CK ≥ BI30+SAP > BI60 > BI30 (Fig 2). The average growth rate in the BI60+SAP treatment was the maximum, which was significantly increased compared with the other treatments. Although similar to CK, the average growth rate of the BI30+SAP treatment was evidently higher than that of either the BI60 or BI30 treatment, with BI60 exhibiting a higher growth rate than BI30. Additionally, the relative water content in leaves of the different treatments exhibited the same variation trend as the average volume growth rate, and the relative water content in the BI60+SAP treatment was higher than that in the other treatments. The data suggested that the combined use of reduced irrigation and super-absorbent polymers significantly facilitated the growth of poplar trees in the studied plantation.
Discussion
The tree Populus × euramericana cv. 'Neva' is known as one of the most suitable species for wood production and afforestation in the arid and semi-arid districts of China [25,34]. In this environment, water availability is the main factor suppressing plant growth and tree production. Since water is expected to become scarcer in the near future, water competition between plants and humans may make the situation even worse [12,35]. Super-absorbent polymers have been successfully used as water-saving materials in horticulture and agriculture [4,7,13], but there are few reports of super-absorbent polymer use in poplar plantations. The incorporation of super-absorbent polymers into soil intensifies the retention of a great deal of water and nutrients that are released slowly, as required by the plant, thereby improving plant growth under conditions of limited water supply [3,31], and also ameliorating the biological characteristics of the rhizosphere soil [7]. In the process of plant growth, the root system absorbs water and nutrients from the soil and also releases inorganic ions, secretes protons, and generates plentiful organic matter, all of which are added to the growth medium (called 'root exudates') and may have immediate or longer-term effects on organic matter accumulation and nutrient cycling in the soil-plant system [36]. In the present study, the 60% conventional border irrigation combined with super-absorbent polymers significantly increased the content of organic acids, amino acids, and total sugars in the root exudates. This result is likely because the applied super-absorbent polymers can lower soil bulk density, thereby increasing soil porosity and improving soil permeability [12,37], which produces an ideal physical environment for tree root growth and thus facilitates the enhancement of root activity in poplar.
Many studies have used soil enzymes as indicators of soil microbial activity and fertility [38]. In our study, the activities of invertase, urease, dehydrogenase and catalase under reduced irrigation were dramatically decreased. Similar results were also reported by Bastida et al. [6], who found that restricted irrigation had a negative impact on soil enzyme activities. In this experiment, the application of super-absorbent polymers likely compensated for the negative effects associated with restricted irrigation and was instead associated with an increase in soil enzyme activities under 60% irrigation. The positive effects of super-absorbent polymers on soil enzyme activity may be due to improvements in soil structure or increases in the amount of water retained near the roots, which may have enhanced root activity and stimulated the release of more organic acids, amino acids, and total sugars. This increase in root exudates is supported by our results (Table 3) and may have facilitated increases in microbial biomass and activity, changes in the decomposition and mineralization rates of organic matter in the soil, and corresponding increases in enzyme activity [4]. Gianfreda [39] found that higher enzyme activity can be interpreted as a greater functional diversity of the microbial community in the rhizosphere soil.
In this study, the McIntosh index indicates the uniformity of the soil microbial community. The Shannon index provides information describing the distribution or spread of carbon source utilization by the microbial community [40]. Kennedy and Smith [41] also identified the Shannon index as a way of quantifying the evenness, richness, and diversity of the soil microbial community, whereas Staddon et al. [42] argued that the Shannon index is influenced more by species richness. In this study, the Shannon index for the 60% irrigation with super-absorbent polymers was obviously higher than that of the other treatments. This result is probably because the application of super-absorbent polymers can alter the three soil phases of solid, liquid, and gas, eventually forming a honeycomb-like structure, which increases the water-retaining property and nutrient-supplying capability of the soil to facilitate increased root activity; this, in turn, strengthens the connection between root metabolism and rhizosphere microorganisms [43]. Furthermore, such applications may significantly increase root exudates, which are the main source of carbon and energy for microorganisms in the rhizosphere soil. In addition, they can affect the solubility and effectiveness of rhizosphere elements by changing the rhizosphere pH value [12], oxidation-reduction potential and chelation, which together influence microbial metabolism and microbial functional diversity in rhizosphere soil in a direct or indirect manner [22]. The Simpson index is weighted towards the abundance of the most common species [42,44,45]. Our results demonstrated that the variation pattern of the Simpson index was opposite to that of the Shannon and McIntosh indexes, whereas Zhong and Cai [45] found that the pattern of variation of the Simpson index in a paddy soil was the same as that of the Shannon index in a long-term experiment. The differences between these studies concerning the relationship of the Simpson and Shannon indexes can be attributed to different soil physicochemical properties, plant species, experimental periods, and other factors [46,47]. Therefore, among the different irrigation measures, the 60% conventional border irrigation with super-absorbent polymers induced the highest increase in microbial diversity of poplar rhizosphere soil. Marinari et al. [48] observed that a greater functional diversity of microbial communities in the rhizosphere soil will result in elevated activities of many enzymes. Thus, this improvement of the microflora might also be related to the increase in enzyme activities found in the rhizosphere soil.
In the present study, the average growth rate in volume and the relative water content in leaves following the 60% irrigation with super-absorbent polymers were significantly higher than those obtained in the other treatments. This result may be closely related to the improved root exudate content, increased microbial diversity, or elevated soil enzyme activities, all of which could augment soil fertility. Furthermore, a combined water-fertilizer micro-domain is introduced by super-absorbent polymers, which considerably improves the holding capacity for water and fertilizer to facilitate the synthesis of more biomass via photosynthesis [3,12], further suggesting that the improvement of the micro-ecological environment in the vital root zone is beneficial to tree growth.
Moreover, we demonstrated that the effect of the 30% irrigation combined with super-absorbent polymers on soil enzyme activities, microbial diversity, and tree volume growth was significantly decreased compared with that of the 60% irrigation coupled with super-absorbent polymers. The results further indicated that super-absorbent polymers cannot produce water, and thus a sufficient water supply is still necessary for poplar management. This agreed with the findings of Bai et al. [12] and Han et al. [37], who reported that super-absorbent polymers require a minimum amount of water in order to provide benefits, suggesting that reaching a minimum threshold of water availability is a strong factor influencing both the microbial community and the efficacy of super-absorbent polymers. Our results also suggested that an appropriate reduction in the irrigation volume co-applied with super-absorbent polymers had a noteworthy effect on the yield increase of poplar. These results highlighted the connections between soil enzyme activities and microbial diversity and poplar productivity. Furthermore, the present study also underlined how a one-time application of super-absorbent polymers could guarantee a significant yield increase, at least in the following three years. We conclude that super-absorbent polymers are of great importance to both the water-saving strategy and high-yield cultivation of a poplar plantation, which also offers the advantage of saving needless labor and time.
Additionally, in the actual use of super-absorbent polymers, a variety of factors should be taken into account, such as the type of super-absorbent polymer, the application method, the application amount, soil texture, and water and fertilizer conditions [49]. At the same time, most super-absorbent polymers are synthetic polymers, which are difficult to degrade or only partially degrade in the soil, and the polymers remaining in the soil are prone to causing soil environmental pollution [50].
Fig 1. Effects of different irrigation treatments on the diversity index of microbes in the rhizosphere soil of a poplar plantation. Bars are means, and error bars are standard deviations (n = 3). Means followed by the same lowercase letter within each column were not significantly different among treatments (P > 0.05). https://doi.org/10.1371/journal.pone.0303096.g001
Fig 2. Effects of different irrigation treatments on the average growth rate in volume and relative water content in leaves of a poplar plantation. Bars are means, and error bars are standard deviations (n = 3). Means followed by the same lowercase letter within each column were not significantly different among treatments (P > 0.05). https://doi.org/10.1371/journal.pone.0303096.g002
Table 2. Selected characteristics of the super-absorbent polymers (columns: main materials; type; particle size range, mm; color; water absorbency, g g−1; pH; electrical conductivity, μS cm−1).
Table 3. Effect of different irrigation treatments on the root exudate contents in the rhizosphere soil of a poplar plantation (mean ± SD).
Note: Means followed by the same lowercase letter within each column were not significantly different among treatments (P > 0.05). https://doi.org/10.1371/journal.pone.0303096.t003 | 2024-05-09T05:10:40.163Z | 2024-05-07T00:00:00.000 | {
"year": 2024,
"sha1": "984c2190d3c5765e3e046901aa1aa3ff5387a17d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fe2d4373c70ac3849b501c4b1084b208104a1dd3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256680722 | pes2o/s2orc | v3-fos-license | A phase II trial of an alternative schedule of palbociclib and embedded serum TK1 analysis
Palbociclib 3-weeks-on/1-week-off, combined with hormonal therapy, is approved for hormone receptor positive (HR+)/HER2-negative (HER2−) advanced/metastatic breast cancer (MBC). Neutropenia is the most frequent adverse event (AE). We aim to determine whether an alternative 5-days-on/2-days-off weekly schedule reduces the incidence of grade 3 and above neutropenia (G3 + ANC). In this single-arm phase II trial, patients with HR+/HER2− MBC received palbociclib 125 mg, 5-days-on/2-days-off, plus letrozole or fulvestrant per physician's choice, on a 28-day cycle (C), as their first- or second-line treatment. The primary endpoint was G3 + ANC in the first 29 days (C1). Secondary endpoints included AEs, efficacy, and serum thymidine kinase 1 (sTK1) activity. At data cutoff, fifty-four patients had received a median of 13 cycles (range 2.6–43.5). The rate of G3 + ANC was 21.3% (95% CI: 11.2–36.1%) without G4 in C1, and 40.7% (95% CI: 27.9–54.9%), including 38.9% G3 and 1.8% G4, in all cycles. The clinical benefit rate was 80.4% (95% CI: 66.5–89.7%). The median progression-free survival (mPFS) (95% CI) was 19.75 (12.11–34.89), 33.5 (17.25–not reached [NR]), and 11.96 (10.43–NR) months in the overall, endocrine-sensitive, and endocrine-resistant populations, respectively. High sTK1 at baseline, C1 day 15 (C1D15), and C2D1 was independently prognostic for shorter PFS (p = 9.91 × 10−4, 0.001, 0.007, respectively). sTK1 decreased on C1D15 (p = 4.03 × 10−7), indicating target inhibition. A rise in sTK1 predicted progression, with a median lead time of 59.5 (inter-quartile range: −206.25–0) days. Palbociclib, 5-days-on/2-days-off weekly, met its primary endpoint with reduced G3 + ANC, without compromising efficacy. sTK1 is prognostic and shows promise in monitoring the palbociclib response. ClinicalTrials.gov#: NCT03007979.
INTRODUCTION
Hormone receptor positive (HR+) and human epidermal growth factor receptor 2 negative (HER2−) breast cancer accounts for 70% of breast cancer diagnoses and is a leading cause of cancer death in women 1,2. The discovery of cyclin D/cyclin-dependent kinase 4/6 (CDK4/6) as a key downstream target of estrogen receptor (ER) signaling and of endocrine resistance mechanisms has led to the development of CDK4/6 inhibitors for the treatment of HR+/HER2− breast cancer 3,4. CDK4/6 inhibitors, including palbociclib, ribociclib, and abemaciclib, have gained FDA approval based on the significant improvement in progression-free survival (PFS) when added to endocrine therapy in patients with advanced disease as front-line therapy [5][6][7][8][9][10][11] or following disease progression on prior endocrine therapy [12][13][14]. Overall survival (OS) benefit has also been observed in several studies [15][16][17]. These agents have now become the standard of care in combination with an endocrine therapy partner for HR+/HER2− metastatic breast cancer (MBC). However, treatment-related neutropenia is a common adverse event (AE), leading to dose interruption, reduction, and at times discontinuation. Palbociclib (Ibrance, Pfizer) is the first CDK4/6 inhibitor approved in combination with an aromatase inhibitor as first-line therapy, and with fulvestrant following prior hormonal therapy, based on results from the PALOMA trials 5,9,10,13. Neutropenia was the most frequent high-grade (G) AE related to palbociclib in PALOMA-2 (79.5% all grades, 56.1% G3, 10.4% G4) and PALOMA-3 (78.8% all grades, 53.3% G3, 8.7% G4) 9,13. Febrile neutropenia occurred in 1.8%, while dose reduction occurred in over a third of patients who received palbociclib across PALOMA-2 and PALOMA-3 9,13,18. In addition, with the half-life of palbociclib being ~27 h, recovery of Rb phosphorylation and cell proliferation during the off-treatment week is a concern. In a longitudinal biomarker study of palbociclib plus letrozole, recovery of tissue Rb phosphorylation and Ki67 levels to baseline was observed on day 3 and day 4-5 of the off week, respectively 19. We therefore proposed a 5-days-on/2-days-off weekly schedule to allow bone marrow recovery during the 2 off days while avoiding the week-long break. We hypothesize that this alternative schedule is more tolerable, with less frequent high-grade (G3+) neutropenia and fewer dose interruptions/reductions, compared to the historical data with the 3-weeks-on/1-week-off schedule. Based on our previous study supporting serum thymidine kinase 1 (sTK1), an E2F-dependent enzyme critical for DNA synthesis, as a pharmacodynamic indicator of CDK4/6 inhibition 20, we assessed sTK1 dynamics for target inhibition and examined its potential prognostic value and utility in disease monitoring in this study.
DISCUSSION
CDK4/6 inhibitors represent a major advance in the treatment of HR+/HER2− MBC. However, a one-week break following 3 weeks of administration is required for both palbociclib and ribociclib due to treatment-induced neutropenia 22,23. To improve tolerability and to avoid the 1-week break, which has been shown to lead to the recovery of target inhibition in previous studies 19, we conducted this trial of an alternative schedule; the results demonstrated improved tolerability of palbociclib administered in a 5-days-on/2-days-off weekly schedule compared to historical data from palbociclib trials 9,13,18.
To assess target inhibition, we examined serial sTK1 activity as a surrogate pharmacodynamic marker in this trial, and demonstrated a significant reduction of sTK1 to an undetectable level in 78.2% of patients at C1D15. This result mirrors the impressive reduction in sTK1 observed 2 weeks after initiation of palbociclib in our prior study of patients with early-stage HR+/HER2− breast cancer receiving neoadjuvant palbociclib and anastrozole 20. The significant reduction of sTK1 at C1D15 and C2D1 indicates that the 5-days-on/2-days-off schedule achieved effective target inhibition and provides reassurance regarding the 2-day break each week.
This alternative schedule offers an advantage in avoiding the 1-week break of the 3-weeks-on/1-week-off schedule, which potentially compromises the efficacy of CDK4/6 inhibitors because of their short terminal half-life (~26 h for palbociclib 23; 32.6 h for ribociclib 22). In a biomarker study that assessed the longitudinal effect of palbociclib, following a single dose or 3-week administration, on Ki67 and pRb expression in skin biopsies and on sTK1 in blood samples collected from 26 patients with HR+/HER2− MBC, recovery of pRb, Ki67, and sTK1 was observed during the off-treatment week, returning to baseline on day 2-3 for pRb and day 3-4 for Ki67 and sTK1 19. The rebound of sTK1 at the end of C1 therapy with the 3-weeks-on/1-week-off schedule has also been demonstrated. Similarly, the mPFS of 11.96 (10.43–NR) months observed in the endocrine-resistant population was comparable to that reported in PALOMA-3 (mPFS 11 months in the updated analysis) 25.
In addition, our study demonstrated the potential utility of sTK1 activity at baseline and on-treatment as a prognostic marker and in monitoring disease status for patients with MBC receiving a CDK4/6 inhibitor. High sTK1 at baseline or early on-treatment time point (C1D15 or C2D1), especially at C1D15, had high degrees of accuracy in predicting progression within 6 months (84% accuracy for C1D15). In addition, we demonstrated that a rise in sTK1 predicted subsequent clinical/RECIST progression.
Our data are consistent with previous studies demonstrating that a higher baseline sTK1 is associated with a shorter time to progression in patients with advanced HR+/HER2− breast cancer receiving endocrine therapy, and that a rise in sTK1 on-therapy from baseline is associated with treatment resistance [26][27][28][29]. Few studies have evaluated sTK1 activity in patients receiving standard dosing palbociclib in combination with endocrine therapy. McCartney et al. analyzed serial plasma TK1 activity at baseline (T0), after 1 cycle (T1), and at progression (T2) in 46 patients with HR+ MBC treated with palbociclib within the TREnd trial 30. The median TK1 activity was 75 Du/L at T0, decreased to 35 Du/L at T1, and increased to 251 Du/L at progression 30. Increasing TK1 at T1 correlated with a worse outcome than decreased/stable TK1 (n = 33; mPFS 3.0 vs 9.0 months; p = 0.002), similar to our study 30. Although TK1 above the median at T2 was associated with worse outcomes on post-study therapy, baseline TK1 was not prognostic 30. In addition, in vitro studies demonstrated that TK1 reduction occurred in palbociclib-sensitive but not resistant cells 30. Cabel et al. 31 reported a study that assessed plasma TK1 activity at baseline and at a 4-week time point in 103 patients with ER+/HER2− MBC treated with palbociclib and endocrine therapy, which demonstrated that baseline TK1 activity (using the median value as the cutoff) was an independent prognostic factor for both PFS and OS, but adding TK1 activity at 4 weeks did not further increase survival prediction. More recently, Malorni et al. assessed TK1 at pre-treatment, C1D15, and D28 in patients with endocrine-resistant luminal MBC receiving palbociclib and fulvestrant. In this study, TK1 was significantly suppressed on C1D15, falling below 20 Du/L in 90/108 (83%) patients, similar to our observation with the 5-days-on/2-days-off schedule. However, on D28, a TK1 rebound was observed in most patients in the Malorni study, with TK1 < 20 Du/L in only 29% of patients, whereas 28 of 44 (63.6%) had TK1 < 20 Du/L on C2D1 in our study. Similar to our study, at each time point, higher TK1 was significantly and consistently associated with shorter PFS, with C1D15 being most prognostic. However, none of the prior studies assessed longitudinal TK1 activity until progression. Our study is the first to show that increases in TK1 occur earlier than clinical progression, indicating detection of subclinical progression.
Our study is limited by the small sample size and the nonrandomized single-arm trial design. The data regarding sTK1 at baseline and early on-treatment time points in predicting response to CDK4/6 inhibitors are intriguing, as there are no predictive biomarkers currently available for CDK4/6 inhibitors. Future prospective trials are warranted to confirm our results on the safety and efficacy of the 5-days-on/2-days-off weekly schedule and to validate the clinical utility of sTK1 in guiding the management of patients with HR+/HER2− MBC.
In conclusion, this single-arm phase II trial of palbociclib administered on an alternative 5-days-on/2-days-off weekly schedule, in combination with either letrozole or fulvestrant, met the predefined primary endpoint in reducing the incidence of high-grade neutropenia without compromising efficacy. While randomized trials are needed to confirm this finding, this alternative schedule provides an option for patients having difficulty tolerating the standard 3-weeks-on/1-week-off schedule and may help avoid drug discontinuation due to neutropenia. Our data also demonstrate that sTK1 activity is a promising biomarker for prognosis and disease monitoring in patients receiving CDK4/6 inhibitors.

sTK1 activity was assessed as previously described 20,32 for all patients with available samples at baseline, C1D15, C2D1, and C4D1, as well as every 3 cycles up to progression in those who had progressed by the time of data cutoff. TK1 activity was determined using a refined ELISA-based method (DiviTum®) according to the manufacturer's instructions (www.biovica.com) and was performed at the Biovica laboratory in Uppsala, Sweden, with laboratory investigators blinded to patient data.
Outcomes
The primary endpoint was the rate of G3+ ANC between C1D1 and C2D1 (C1D1-29). The sample size of 47 provided 90% power, based on a one-sample binomial exact test at alpha = 5%, to test the one-sided null hypothesis of a G3+ ANC rate >62% (an estimate based on incidences from prior phase III trials of palbociclib 10,13 and on the observation that neutropenia occurs early in the course of therapy 33) versus the alternative of <40%. If G3+ ANC was observed in ≤23 patients, the 5-days-on/2-days-off schedule would be deemed to have less neutropenia than the standard schedule. A subsequent pooled analysis of safety data from three randomized trials (PALOMA-1, 2, and 3) indicated that the rate of G3+ neutropenia in C1 in the palbociclib arm was 44.7% 18, lower than we had originally expected. A post hoc power calculation was therefore performed, testing the null hypothesis H0: G3+ neutropenia rate >44.7% against the observed rate of 21.3% (10 out of 47, including occurrences on C2D1 beyond C1D1-28) in this trial. With N = 47, the post hoc power is 94.04% based on a one-sided binomial exact test at the 5% alpha level. Secondary endpoints included the rate of G3+ ANC in all cycles; palbociclib dose intensity, reduction, interruption, and discontinuation; AEs; PFS for the overall population and for the endocrine-sensitive or -resistant populations as defined by the ESMO guideline 21; objective response rate (ORR: CR + PR (complete and partial responses)); and clinical benefit rate (CBR: CR + PR + stable disease (SD) ≥ 24 weeks by RECIST 1.1). Other endpoints included sTK1 at baseline, C1D15, C2D1, and progression, in relation to PFS and CBR, as well as the lead time from sTK1 rise during therapy to disease progression defined by RECIST.
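For illustration, the post hoc power can be reproduced with a short exact-binomial calculation; this is a minimal sketch assuming SciPy, and the critical count and power may differ slightly from those of the trial's own software:

    from scipy.stats import binom

    n, alpha = 47, 0.05
    p0, p1 = 0.447, 0.213  # null rate (pooled PALOMA data) and observed rate

    # One-sided exact test: reject H0 (rate > p0) when X <= c, where c is the
    # largest count keeping the type I error at or below alpha.
    c = max(k for k in range(n + 1) if binom.cdf(k, n, p0) <= alpha)

    # Power: probability of rejection when the true rate is p1.
    power = binom.cdf(c, n, p1)
    print(f"critical count = {c}, power = {power:.4f}")  # ~0.94, cf. the reported 94.04%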
Statistical analysis
Patient characteristics and AEs were summarized by descriptive statistics. AE rate, ORR, and CBR were estimated with accompanying 95% confidence intervals (CI). PFS was defined from the date of treatment initiation to the off-study date (due to radiographic progression, clinical deterioration, investigator decision, or AE) or date of death, and to the date of the last imaging scan demonstrating no progression if patients had no events. Survival endpoints were analyzed by the Kaplan-Meier (KM) method, and survival differences between patient groups of interest were compared by the log-rank test. A Cox proportional hazards model was applied to estimate the hazard ratio (HR) with 95% CI. sTK1 ≤ 20 Du/L was deemed undetectable and was replaced with 19 Du/L for analysis. sTK1 was compared between time points by the Wilcoxon signed-rank test, with p values corrected by the Benjamini-Hochberg method to control the false discovery rate (FDR). Baseline and on-treatment sTK1 were compared between patients who had clinical benefit (CB) or progressive disease (PD) versus not by the Wilcoxon rank-sum test. sTK1 was dichotomized to high versus low by a pre-defined cutoff of 200 Du/L at baseline 28,29 and 20 Du/L at on-treatment time points 24, and was then analyzed in relation to PFS by the KM method and log-rank test. Diagnostic test operating characteristics, including sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) of the dichotomized sTK1 at different time points, were derived for the binary outcomes (CB vs non-CB; PD vs non-PD).
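As an illustration of the dichotomized sTK1 analysis, the diagnostic operating characteristics can be computed as follows; this is a minimal sketch with entirely hypothetical values (the 20 Du/L cutoff mirrors the one above, but the data and function names are illustrative only):

    import numpy as np

    def diagnostics(tk1, event, cutoff=20.0):
        # Sensitivity/specificity/PPV/NPV/accuracy of sTK1 > cutoff for
        # predicting a binary outcome (e.g., PD within 6 months).
        tk1 = np.asarray(tk1, dtype=float)
        event = np.asarray(event, dtype=bool)
        pred = tk1 > cutoff
        tp = np.sum(pred & event)
        fp = np.sum(pred & ~event)
        fn = np.sum(~pred & event)
        tn = np.sum(~pred & ~event)
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / len(tk1),
        }

    # Hypothetical C1D15 sTK1 values (Du/L; <=20 Du/L replaced by 19, as above)
    tk1_c1d15 = [19, 19, 45, 19, 120, 19, 60, 19, 250, 19]
    pd_within_6mo = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0]
    print(diagnostics(tk1_c1d15, pd_within_6mo))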
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
DATA AVAILABILITY
The datasets used and analyzed during the current study are available from the corresponding authors on reasonable request. | 2023-02-09T16:21:29.259Z | 2022-03-21T00:00:00.000 | {
"year": 2022,
"sha1": "eac1253cb7b3572626677922223780dd650d8a0d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41523-022-00399-w.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "eac1253cb7b3572626677922223780dd650d8a0d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
53985556 | pes2o/s2orc | v3-fos-license | Pilot Therapeutic Protocol for the Treatment of Local Advanced Disease, such as the Generalized Peritoneal Carcinomatosis and for the Treatment of Distant Metastases in Human Malignant Neoplastic Disease – Targeting the Medium
The phenomenon of life, and open thermodynamic systems in general, may be described by the existence of i) a distinct border separating the system from the surrounding environment, ii) specific structures inside it, which function with some form of energy exchange with the environment, and iii) the internal medium in which these functions take place. For living cells, this distinct border is the cell membrane, the structures are the cellular organelles, and the medium is the cytoplasm (which consists mainly of water). The research idea of this therapeutic protocol aims at the selective destruction of the "medium", namely the cytoplasm of the cancer cell, with concurrent preservation of the integrity of the cytoplasm of the nearby normal cells/tissues. This may be achieved by gradually lowering the core body temperature of the patient while concurrently introducing into the cancer cell a therapeutic solution of complementary, "anti-sense" polypeptides, individually synthesized for every patient according to the malignancy type, theoretically raising the freezing point of the malignant cell's cytoplasm and causing subsequent crystallization, expansion, and rupture of its membrane at a temperature at which the cytoplasm of the nearby healthy cells/tissues remains intact.
Introduction
The phenomenon of life, and open thermodynamic systems in general, may be described overall by the existence of i) a distinct border separating the system from the surrounding environment, ii) specific structures inside it, which function with some form of energy exchange with the environment, and iii) the internal medium in which these functions take place. In the case of the cell, the distinct border is the cell membrane and the medium is the cytoplasm (which consists mainly of water) together with all the cellular organelles (mitochondria, lysosomes, endoplasmic reticulum, ribosomes, nucleus, etc.), where all the basic cellular functions for homeostasis and reproduction take place. Regarding cancer therapy, current pharmaceutical targeting of the cancer cell concerns only the border (cell membrane, receptors, monoclonal antibodies), the intracellular structures (DNA, microtubules), and the cellular functions mediated by these (cell cycle, reproduction).
Aim
The research idea of this pilot therapeutic protocol aims at the selective destruction of the "medium", namely the cytoplasm of the cancer cell, through its crystallization and expansion at low temperature and the subsequent rupture of its cell membrane, with concurrent preservation of the integrity of the cytoplasm of the nearby normal cells/tissues.
Method
The above therapeutic target may be achieved by gradually lowering the core body temperature of the patient in the operating room under general anesthesia, or with cardioplegia and cardiopulmonary bypass, or with a "reverse" Hyperthermic Intraperitoneal Chemotherapy ("HIPEC") technique, or by submersion in a cold fluid medium (e.g., water), with the concurrent introduction into the cancer cell of a therapeutic solution, individualized for every patient based on the type of malignant neoplasia, so as to raise the freezing point of the cytoplasm of the cancer cell and cause its crystallization at a higher temperature than that of the nearby normal cells/tissues. Once crystallization of the cytoplasm of the cancer cell is achieved, the controlled cooling of the patient stops and controlled rewarming of the body commences, so that in place of the cancer cells only water remains, owing to rupture of their cell membranes by crystallization and the subsequent melting of the crystals on rewarming, while the structure of the normal cells, in which no crystallization occurred, remains unaltered ("water to water and dust to dust").
The aforementioned therapeutic solution will be created based on the tertiary molecular structure of the cancer proteins that are overexpressed in the cancer cell, or based on proteins that are selectively expressed in the cancer cell but not in the normal cell. These proteins will be identified during the operation by taking a biopsy from the malignant neoplastic tissue and will be precisely characterized through immunochemical staining and specialized molecular techniques. This will be followed by an immediate analysis of their tertiary structures (e.g., through X-ray diffraction crystallography and Fourier transforms) and identification of the amino acid sequence of the "malignant" polypeptides (e.g., through mass spectrometry). Once the amino acid sequence is identified, proteomic technology (e.g., a Solid Phase Peptide Synthesizer, SPPS) will be used to create a solution of enantiomeric, "anti-sense" polypeptides whose tertiary structures are exactly complementary to the tertiary structure of the proteins overexpressed in the cytoplasm of the cancer cell, in correspondence with the precise symmetry seen between D- and L-enantiomers. The complementary enantiomeric proteins could be created either with natural L-amino acids, with D-enantiomeric amino acids from the expanded genetic code, or with a combination of both (L- and D-) forms. The synthesis technique for the enantiomeric polypeptides may follow the reverse amino acid sequence of the natural polypeptides, a point that needs further research study.
With the introduction of the therapeutic solution of the "anti-sense" proteins into the cancer cell, a racemic solution of enantiomeric polypeptides will be created in the cytoplasm, while the body temperature of the patient is concurrently lowered in a controlled manner using slow programmable-freezing cryopreservation techniques. Crystallization of the racemic mixture, and subsequently of the cytoplasm, will occur, followed by its expansion and rupture of the cancer cell membrane, while the normal tissues are concurrently preserved in their liquid phase. Mixtures of cryoprotective solutions could also be utilized, such as DMSO (dimethyl sulfoxide) and glycerol or the sugar trehalose, which are used successfully in the preservation of living tissues, so that the cytoplasm of the normal cell undergoes vitrification rather than crystallization. The crystallization, expansion, and rupture of the cancer cell under controlled lowering of the body temperature (0.5-1°C/min) could theoretically be assisted by the presence, inside the cytoplasm of the cancer cell, of larger amounts of carbon dioxide produced by oxidative phosphorylation, owing to the increased combustion and higher metabolic turnover, in analogy with the easier crystallization and expansion of carbonated beverages in the freezer compared with bottled water. This hypothesis is supported theoretically by Wallach's rule, which states that racemic crystals tend to be denser than the corresponding crystals of their enantiomers; thus a greater expansion of the cytoplasm of the cancer cell at the freezing point is to be expected.
Since Pasteur [1] discovered the existence of enantiomers in nature through the separation of tartaric acid in wine, the study of the physicochemical properties of racemic mixtures has followed. Regarding the crystallization of racemic mixtures, it occurs in the form of conglomerates, racemic compounds, pseudoracemates, and quasiracemates. Of these, conglomerates melt at the eutectic, below the melting point of the pure enantiomers, whereas racemic compounds may melt higher than their enantiomeric analogues, particularly at higher molecular symmetry (Carnelley's rule); the freezing point thus varies according to the racemic solution and its crystallization type. Special research interest is therefore focused primarily on the in vitro study of the crystallization type of solutions of racemic mixtures of natural cancer proteins, from various malignant neoplasms, with their synthetic enantiomers, regarding the increase of their freezing point and their crystallization at a higher temperature than that required for the crystallization of solutions of the natural cancer proteins alone [2][3][4][5][6][7][8].
More specifically, we first suggest the in vitro laboratory study of the crystallization of cancer cell lines, in comparison with normal cell lines, under the conditions described above: namely, culture in a medium containing a solution of "anti-sense" enantiomeric proteins (with or without cryoprotective solutions), controlled lowering of the temperature, and observation of whether crystallization, expansion, and rupture of the cancer cells occur, in contrast to the normal cell lines. It is also necessary to record, with a precision thermometer, the exact temperature at which this goal is accomplished, in order to investigate further whether the method can be applied in living organisms and under which conditions it would be safe.
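As a purely illustrative aid to such an in vitro study, the controlled cooling ramp and its temperature log could be scripted; the sketch below assumes a simulated probe reading and hypothetical setpoints, and does not correspond to any specific freezer controller:

    import random
    import time

    RAMP_RATE_C_PER_MIN = 0.75    # within the proposed 0.5-1 degC/min window
    START_C, STOP_C = 4.0, -20.0  # hypothetical start/stop temperatures
    STEP_S = 30                   # log every 30 seconds

    def read_temperature(setpoint):
        # Stand-in for a real probe in the culture medium (hypothetical):
        # here we simulate a reading near the current setpoint.
        return setpoint + random.uniform(-0.2, 0.2)

    def run_ramp(log_path="ramp_log.csv", realtime=False):
        setpoint, elapsed = START_C, 0
        with open(log_path, "w") as log:
            log.write("elapsed_s,setpoint_C,measured_C\n")
            while setpoint > STOP_C:
                log.write(f"{elapsed},{setpoint:.2f},{read_temperature(setpoint):.2f}\n")
                if realtime:
                    time.sleep(STEP_S)  # real-time pacing for an actual run
                elapsed += STEP_S
                setpoint -= RAMP_RATE_C_PER_MIN * STEP_S / 60.0

    run_ramp()  # writes a complete simulated ramp log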
Applications-indications
i) In patients with generalized metastatic disease (stage IV) in one or multiple organs, when current anti-neoplastic treatment has no results. ii) In patients with locally advanced disease without distant metastases, such as generalized peritoneal carcinomatosis of the abdomen, as an alternative to multi-organ surgical resection or when an R0 surgical resection is not feasible. In the latter case, a HIPEC-type machine could be used: with the introduction of the therapeutic solution of the "anti-sense" proteins into the peritoneal cavity during exploratory laparotomy, a lowering of the core body temperature would be caused instead, with the concurrent use of a protective "HILOTHERM"-type head cask for maintaining a higher temperature in the brain [9][10][11][12][13].
Arguments in favor of the particular research idea
The science of medicine successfully uses apparently extreme therapeutic methods, in which, for a smaller or greater amount of time, a physiologic function of the organism is completely inhibited in order to achieve therapy for particular diseases. Examples are: i) in Cardiology, the use of the defibrillator, or the intravenous administration of adenosine, for the treatment of malignant arrhythmias by temporary pause of myocardial function and a new undertaking ("reset") of the sinus node; ii) in Anesthesiology, the administration of complete neuromuscular blockade and mechanical ventilation for the performance of major surgical operations; iii) in Intensive Care and Neonatology, the use of Extracorporeal Membrane Oxygenation (ECMO) for the support of critical pulmonary conditions (ARDS, or the hyaline membrane disease of preterm newborns, respectively); iv) in Cardiac Surgery, the administration of cardioplegia and extracorporeal circulation for the performance of transplantations, aortocoronary bypasses, and other complex operations.
It is worth noting that controlled cooling of the patient elicits the mammalian diving reflex (submersion reflex) for the protection of the central nervous system, while in current Cardiopulmonary Resuscitation protocols therapeutic hypothermia is used, in cases of circulation recovery after cardiac arrest and cardiopulmonary resuscitation, for central nervous system protection [11]. In addition, cases of complete recovery of hypothermic patients after prolonged CPR, without neurologic sequelae, have been recorded in the international literature. Finally, clinical experience shows that the majority of patients with malignant neoplastic disease have a normal cardiovascular system, a fact that increases the possibility of complete recovery.
Note of the writer, Dr. Panagiotis Bouras, son of Christos and Christina Boura (genus of Tzamou): The above research idea has no intent to compete with or replace current pharmaceutical or other antineoplastic treatments, and aims at the cases where there is currently no definitive treatment of the malignant neoplastic disease, as described in the applications-indications section. The scientific methods described above may be reviewed, rechecked, or replaced with newer, more effective ones by fellow scientists who wish to participate in the research study, always carrying my signature as verification, after my written information and consent, and without any deviation from the basic research idea as described in the introduction-aim-method sections. The successful future application of this particular method in humans with malignant neoplastic disease should always be governed by the principles of medical ethics and human moral values, and care should be taken by international political, economic, and social organizations and state institutions so that all patients worldwide have access to the specific treatment. The aforementioned therapeutic directions are currently theoretical, based on the existing international literature, and await support for further evaluation, research, and implementation from the medical community worldwide.
Conflict of Interest
There is no financial support or benefit from commercial sources for the work reported in the manuscript, nor do any of the authors have other financial interests that could create a potential conflict of interest, or the appearance of one, with regard to the work.
"year": 2017,
"sha1": "d13e4ae25396adca3ba8427668a15f71491204d3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2572-4126.1000108",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "619c781dd6ee49c81ed4f8aaabd59dd952cc4694",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
119315459 | pes2o/s2orc | v3-fos-license | Developments in Topological Gravity
This note aims to provide an entr\'ee to two developments in two-dimensional topological gravity -- that is, intersection theory on the moduli space of Riemann surfaces -- that have not yet become well-known among physicists. A little over a decade ago, Mirzakhani discovered \cite{M1,M2} an elegant new proof of the formulas that result from the relationship between topological gravity and matrix models of two-dimensional gravity. Here we will give a very partial introduction to that work, which hopefully will also serve as a modest tribute to the memory of a brilliant mathematical pioneer. More recently, Pandharipande, Solomon, and Tessler \cite{PST} (with further developments in \cite{Tes,BT,STa}) generalized intersection theory on moduli space to the case of Riemann surfaces with boundary, leading to generalizations of the familiar KdV and Virasoro formulas. Though the existence of such a generalization appears natural from the matrix model viewpoint -- it corresponds to adding vector degrees of freedom to the matrix model -- constructing this generalization is not straightforward. We will give some idea of the unexpected way that the difficulties were resolved.
Introduction
There are at least two candidates for the simplest model of quantum gravity in two spacetime dimensions. Matrix models are certainly one candidate, extensively studied since the 1980's. These models were proposed in [7][8][9][10][11] and solved in [12][13][14]; for a comprehensive review with extensive references, see [15]. A second candidate is provided by topological gravity, that is, intersection theory on the moduli space of Riemann surfaces. It was conjectured some time ago that actually two-dimensional topological gravity is equivalent to the matrix model [16,17].
This equivalence led to formulas expressing the intersection numbers of certain natural cohomology classes on moduli space in terms of the partition function of the matrix model, which is governed by KdV equations [18] or equivalently by Virasoro constraints [19]. These formulas were first proved by Kontsevich [20] by a direct calculation that expressed intersection numbers on moduli space in terms of a new type of matrix model (which was again shown to be governed by the KdV and Virasoro constraints).
A little over a decade ago, Maryam Mirzakhani found a new proof of this relationship as part of her Ph.D. thesis work [1,2]. (Several other proofs are known [21,22].) She put the accent on understanding the Weil-Petersson volumes of moduli spaces of hyperbolic Riemann surfaces with boundary, showing that these volumes contain all the information in the intersection numbers. A hyperbolic structure on a surface Σ is determined by a flat SL(2, R) connection, so the moduli space M of hyperbolic structures on Σ can be understood as a moduli space of flat SL(2, R) connections. Actually, the Weil-Petersson symplectic form on M can be defined by the same formula that is used to define the symplectic form on the moduli space of flat connections on Σ with structure group a compact Lie group such as SU (2). For a compact Lie group, the volume of the moduli space can be computed by a direct cut and paste method [23] that involves building Σ out of simple building blocks (three-holed spheres). Naively, one might hope to do something similar for SL(2, R) and thus for the Weil-Petersson volumes. But there is a crucial difference: in the case of SL(2, R), in order to define the moduli space whose volume one will calculate, one wants to divide by the action of the mapping class group on Σ. (Otherwise the volume is trivially infinite.) But dividing by the mapping class group is not compatible with any simple cut and paste method. Maryam Mirzakhani overcame this difficulty in a surprising and elegant way, of which we will give a glimpse in section 2.
Matrix models of two-dimensional gravity have a natural generalization in which vector degrees of freedom are added [24][25][26][27][28][29]. This generalization is related, from a physical point of view, to twodimensional gravity formulated on two-manifolds Σ that carry a complex structure but may have a boundary. We will refer to such two-manifolds as open Riemann surfaces (if the boundary of Σ is empty, we will call it a closed Riemann surface). It is natural to hope that, by analogy with what happens for closed Riemann surfaces, there would be an intersection theory on the moduli space of open Riemann surfaces that would be related to matrix models with vector degrees of freedom. In trying to construct such a theory, one runs into immediate difficulties: the moduli space of open Riemann surfaces does not have a natural orientation and has a boundary; for both reasons, it is not obvious how to define intersection theory on this space. These difficulties were overcome by Pandharipande, Solomon, and Tessler in a rather unexpected way [3] whose full elucidation involves introducing spin structures in a problem in which at first sight they do not seem relevant [4][5][6].
In section 3, we will explain some highlights of this story. In section 4, we review matrix models with vector degrees of freedom, and show how they lead -modulo a slightly surprising twist -to precisely the same Virasoro constraints that have been found in intersection theory on the moduli space of open Riemann surfaces.
The matrix models we consider are the direct extension of those studied in [12][13][14]. The same problem has been treated in a rather different approach via Gaussian matrix models with an external source in [30] and in chapter 8 of [31]. See also [32] for another approach. For an expository article on the relation of matrix models and intersection theory, see [33].
2 Weil-Petersson Volumes And Two-Dimensional Topological Gravity
Background And Initial Steps
Let $\Sigma$ be a closed Riemann surface of genus $g$ with marked points$^1$ $p_1,\dots,p_n$, and let $L_i$ be the cotangent space to $p_i$ in $\Sigma$. As $\Sigma$ and the $p_i$ vary, $L_i$ varies as the fiber of a complex line bundle (which we also denote as $L_i$) over $\mathcal M_{g,n}$, the moduli space of genus $g$ curves with $n$ punctures. In fact, these line bundles extend naturally over $\overline{\mathcal M}_{g,n}$, the Deligne-Mumford compactification of $\mathcal M_{g,n}$. We write $\psi_i$ for the first Chern class of $L_i$; thus $\psi_i=c_1(L_i)$ is a two-dimensional cohomology class. For a non-negative integer $d$, we set $\tau_{i,d}=\psi_i^d$, a cohomology class of dimension $2d$. The usual correlation functions of 2d topological gravity are the intersection numbers
$$\langle\tau_{d_1}\tau_{d_2}\cdots\tau_{d_n}\rangle=\int_{\overline{\mathcal M}_{g,n}}\tau_{1,d_1}\tau_{2,d_2}\cdots\tau_{n,d_n}=\int_{\overline{\mathcal M}_{g,n}}\psi_1^{d_1}\psi_2^{d_2}\cdots\psi_n^{d_n},\qquad(2.1)$$
where $d_1,\dots,d_n$ is any $n$-plet of non-negative integers. The right hand side of eqn. (2.1) vanishes unless $\sum_{i=1}^n d_i=3g-3+n$. To be more exact, what we have defined in eqn. (2.1) is the genus $g$ contribution to the correlation function; the full correlation function is obtained by summing over $g\geq 0$. (For a given set of $d_i$, there is at most one integer solution $g$ of the condition $\sum_{i=1}^n d_i=3g-3+n$, and this is the only value that contributes to $\langle\tau_{d_1}\tau_{d_2}\cdots\tau_{d_n}\rangle$.)

Let us now explain how these correlation functions are related to the Weil-Petersson volume of $\mathcal M_g$. In the special case $n=1$, we have just a single marked point $p$ and a single line bundle $L$ and cohomology class $\psi$. We also have the forgetful map $\pi:\overline{\mathcal M}_{g,1}\to\overline{\mathcal M}_g$ that forgets the marked point. We can construct a two-dimensional cohomology class $\kappa$ on $\overline{\mathcal M}_g$ by integrating the four-dimensional class $\tau_2=\psi^2$ over the fibers of this forgetful map:
$$\kappa=\pi_*(\tau_2).\qquad(2.2)$$
More generally, the Miller-Morita-Mumford (MMM) classes are defined by $\kappa_d=\pi_*(\tau_{d+1})$, so $\kappa$ is the same as the first MMM class $\kappa_1$. $\kappa$ is cohomologous to a multiple of the Weil-Petersson symplectic form $\omega$ of the moduli space [34,35]:
$$\kappa=\frac{[\omega]}{2\pi^2}.\qquad(2.3)$$
Because of (2.2), it will be convenient to use $\kappa$, rather than $\omega$, to define a volume form. With this choice, the volume of $\mathcal M_g$ is
$$V_g=\int_{\overline{\mathcal M}_g}\exp(\kappa).\qquad(2.4)$$
The relation between $\kappa$ and $\tau_2$ might make one hope that the volume $V_g$ would be one of the correlation functions of topological gravity:
$$V_g\overset{?}{=}\frac{1}{(3g-3)!}\,\langle\tau_2^{3g-3}\rangle.\qquad(2.5)$$
Such a simple formula is, however, not true, for the following reason. To compute the right hand side of eqn. (2.5), we would have to introduce $3g-3$ marked points on $\Sigma$, and insert $\tau_2$ (that is, $\psi_i^2$) at each of them. It is true that for a single marked point, $\kappa$ can be obtained as the integral of $\tau_2$ over the fiber of the forgetful map, as in eqn. (2.2). However, when there is more than one marked point, we have to take into account that the Deligne-Mumford compactification of $\mathcal M_{g,n}$ is defined in such a way that the marked points are never allowed to collide. Taking this into account leads to corrections in which, for instance, two copies of $\tau_2$ are replaced by a single copy of $\tau_3$. The upshot is that $V_g$ can be expressed in terms of the correlation functions of topological gravity, and thus can be computed using the KdV equations or the Virasoro constraints, but the necessary formula is more complicated. See section 2.4 below. For now, we just remark that this approach has been used [36] to determine the large $g$ asymptotics of $V_g$, but apparently does not easily lead to explicit formulas for $V_g$ in general. Weil-Petersson volumes were originally studied and their asymptotics estimated by quite different methods [37].
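As a concrete illustration of eqn. (2.1) and the dimension condition (these two values are standard and are quoted here only for orientation; they are not derived in the text above):
$$\langle\tau_0\tau_0\tau_0\rangle=\int_{\overline{\mathcal M}_{0,3}}1=1,\qquad \langle\tau_1\rangle=\int_{\overline{\mathcal M}_{1,1}}\psi_1=\frac{1}{24}.$$
In the first case $\overline{\mathcal M}_{0,3}$ is a single point, so the condition $\sum_i d_i=3g-3+n$ reads $0=0$; in the second, $3g-3+n=1$ matches the single $\psi$ insertion, and the value $1/24$ reflects the orbifold nature of $\overline{\mathcal M}_{1,1}$.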
$\mathcal M_{g,n}$ likewise has a Weil-Petersson volume $V_{g,n}$ of its own, which likewise can be computed, in principle, using a knowledge of the intersection numbers on $\overline{\mathcal M}_{g,n'}$ for $n'>n$. Again this gives useful information but it is difficult to get explicit general formulas.
Mirzakhani's procedure was different. First of all, she worked in the hyperbolic world, so in the following discussion $\Sigma$ is not just a complex Riemann surface; it carries a hyperbolic metric, by which we mean a Riemannian metric of constant scalar curvature $R=-1$. We recall that a complex Riemann surface admits a unique Kähler metric with $R=-1$. We recall also that in studying hyperbolic Riemann surfaces, it is natural$^2$ to think of a marked point as a cusp, which lies at infinity in the hyperbolic metric (fig. 1).
Instead of a marked point, we can consider a Riemann surface with a boundary component. In the hyperbolic world, one requires the boundary to be a geodesic in the hyperbolic metric. Its circumference may be any positive number $b$. Let us consider, rather than a closed Riemann surface $\Sigma$ of genus $g$ with $n$ labeled marked points, an open Riemann surface $\Sigma$ also of genus $g$, but now with $n$ labeled boundaries. In the hyperbolic world, it is natural to specify $n$ positive numbers $b_1,\dots,b_n$ and to require that $\Sigma$ carry a hyperbolic metric such that the boundaries are geodesics of lengths $b_1,\dots,b_n$. We denote the moduli space of such hyperbolic metrics as $\mathcal M_{g;b_1,b_2,\dots,b_n}$ or more briefly as $\mathcal M_{g,\vec b}$, where $\vec b$ is the $n$-plet $(b_1,b_2,\dots,b_n)$.
As a topological space, $\mathcal M_{g,\vec b}$ is independent of $\vec b$. In fact, $\mathcal M_{g,\vec b}$ is an orbifold, and the topological type of an orbifold cannot depend on continuously variable data such as $\vec b$. In the limit that $b_1,\dots,b_n$ all go to zero, the boundaries turn into cusps and $\mathcal M_{g,\vec b}$ turns into $\mathcal M_{g,n}$. Thus topologically, $\mathcal M_{g,\vec b}$ is equivalent to $\mathcal M_{g,n}$ for any $\vec b$. Very concretely, we can always convert a Riemann surface with a boundary component to a Riemann surface with a marked point by gluing a disc, with a marked point at its center, to the given boundary component. Thus we can turn a Riemann surface with boundaries into one with marked points without changing the parameters the Riemann surface can depend on, and this leads to the topological equivalence of $\mathcal M_{g,\vec b}$ with $\mathcal M_{g,n}$. If we allow the hyperbolic metric of $\Sigma$ to develop cusp singularities, we get a compactification $\overline{\mathcal M}_{g,\vec b}$ of $\mathcal M_{g,\vec b}$ which coincides with the Deligne-Mumford compactification $\overline{\mathcal M}_{g,n}$ of $\mathcal M_{g,n}$.
$\mathcal M_{g,n}$ and $\mathcal M_{g,\vec b}$ have natural Weil-Petersson symplectic forms that we will call $\omega$ and $\omega_{\vec b}$ (see [38]). Since $\mathcal M_{g,n}$ and $\mathcal M_{g,\vec b}$ are equivalent topologically, it makes sense to ask if the symplectic form $\omega_{\vec b}$ of $\mathcal M_{g,\vec b}$ has the same cohomology class as the symplectic form $\omega$ of $\mathcal M_{g,n}$. The answer is that it does not. Rather, one has (see [2], Theorem 4.4)
$$\omega_{\vec b}=\omega+\frac{1}{2}\sum_{i=1}^n b_i^2\,\psi_i.\qquad(2.6)$$
(This is a relationship in cohomology, not an equation for differential forms.) From this it follows that the Weil-Petersson volume of $\mathcal M_{g,\vec b}$ is$^3$
$$V_{g,\vec b}=\left(\frac{1}{2\pi^2}\right)^{3g-3+n}\int_{\mathcal M_{g,\vec b}}\exp(\omega_{\vec b}).\qquad(2.7)$$
Equivalently, since compactification by allowing cusps does not affect the volume integral, and the compactification of $\mathcal M_{g,\vec b}$ is the same as $\overline{\mathcal M}_{g,n}$, one can write this as an integral over the compactification:
$$V_{g,\vec b}=\left(\frac{1}{2\pi^2}\right)^{3g-3+n}\int_{\overline{\mathcal M}_{g,n}}\exp\Big(\omega+\frac{1}{2}\sum_{i=1}^n b_i^2\,\psi_i\Big).\qquad(2.8)$$
This last result tells us that at $\vec b=0$, $V_{g,\vec b}$ reduces to the volume $V_{g,n}=(1/2\pi^2)^{3g-3+n}\int_{\overline{\mathcal M}_{g,n}}e^\omega$ of $\mathcal M_{g,n}$. Moreover, eqn. (2.8) implies that $V_{g,\vec b}$ is a polynomial in $b^2=(b_1^2,\dots,b_n^2)$ of total degree $3g-3+n$. In evaluating the term of top degree in $V_{g,\vec b}$, we can drop $\omega$ from the exponent in eqn. (2.8). Then the expansion in powers of the $b_i$ tells us that this term of top degree is
$$\left(\frac{1}{2\pi^2}\right)^{3g-3+n}\sum_{d_1,\dots,d_n}\langle\tau_{d_1}\tau_{d_2}\cdots\tau_{d_n}\rangle\prod_{i=1}^n\frac{b_i^{2d_i}}{2^{d_i}\,d_i!}.\qquad(2.9)$$
(Only terms with $\sum_i d_i=3g-3+n$ make nonzero contributions in this sum.) In other words, the correlation functions of two-dimensional topological gravity on a closed Riemann surface appear as coefficients in the expansion of $V_{g,\vec b}$. Of course, $V_{g,\vec b}$ contains more information,$^4$ since we can also consider the terms in $V_{g,\vec b}$ that are subleading in $b$.
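As a simple illustration, take $(g,n)=(1,1)$. Mirzakhani's well-known one-holed torus volume is $\int e^{\omega_b}=(b^2+4\pi^2)/48$ (including the orbifold factor of $1/2$ from the elliptic involution); in the normalization of eqn. (2.7), as reconstructed above, this reads
$$V_{1,1,b}=\frac{1}{2\pi^2}\cdot\frac{b^2+4\pi^2}{48}=\frac{1}{24}+\frac{b^2}{96\pi^2},$$
whose constant term is $V_{1,1}=\langle\tau_1\rangle=1/24$ and whose top-degree term matches eqn. (2.9), namely $(1/2\pi^2)\,\langle\tau_1\rangle\,b^2/(2^1\,1!)$.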
Thus Mirzakhani's approach to topological gravity involved deducing the correlation functions of topological gravity from the volume polynomials V g, b . We will give a few indications of how she computed these volume polynomials in section 2.3, after first recalling a much simpler problem.
A Simpler Problem
Before explaining how to compute the volume of M g, b , we will describe how volumes can be computed in a simpler case. In fact, the analogy was noted in [2].
Let $G$ be a compact Lie group, such as $SU(2)$, with Lie algebra $\mathfrak g$, and let $\Sigma$ be a closed Riemann surface of genus $g$. Let $\mathcal M$ be the moduli space of homomorphisms from the fundamental group of $\Sigma$ to $G$. Equivalently, $\mathcal M$ is the moduli space of flat $\mathfrak g$-valued connections on $\Sigma$. Then [38,39] $\mathcal M$ has a natural symplectic form that in many ways is analogous to the Weil-Petersson form on $\mathcal M_g$. Writing $A$ for a flat connection on $\Sigma$ and $\delta A$ for its variation, the symplectic form of $\mathcal M$ can be defined by the gauge theory formula
$$\omega=\frac{1}{4\pi^2}\int_\Sigma\operatorname{Tr}\,\delta A\wedge\delta A,\qquad(2.10)$$
where (for $G=SU(2)$) we can take $\operatorname{Tr}$ to be the trace in the two-dimensional representation.
Actually, the Weil-Petersson form of $\mathcal M_g$ can be defined by much the same formula. The moduli space of hyperbolic metrics on $\Sigma$ is a component$^5$ of the moduli space of flat $SL(2,\mathbb R)$ connections over $\Sigma$, divided by the mapping class group of $\Sigma$. Denoting the flat connection again as $A$ and taking $\operatorname{Tr}$ to be the trace in the two-dimensional representation of $SL(2,\mathbb R)$, the right hand side of eqn. (2.10) becomes in this case a multiple of the Weil-Petersson symplectic form $\omega$ on $\mathcal M_g$.
There is also an analog for compact $G$ of the moduli spaces $\mathcal M_{g,\vec b}$ of hyperbolic Riemann surfaces with geodesic boundary. For $\vec b=(b_1,\dots,b_n)$, $\mathcal M_{g,\vec b}$ can be interpreted as follows in the gauge theory language. A point in $\mathcal M_{g,\vec b}$ corresponds, in the gauge theory language, to a flat $SL(2,\mathbb R)$ connection on $\Sigma$ with the property that the holonomy around the $i$th boundary is conjugate in $SL(2,\mathbb R)$ to the group element $\mathrm{diag}(e^{b_i},e^{-b_i})$.

$^4$ This additional information in principle is not really new. Using facts that generalize the relationship between $V_{g,n}$ and the correlation functions of topological gravity that we discussed at the outset, one can deduce also the subleading terms in $V_{g,\vec b}$ in terms of the correlation functions of topological gravity. However, it appears difficult to get useful formulas in this way.

$^5$ The moduli space of flat $SL(2,\mathbb R)$ connections on $\Sigma$ has various components labeled by the Euler class of a flat real vector bundle of rank 2 (transforming in the 2-dimensional representation of $SL(2,\mathbb R)$). One of these components parametrizes hyperbolic metrics on $\Sigma$ together with a choice of spin structure. If we replace $SL(2,\mathbb R)$ by $PSL(2,\mathbb R)=SL(2,\mathbb R)/\mathbb Z_2$ (the symmetry group of the hyperbolic plane), we forget the spin structure, so to be precise, $\mathcal M_g$ is a component of the moduli space of flat $PSL(2,\mathbb R)$ connections. This refinement will not be important in what follows and we loosely speak of $SL(2,\mathbb R)$. In terms of $PSL(2,\mathbb R)$, one can define $\operatorname{Tr}$ as $1/4$ of the trace in the three-dimensional representation.
In this language, it is clear how to imitate the definition of $\mathcal M_{g,\vec b}$ for a compact Lie group such as $SU(2)$. For $k=1,\dots,n$, we choose a conjugacy class in $SU(2)$, say the class that contains $U_k=\mathrm{diag}(e^{i\alpha_k},e^{-i\alpha_k})$, for some $\alpha_k$. We write $\vec\alpha$ for the $n$-plet $(\alpha_1,\alpha_2,\dots,\alpha_n)$, and we define $\mathcal M_{g,\vec\alpha}$ to be the moduli space of flat connections on a genus $g$ surface $\Sigma$ with $n$ holes (or equivalently $n$ boundary components) with the property that the holonomy around the $k$th hole is conjugate to $U_k$. With a little care,$^6$ the right hand side of the formula (2.10) can be used in this situation to define the Weil-Petersson form $\kappa_{\vec b}$ of $\mathcal M_{g,\vec b}$, and the analogous symplectic form $\omega_{\vec\alpha}$ of $\mathcal M_{g,\vec\alpha}$. Thus in particular, $\mathcal M_{g,\vec\alpha}$ has a symplectic volume $V_{g,\vec\alpha}$. Moreover, $V_{g,\vec\alpha}$ is a polynomial in $\vec\alpha$, and the coefficients of this polynomial are the correlation functions of a certain version of two-dimensional topological gauge theory - they are the intersection numbers of certain natural cohomology classes on $\mathcal M_{g,\vec\alpha}$.
These statements, which are analogs of what we described in the case of gravity in section 2.1, were explained for gauge theory with a compact gauge group in [40]. Moreover, for a compact gauge group, various relatively simple ways to compute the symplectic volume V g, α were described in [23]. None of these methods carry over naturally to the gravitational case. However, to appreciate Maryam Mirzakhani's work on the gravitational case, it helps to have some idea how the analogous problem can be solved in the case of gauge theory with a compact gauge group. So we will make a few remarks.
First we consider the special case of a three-holed sphere (sometimes called a pair of pants; see fig. 2(a)). In the case of a three-holed sphere, for $G=SU(2)$, $\mathcal M_{0,\vec\alpha}$ is either a point, with volume 1, or an empty set, with volume 0, depending on $\vec\alpha$. The volumes of the three-holed sphere moduli spaces can also be computed (with a little more difficulty) for other compact $G$, but we will not explain the details as the case of $SU(2)$ will suffice for illustration. Now to generalize beyond the case of a three-holed sphere, we observe that any closed surface $\Sigma$ can be constructed by gluing together three-holed spheres along some of their boundary components (fig. 2(b)). If $\Sigma$ is built in this way, then the corresponding volume $V_{g,\vec\alpha}$ can be obtained by multiplying together the volume functions of the individual three-holed spheres and integrating over the $\alpha$ parameters of the internal boundaries, where gluing occurs. (One also has to integrate over some twist angles that enter in the gluing, but these give a trivial overall factor.) Thus for a compact group it is relatively straightforward to get formulas for the volumes $V_{g,\vec\alpha}$. Moreover, these formulas turn out to be rather manageable.

$^6$ On the gravity side, Mirzakhani's proof that the cohomology class of $\kappa_{\vec b}$ is linear in $b^2$ did not use eqn. (2.10) at all, but a different approach based on Fenchel-Nielsen coordinates. On the gauge theory side, in using eqn. (2.10), it can be convenient to consider a Riemann surface with punctures (i.e., marked points that have been deleted) rather than boundaries. This does not affect the moduli space of flat connections, because if $\Sigma$ is a Riemann surface with boundary, one can glue in to each boundary component a once-punctured disc, thus replacing all boundaries by punctures, without changing the moduli space of flat connections. For brevity we will stick here with the language of Riemann surfaces with boundary.

Figure 2: (a) A three-holed sphere or "pair of pants." (b) A Riemann surface $\Sigma$, possibly with boundaries, that is built by gluing three-holed spheres along their boundaries. Each boundary of one of the three-holed spheres is either an external boundary - a boundary of $\Sigma$ - or an internal boundary, glued to a boundary of one of the three-holed spheres (generically a different one). The example shown has one external boundary and four internal ones.
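For concreteness, the $SU(2)$ pair-of-pants statement can be made explicit (this is a standard fact, quoted here since the text does not display it): with boundary holonomies conjugate to $\mathrm{diag}(e^{i\alpha_k},e^{-i\alpha_k})$, $0\leq\alpha_k\leq\pi$, a flat connection on the three-holed sphere exists if and only if
$$|\alpha_1-\alpha_2|\leq\alpha_3\leq\min(\alpha_1+\alpha_2,\;2\pi-\alpha_1-\alpha_2),$$
in which case $\mathcal M_{0,\vec\alpha}$ is a single point and $V_{0,\vec\alpha}=1$; otherwise it is empty and $V_{0,\vec\alpha}=0$.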
If we try to imitate this with $SU(2)$ replaced by $SL(2,\mathbb R)$, some of the steps work. In particular, if $\Sigma$ is a three-holed sphere, then for any $\vec b$, the moduli space $\mathcal M_{0,\vec b}$ is a point and $V_{0,\vec b}=1$. What really goes wrong for $SL(2,\mathbb R)$ is that, if $\Sigma$ is such that the corresponding moduli space is not just a point, then the volume of the moduli space of flat $SL(2,\mathbb R)$ connections on $\Sigma$ is infinite. For $SU(2)$, the procedure mentioned in the last paragraph leads to an integral over the parameters $\alpha$. Those parameters are angular variables, valued in a compact set, and the integral over these parameters converges. For $SL(2,\mathbb R)$ (in the particular case of the component of the moduli space of flat connections that is related to hyperbolic metrics), we would want to replace the angular variables $\alpha$ with the positive parameters $b$. The set of positive numbers is not compact and the integral over $b$ is divergent. This should not come as a surprise as it just reflects the fact that the group $SL(2,\mathbb R)$ is not compact. The relation between flat $SL(2,\mathbb R)$ connections and complex structures tells us what we have to do to get a sensible problem. To go from (a component of) the moduli space of flat $SL(2,\mathbb R)$ connections to the moduli space of Riemann surfaces, we have to divide by the mapping class group of $\Sigma$ (the group of components of the group of diffeomorphisms of $\Sigma$). It is the moduli space of Riemann surfaces that has a finite volume, not the moduli space of flat $SL(2,\mathbb R)$ connections.
But here is precisely where we run into difficulty with the cut and paste method to compute volumes. Topologically, Σ can be built by gluing three-holed spheres in many ways that are permuted by the action of the mapping class group. Any one gluing procedure is not invariant under the mapping class group and in a calculation based on any one gluing procedure, it is difficult to see how to divide by the mapping class group.
Dealing with this problem, in a manner that we explain next, was the essence of Maryam Mirzakhani's approach to topological gravity.

Figure 3: A "cut" of a Riemann surface with boundary along an embedded circle may be separating as in (a) or non-separating as in (b).
How Maryam Mirzakhani Cured Modular Invariance
Let $\Sigma$ be a hyperbolic Riemann surface with geodesic boundary. Ideally, to compute the volume of the corresponding moduli space, we would "cut" $\Sigma$ on a simple closed geodesic $\ell$. This cutting gives a way to build $\Sigma$ from hyperbolic Riemann surfaces that are in some sense simpler than $\Sigma$. If cutting along $\ell$ divides $\Sigma$ into two disconnected components (fig. 3(a)), then $\Sigma$ can be built by gluing along $\ell$ two hyperbolic Riemann surfaces $\Sigma_1$ and $\Sigma_2$ of geodesic boundary. If cutting along $\ell$ leaves $\Sigma$ connected (fig. 3(b)), then $\Sigma$ is built by gluing together two boundary components of a surface $\Sigma'$. We call these the separating and nonseparating cases.
In the separating case, we might naively hope to compute the volume function $V_{g,\vec b}$ for $\Sigma$ by multiplying together the corresponding functions for $\Sigma_1$ and $\Sigma_2$ and integrating over the circumference $b'$ of $\ell$. Schematically,
$$V_\Sigma\overset{?}{=}\int_0^\infty\mathrm{d}b'\,V_{\Sigma_1,b'}\,V_{\Sigma_2,b'},\qquad(2.11)$$
where we indicate that $\Sigma_1$ and $\Sigma_2$ each has one boundary component, of circumference $b'$, that does not appear in $\Sigma$. In the nonseparating case, a similarly naive formula would be
$$V_\Sigma\overset{?}{=}\int_0^\infty\mathrm{d}b'\,V_{\Sigma',b',b'},\qquad(2.12)$$
where we indicate that $\Sigma'$, relative to $\Sigma$, has two extra boundary components each of circumference $b'$.
The surfaces $\Sigma_1$, $\Sigma_2$, and $\Sigma'$ are in a precise sense "simpler" than $\Sigma$: their genus is less, or their Euler characteristic is less negative. So if we had something like (2.11) or (2.12), a simple induction would lead to a general formula for the volume functions.
The trouble with these formulas is that a hyperbolic Riemann surface actually has infinitely many simple closed geodesics $\ell_\alpha$, and there is no natural (modular-invariant) way to pick one. Suppose, however, that there were a function $F(b)$ of a positive real number $b$ with the property that
$$\sum_\alpha F(b_\alpha)=1,\qquad(2.13)$$
where the sum runs over all simple closed geodesics $\ell_\alpha$ on a hyperbolic surface $\Sigma$, and $b_\alpha$ is the length of $\ell_\alpha$. In this case, by summing over all choices of embedded simple closed geodesic, and weighting each with a factor of $F(b)$, we would get a corrected version of the above formulas. In writing the formula, we have to remember that cutting along a given $\ell_\alpha$ either leaves $\Sigma$ connected or separates a genus $g$ surface $\Sigma$ into surfaces $\Sigma_1$, $\Sigma_2$ of genera $g_1$, $g_2$ such that $g_1+g_2=g$. In the separating case, the boundaries of $\Sigma$ are partitioned in some arbitrary way between $\Sigma_1$ and $\Sigma_2$ and each of $\Sigma_1$, $\Sigma_2$ has in addition one more boundary component whose circumference we will call $b'$. So denoting as $\vec b$ the boundary lengths of $\Sigma$, the boundary lengths of $\Sigma_1$ and $\Sigma_2$ are respectively $\vec b_1,b'$ and $\vec b_2,b'$, where $\vec b=\vec b_1\sqcup\vec b_2$ (here $\vec b_1\sqcup\vec b_2$ denotes the disjoint union of the two sets $\vec b_1$ and $\vec b_2$) and $\Sigma$ is built by gluing together $\Sigma_1$ and $\Sigma_2$ along their boundaries of length $b'$. This is drawn in fig. 3(a), but in the example shown, the set $\vec b$ consists of only one element. In the nonseparating case of fig. 3(b), $\Sigma$ is made from gluing a surface $\Sigma'$ of boundary lengths $\vec b,b',b'$ along its two boundaries of length $b'$. The genus $g'$ of $\Sigma'$ is $g'=g-1$. Assuming the hypothetical sum rule (2.13) involves a sum over all simple closed geodesics $\ell_\alpha$, regardless of topological properties, the resulting recursion relation for the volumes will also involve such a sum. This recursion relation would be
$$V_{g,\vec b}\overset{?}{=}\frac{1}{2}\sum_{\substack{g_1+g_2=g\\ \vec b_1\sqcup\vec b_2=\vec b}}\int_0^\infty\mathrm{d}b'\,F(b')\,V_{g_1,\vec b_1\sqcup b'}\,V_{g_2,\vec b_2\sqcup b'}\;+\;\int_0^\infty\mathrm{d}b'\,F(b')\,V_{g-1,\vec b\sqcup b'\sqcup b'}.\qquad(2.14)$$
In the first term, the sum runs over all topological choices in the gluing; the factor of $1/2$ reflects the possibility of exchanging $\Sigma_1$ and $\Sigma_2$. The factors of $F(b')$ in the formula compensate for the fact that in deriving such a result, one has to sum over cuts on all simple closed geodesics. By induction (in the genus and the absolute value of the Euler characteristic of a surface), such a recursion relation would lead to explicit expressions for all $V_{g,\vec b}$.
There is an important special case in which there actually is a sum rule [41] precisely along the lines of eqn. (2.13) and therefore there is an identity precisely along the lines of eqn. (2.14). This is the case that Σ is a surface of genus 1 with one boundary component.
The general case is more complicated. In general, there is an identity that involves pairs of simple closed geodesics in $\Sigma$ that have the property that - together with a specified boundary component of $\Sigma$ - they bound a pair of pants (fig. 4). This identity was proved for hyperbolic Riemann surfaces with punctures by McShane in [41] and generalized to surfaces with boundary by Mirzakhani in [1], Theorem 4.2.
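For orientation, McShane's original identity for a once-punctured hyperbolic torus [41] can be stated explicitly: summing over all simple closed geodesics $\gamma$, with $\ell(\gamma)$ the hyperbolic length,
$$\sum_\gamma\frac{1}{1+e^{\ell(\gamma)}}=\frac{1}{2}.$$
Each term plays the role of $F(b_\alpha)$ in the schematic sum rule (2.13), up to overall normalization.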
This generalized McShane identity leads to a recursive formula for Weil-Petersson volumes that is similar in spirit to eqn. (2.14). See Theorem 8.1 of Mirzakhani's paper [1] for the precise statement. The main difference between the naive formula (2.14) and the formula that actually works is the following. In eqn. (2.14), we imagine building $\Sigma$ from simpler building blocks by a more or less arbitrary gluing. In the correct formula - Mirzakhani's Theorem 8.1 - we more specifically build $\Sigma$ by gluing a pair of pants onto something simpler, as in fig. 4. There is a function $F(b,b',b'')$, analogous to $F(b')$ in the above schematic discussion, that enters in the generalized McShane identity and therefore in the recursion relation. It compensates for the fact that in deriving the recursion relation, one has to sum over infinitely many ways to cut off a hyperbolic pair of pants from $\Sigma$.
In this manner, Mirzakhani arrived at a recursive formula for Weil-Petersson volumes that is similar to although somewhat more complicated than eqn. (2.14). Part of the beauty of the subject is that this formula turned out to be surprisingly tractable. In [1], section 6, she used the recursive formula to give a new proof -independent of the relation to topological gravity that we reviewed in section 2.1 -that the volume functions V g, b are polynomials in b 2 1 , . . . , b 2 n . In [2], she showed that these polynomials satisfy the Virasoro constraints of two-dimensional gravity, as formulated for the matrix model in [19]. Thereby -using the relation between volumes and intersection numbers that we reviewed in section 2.1 and to which we will return in a moment -she gave a new proof of the known formulas [17,20] for intersection numbers on the moduli space of Riemann surfaces, or equivalently for correlation functions of two-dimensional topological gravity.
Volumes And Intersection Numbers
We conclude this section by briefly describing the formula that relates Weil-Petersson volumes to correlation functions of topological gravity.
Given a surface $\Sigma$ with $n+1$ marked points, there is a forgetful map $\pi:\overline{\mathcal M}_{g,n+1}\to\overline{\mathcal M}_{g,n}$ that forgets one of the marked points $p$. If we insert the class $\tau_{d+1}$ at $p$ and integrate over the fiber of $\pi$, we get the Miller-Morita-Mumford class $\kappa_d=\pi_*(\tau_{d+1})$, which is a class of degree $2d$ in the cohomology of $\overline{\mathcal M}_{g,n}$.
As a first step in evaluating a correlation function $\langle\tau_{d+1}\prod_{j=1}^k\tau_{n_j}\rangle$, one might try to integrate over the choice of the point at which $\tau_{d+1}$ is inserted. Integrating over the fiber of $\pi:\overline{\mathcal M}_{g,k+1}\to\overline{\mathcal M}_{g,k}$, one might hope to get a formula
$$\Big\langle\tau_{d+1}\prod_{j=1}^k\tau_{n_j}\Big\rangle\overset{?}{=}\Big\langle\kappa_d\prod_{j=1}^k\tau_{n_j}\Big\rangle.\qquad(2.15)$$
This is not true, however. The right version of the formula has corrections that involve contact terms in which $\tau_{d+1}$ collides with $\tau_{n_j}$ for some $j$. Such a collision generates a correction that is an insertion of $\tau_{d+n_j}$. For a fuller explanation, see [17]. Taking account of the contact terms, one can express correlation functions of the $\tau$'s in terms of those of the $\kappa$'s, and vice-versa.
A special case is the computation of volumes. As before, we write just $\kappa$ for $\kappa_1$, and we define the volume of $\mathcal M_g$ as $\int_{\overline{\mathcal M}_g}\kappa^{3g-3}/(3g-3)!$. This can be expressed in terms of correlation functions of the $\tau$'s, but one has to take the contact terms into account.
As an example, we consider the case of a closed surface of genus 2. The volume of the compactified moduli space $\overline{\mathcal M}_2$ is
$$V_2=\int_{\overline{\mathcal M}_2}\frac{\kappa^3}{3!},\qquad(2.16)$$
and we want to compare this to topological gravity correlation functions such as
$$\langle\tau_2\tau_2\tau_2\rangle=\int_{\overline{\mathcal M}_{2,3}}\psi_1^2\psi_2^2\psi_3^2.\qquad(2.17)$$
By integrating over the position of one puncture, we can replace one copy of $\tau_2$ with $\kappa$, while also generating contact terms. In such a contact term, $\tau_2$ collides with some $\tau_s$, $s\geq 0$, to generate a contact term $\tau_{s+1}$. Thus for example
$$\langle\tau_2\tau_2\tau_2\rangle=\langle\kappa\,\tau_2\tau_2\rangle+2\langle\tau_3\tau_2\rangle,\qquad(2.18)$$
where the factor of 2 reflects the fact that the first $\tau_2$ may collide with either of the two other $\tau_2$ insertions to generate a $\tau_3$. The same process applies if factors of $\kappa$ are already present; they do not generate additional contact terms. For example,
$$\langle\kappa\,\tau_2\tau_2\rangle=\langle\kappa^2\tau_2\rangle+\langle\kappa\,\tau_3\rangle.\qquad(2.19)$$
Similarly
$$\langle\tau_3\tau_2\rangle=\langle\kappa\,\tau_3\rangle+\langle\tau_4\rangle,\qquad\langle\kappa^2\tau_2\rangle=\langle\kappa^3\rangle.\qquad(2.20)$$
Taking linear combinations of these formulas, we learn finally that
$$\langle\kappa^3\rangle=\langle\tau_2^3\rangle-3\langle\tau_2\tau_3\rangle+\langle\tau_4\rangle.\qquad(2.21)$$
This is equivalent to saying that $V_2$, which is the term of order $\xi^3$ in
$$\langle\exp(\xi\kappa)\rangle,\qquad(2.22)$$
is equally well the term of order $\xi^3$ in
$$\Big\langle\exp\Big(\xi\tau_2-\frac{\xi^2}{2}\tau_3+\frac{\xi^3}{6}\tau_4-\cdots\Big)\Big\rangle.\qquad(2.23)$$
The generalization of this for higher genus is that
$$\langle\exp(\xi\kappa)\rangle=\Big\langle\exp\Big(\sum_{k=1}^\infty\frac{(-1)^{k-1}\xi^k}{k!}\,\tau_{k+1}\Big)\Big\rangle.\qquad(2.24)$$
The volume of $\mathcal M_g$ is the coefficient of $\xi^{3g-3}$ in the expansion of either of these formulas. To prove eqn. (2.24), we write $W(\xi)$ for the right hand side, and we compute that
$$\frac{\mathrm{d}W}{\mathrm{d}\xi}=\Big\langle\Big(\tau_2-\xi\tau_3+\frac{\xi^2}{2}\tau_4-\cdots\Big)\exp\Big(\sum_{k=1}^\infty\frac{(-1)^{k-1}\xi^k}{k!}\,\tau_{k+1}\Big)\Big\rangle.\qquad(2.25)$$
Next, one replaces the explicit $\tau_2$ term in the parentheses on the right hand side with $\kappa$ plus a sum of contact terms between $\tau_2$ and the $\tau_k$'s that appear in the exponential. These contact terms cancel the $\tau_r$'s inside the parentheses, and one finds
$$\frac{\mathrm{d}W}{\mathrm{d}\xi}=\Big\langle\kappa\,\exp\Big(\sum_{k=1}^\infty\frac{(-1)^{k-1}\xi^k}{k!}\,\tau_{k+1}\Big)\Big\rangle.\qquad(2.26)$$
Repeating this process gives for all $s\geq 0$
$$\frac{\mathrm{d}^sW}{\mathrm{d}\xi^s}=\Big\langle\kappa^s\exp\Big(\sum_{k=1}^\infty\frac{(-1)^{k-1}\xi^k}{k!}\,\tau_{k+1}\Big)\Big\rangle,\qquad(2.27)$$
so that
$$\frac{\mathrm{d}^sW}{\mathrm{d}\xi^s}\Big|_{\xi=0}=\langle\kappa^s\rangle,\qquad(2.28)$$
and the fact that this is true for all $s\geq 0$ is equivalent to eqn. (2.24).
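As a sanity check on the genus 2 bookkeeping, the $\xi^3$ coefficient of eqn. (2.23) can be extracted symbolically and combined with the standard genus 2 intersection numbers $\langle\tau_2^3\rangle=7/240$, $\langle\tau_2\tau_3\rangle=29/5760$, $\langle\tau_4\rangle=1/1152$ (quoted here for illustration; they are not computed in the text above). A minimal sketch using SymPy:

    import sympy as sp

    xi, t2, t3, t4 = sp.symbols('xi tau2 tau3 tau4')

    # xi^3 coefficient of exp(xi*tau2 - xi^2/2*tau3 + xi^3/6*tau4 - ...);
    # higher terms in the exponent cannot contribute at order xi^3.
    W = sp.exp(xi*t2 - sp.Rational(1, 2)*xi**2*t3 + sp.Rational(1, 6)*xi**3*t4)
    ser = sp.expand(sp.series(W, xi, 0, 4).removeO())
    c3 = ser.coeff(xi, 3)
    print(c3)  # tau2**3/6 - tau2*tau3/2 + tau4/6, i.e. eqn. (2.21) divided by 3!

    # Substitute the standard genus 2 values to get the kappa-normalized volume:
    V2 = (sp.Rational(1, 6)*sp.Rational(7, 240)
          - sp.Rational(1, 2)*sp.Rational(29, 5760)
          + sp.Rational(1, 6)*sp.Rational(1, 1152))
    print(V2)  # 43/17280

The output $43/17280=\langle\kappa^3\rangle/3!$ is consistent with the known Weil-Petersson volume $43\pi^6/2160$ of $\mathcal M_2$ after multiplying by $(2\pi^2)^3$.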
Eqn. (2.24) has been deduced by comparing matrix model formulas to Mirzakhani's formulas for the volumes [42]. We will return to this when we discuss the spectral curve in section 4.2. For algebro-geometric approaches and generalizations see [43,44]. It is also possible to obtain similar formulas for the volume of M g, b .
Preliminaries
In this section, we provide an introduction to recent work [3][4][5][6] on topological gravity on open Riemann surfaces, that is, on oriented two-manifolds with boundary.
From the point of view of matrix models of two-dimensional gravity, one should expect an interesting theory of this sort to exist because adding vector degrees of freedom to a matrix model of two-dimensional gravity gives a potential model of two-manifolds with boundary.$^7$ We will discuss matrix models with vector degrees of freedom in section 4. Here, however, we discuss the topological field theory side of the story.
Let Σ be a Riemann surface with boundary, and in general with marked points or punctures both in bulk and on the boundary. Its complex structure gives Σ a natural orientation, and this induces orientations on all boundary components. If p is a bulk puncture, then the cotangent space to p in Σ is a complex line bundle L, and as we reviewed in section 2.1, one defines for every integer d ≥ 0 the cohomology class τ d = ψ d of degree 2d. The operator τ 0 = 1 is associated to a bulk puncture, and the τ d with d > 0 are called gravitational descendants.
A boundary puncture has no analogous gravitational descendants, because if p is a boundary point in Σ, the tangent bundle to p in Σ is naturally trivial. It has a natural real subbundle given by the tangent space to p in ∂Σ, and this subbundle is actually trivialized (up to homotopy) by the orientation of ∂Σ. So c 1 (L) = 0 if p is a boundary puncture.
Thus the list of observables in 2d topological gravity on a Riemann surface with boundary consists of the usual bulk observables $\tau_d$, $d\geq 0$, and one more boundary observable $\sigma$, corresponding to a boundary puncture. Formally, the sort of thing one hopes to calculate for a Riemann surface $\Sigma$ with $n$ bulk punctures and $m$ boundary punctures is
$$\langle\tau_{d_1}\tau_{d_2}\cdots\tau_{d_n}\sigma^m\rangle_\Sigma=\int_{\mathcal M}\psi_1^{d_1}\psi_2^{d_2}\cdots\psi_n^{d_n},\qquad(3.1)$$
where $\mathcal M$ is the (compactified) moduli space of conformal structures on $\Sigma$ with $n$ bulk punctures and $m$ boundary punctures. The $d_i$ are arbitrary nonnegative integers, and we note that the cohomology class $\prod_{i=1}^n\psi_i^{d_i}$ that is integrated over $\mathcal M$ is generated only from data at the bulk punctures (and in fact only from those bulk punctures with $d_i>0$). The boundary punctures (and those bulk punctures with $d_i=0$) participate in the construction only because they enter the definition of $\mathcal M$, the space on which the cohomology class in question is supposed to be integrated. Similarly to the case of a Riemann surface without boundary, to make the integral (3.1) nonzero, $\Sigma$ must be chosen topologically so that the dimension of $\mathcal M$ is the same as the degree of the cohomology class that we want to integrate:
$$2\sum_{i=1}^n d_i=6g-6+3h+2n+m,\qquad(3.2)$$
where $g$ is the genus of $\Sigma$ and $h$ the number of its boundary components. Assuming that we can make sense of the definition in eqn. (3.1), the (unnormalized) correlation function $\langle\tau_{d_1}\tau_{d_2}\cdots\tau_{d_n}\sigma^m\rangle$ of 2d gravity on Riemann surfaces with boundary is then obtained by summing $\langle\tau_{d_1}\tau_{d_2}\cdots\tau_{d_n}\sigma^m\rangle_\Sigma$ over all topological choices of $\Sigma$. (If $\Sigma$ has more than one boundary component, the sum over $\Sigma$ includes a sum over how the boundary punctures are distributed among those boundary components.) It is also possible to slightly generalize the definition by weighting a surface $\Sigma$ in a way that depends on the number of its boundary components. For this, we introduce a parameter $w$, and weight a surface with $h$ boundary components with a factor of$^8$ $w^h$.

$^8$ Still more generally, we could introduce a finite set $S$ of "labels" for the boundaries, so that each boundary is labeled by some $s\in S$. Then for each $s\in S$, one would have a boundary observable $\sigma_s$ corresponding to a puncture inserted on a boundary with label $s$, and a corresponding parameter $v_s$ to count such punctures. Eqn. (3.3) below corresponds to the case that $w$ is the cardinality of the set $S$, and $v_s=v$ for all $s\in S$. This generalization to include labels would correspond in eqn. (4.49) below to modifying the matrix integral with a factor $\prod_{s\in S}\det(z_s-\Phi)$. Similarly, one could replace $w^h$ by $\prod_{s\in S}w_s^{h_s}$, where $h_s$ is the number of boundary components labeled by $s$ and there is a separate parameter $w_s$ for each $s$. This corresponds to including in the matrix integral a factor $\prod_{s\in S}(\det(z_s-\Phi))^{w_s}$. Such generalizations have been treated in [45].
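Returning to the dimension condition (3.2) as reconstructed above, a quick illustration (both counts below are elementary): for the disc ($g=0$, $h=1$) with three boundary punctures ($n=0$, $m=3$), one has $6g-6+3h+2n+m=-6+3+0+3=0$, so $\langle\sigma^3\rangle_\Sigma$ can be nonzero; for the disc with one bulk and one boundary puncture ($n=1$, $m=1$), one has $-6+3+2+1=0=2d_1$, so only $\langle\tau_0\,\sigma\rangle_\Sigma$ survives.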
There are two immediate difficulties with this formal definition: (1) To integrate a cohomology class such as i ψ d i i over a manifold M , that manifold must be oriented. But the moduli space of Riemann surfaces with boundary is actually unorientable.
(2) For the integral of a cohomology class over an oriented manifold M to be well-defined topologically, M should have no boundary, or the cohomology class in question should be trivialized along the boundary of M . However, the compactified moduli space M of conformal structures on a Riemann surface with boundary is itself in general a manifold with boundary.
Dealing with these issues requires some refinements of the above formal definition [3][4][5][6]. The rest of this section is devoted to an introduction to these refinements. We begin with the unorientability of $\mathcal M$. Implications of the fact that $\mathcal M$ is a manifold with boundary will be discussed in section 3.9.
The Anomaly
The problem in orienting the moduli space of Riemann surfaces with boundary can be seen most directly in the absence of boundary punctures. Thus we let Σ be a Riemann surface of genus g with h holes or boundary components and no boundary punctures, but possibly containing bulk punctures.
First of all, if h = 0, then Σ is an ordinary closed Riemann surface, possibly with punctures. The compactified moduli space $\mathcal M$ of conformal structures on Σ is then a complex manifold (or more precisely an orbifold) and consequently has a natural orientation. This orientation is used in defining the usual intersection numbers on $\mathcal M$, that is, the correlation functions of 2d topological gravity on a Riemann surface without boundary.
This remains true if Σ has punctures (which automatically are bulk punctures since so far Σ has no boundary). Now let us replace some of the punctures of Σ by holes. Each time we replace a bulk puncture by a hole, we add one real modulus to the moduli space. If we view Σ as a two-manifold
with hyperbolic metric and geodesic boundaries, then the extra modulus is the circumference b around the hole.
By adding h ≥ 1 holes, we add h real moduli $b_1, b_2, \cdots, b_h$, which we write collectively as $\vec b$. We denote the corresponding compactified moduli space as $\mathcal M_{g,n,\vec b}$ (here n is the number of punctures that have not been converted to holes). A very important detail is the following. In defining Weil-Petersson volumes in section 2, we treated the $b_i$ as arbitrary constants; the "volume" was defined as the volume for fixed $\vec b$, without trying to integrate over $\vec b$. (Such an integral would have been divergent, since the volume function $V_{g,n,\vec b}$ is polynomial in $\vec b$. Moreover, what naturally enters Mirzakhani's recursion relation is the volume function defined for fixed $\vec b$.) In defining two-dimensional gravity on a Riemann surface with boundary, the $b_i$ are treated as full-fledged moduli: they are some of the moduli that one integrates over in defining the intersection numbers. Hopefully this change in viewpoint relative to section 2 will not cause serious confusion.
If we suppress the $b_i$ by setting them all to 0, the holes turn back into punctures and $\mathcal M_{g,n,\vec b}$ is replaced by $\mathcal M_{g,n+h}$. This is a complex manifold (or rather an orbifold) and in particular has a natural orientation. Restoring the $b_i$, $\mathcal M_{g,n,\vec b}$ is a fiber bundle over $\mathcal M_{g,n+h}$ with fiber a copy of $\mathbb R_+^h$ parametrized by $b_1,\dots,b_h$. (Here $\mathbb R_+$ is the space of positive real numbers and $\mathbb R_+^h$ is the Cartesian product of h copies of $\mathbb R_+$.) Orienting $\mathcal M_{g,n,\vec b}$ is equivalent to orienting this copy of $\mathbb R_+^h$.
If we were given an ordering of the holes in Σ up to an even permutation, we would orient $\mathbb R_+^h$ by the differential form

$$\Omega=db_1\,db_2\cdots db_h.\qquad(3.4)$$

However, for h > 1, in the absence of any information about how the holes should be ordered, $\mathbb R_+^h$ has no natural orientation.
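To make the sign ambiguity concrete, here is the h = 2 case written out (a trivial check we supply):

% Relabeling the two holes exchanges b_1 and b_2 and flips the
% orientation form of eqn. (3.4):
\Omega=db_1\,db_2\;\longrightarrow\;db_2\,db_1=-\,db_1\,db_2 .
% With no preferred ordering of the holes, neither sign is distinguished.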
Thus $\mathcal M_{g,n,\vec b}$ has no natural orientation for h > 1. In fact it is unorientable. This follows from the fact that a Riemann surface Σ with more than one hole has a diffeomorphism that exchanges two of the holes, leaving the others fixed. (Moreover, this diffeomorphism can be chosen to preserve the orientation of Σ.) Dividing by this diffeomorphism in constructing the moduli space $\mathcal M_{g,n,\vec b}$ ensures that this moduli space is actually unorientable.
We can view this as a global anomaly in two-dimensional topological gravity on an oriented two-manifold with boundary. The moduli space is not oriented, or even orientable, so there is no way to make sense of the correlation functions that one wishes to define.
As usual, to cancel the anomaly, one can try to couple two-dimensional topological gravity to some matter system that carries a compensating anomaly. In the context of two-dimensional topological gravity, the matter system in question should be a topological field theory. To define a theory that reduces to the usual topological gravity when the boundary of Σ is empty, we need a topological field theory that on a Riemann surface without boundary is essentially trivial, in a sense that we will see, and in particular is anomaly-free. But the theory should become anomalous in the presence of boundaries.
These conditions may sound too strong, but there actually is a topological field theory with the right properties. First of all, we endow Σ with a spin structure. (We will ultimately sum over spin structures to get a true topological field theory that does not depend on the choice of an a priori spin structure on Σ.) When ∂Σ = ∅, we can define a chiral Dirac operator on Σ (a Dirac operator acting on positive chirality spin 1/2 fields on Σ). There is then a $\mathbb Z_2$-valued invariant that we call ζ, namely the mod 2 index of the chiral Dirac operator, in the sense of Atiyah and Singer [46,47]. ζ is defined as the number of zero-modes of the chiral Dirac operator, reduced mod 2. ζ is a topological invariant in that it does not depend on the choice of a conformal structure (or metric) on Σ. A spin structure is said to be even or odd if the number of chiral zero-modes is even or odd (in other words if ζ = 0 or ζ = 1). For an introduction to these matters, see [48], especially section 3.2.
We define a topological field theory by summing over spin structures on Σ with each spin structure weighted by a factor of $\frac12(-1)^\zeta$. The reason for the factor of $\frac12$ is that a spin structure has a symmetry group that acts on fermions as ±1, with 2 elements. As in Faddeev-Popov gauge-fixing in gauge theory, to define a topological field theory, one needs to divide by the order of the unbroken symmetry group, which in this case is the group $\mathbb Z_2$. This accounts for the factor of $\frac12$. The more interesting factor, which will lead to a boundary anomaly, is $(-1)^\zeta$. It may not be immediately apparent that we can define a topological field theory with this factor included. We will describe two realizations of the theory in question in section 3.4, making it clear that there is such a topological field theory. We will call it T.
On a Riemann surface of genus g, there are $\frac12(2^{2g}+2^g)$ even spin structures and $\frac12(2^{2g}-2^g)$ odd ones. The partition function of T in genus g is thus

$$Z_g=\frac12\left(\frac{2^{2g}+2^g}{2}-\frac{2^{2g}-2^g}{2}\right)=2^{g-1}.\qquad(3.5)$$

This is not equal to 1, and thus the topological field theory T is nontrivial. However, when we couple to topological gravity, the genus g amplitude has a factor $g_{\rm st}^{2g-2}$, where $g_{\rm st}$ is the string coupling constant. The product of this with $Z_g$ is $(2g_{\rm st}^2)^{g-1}$. Thus, as long as we are on a Riemann surface without boundary, coupling topological gravity to T can be compensated by absorbing a factor of $\sqrt2$ in the definition of $g_{\rm st}$. In that sense, coupling of topological gravity to T has no effect, as long as we consider only closed Riemann surfaces.
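As a quick arithmetic check of eqn. (3.5) (our own check, not in the source):

% Genus 1: 3 even and 1 odd spin structure, so
Z_1=\tfrac12\,(3-1)=1=2^{0},
% Genus 2: 10 even and 6 odd spin structures, so
Z_2=\tfrac12\,(10-6)=2=2^{1},
% in agreement with Z_g = 2^{g-1}.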
Matters are different if Σ has a boundary. On a Riemann surface with boundary, it is not possible to define a local boundary condition for the chiral Dirac operator that is complex linear and sensible (elliptic), and there is no topological invariant corresponding to ζ. Thus theory T cannot be defined as a topological field theory on a manifold with boundary.
It is possible to define theory T on a manifold with boundary as a sort of anomalous topological field theory, with an anomaly that will help compensate for the problem that we found above with the orientation of the moduli space. To explain this, we will first describe some more physical constructions of theory T, beginning with how theory T is related to contemporary topics in condensed matter physics.
Relation To Condensed Matter Physics
Theory T has a close cousin that is familiar in condensed matter physics. One considers a chain of fermions in 1 + 1 dimensions with the property that in the bulk of the chain there is an energy gap to the first excited state above the ground state, and the further requirement that the chain is in an "invertible" phase, meaning that the tensor product of a suitable number of identical chains would be completely trivial.¹² There are two such phases, just one of which is nontrivial. The nontrivial phase is called the Kitaev spin chain [50]. It is characterized by the fact that at the end of an extremely long chain, there is an unpaired Majorana fermion mode, usually called a zero-mode because in the limit of a long chain, it commutes with the Hamiltonian.¹³

[Footnote 12: Triviality here means that by deforming the Hamiltonian without losing the gap in the bulk spectrum, one can reach a Hamiltonian whose ground state is the tensor product of local wavefunctions, one on each lattice site.]

[Footnote 13: A long but finite chain has a pair of such Majorana modes, one at each end. Upon quantization, they generate a rank two Clifford algebra, whose irreducible representation is two-dimensional. As a result, a long chain is exponentially close to having a two-fold degenerate ground state. In condensed matter physics, this degeneracy is broken by tunneling effects in which a fermion propagates between the two ends of the chain. In the idealized model considered below, the degeneracy is exact.]

The Kitaev spin chain is naturally studied in condensed matter physics from a Hamiltonian point of view, which in fact we adopted in the last paragraph. From a relativistic point of view, the Kitaev spin chain corresponds to a topological field theory that is defined on an oriented two-dimensional spin manifold Σ, and whose partition function if Σ has no boundary is $(-1)^\zeta$. We will see below how this statement relates to more standard characterizations of the Kitaev spin chain. Our theory T differs from the Kitaev spin chain simply in that we sum over spin structures in defining it, while the Kitaev model is a theory of fermions and is defined on a two-manifold with a particular spin structure. Moreover, as we discuss in detail below, when Σ has a boundary, the appropriate boundary conditions in the context of condensed matter physics are different from what they are in our application to two-dimensional gravity. Despite these differences, the comparison between the two problems will be illuminating.
Because we are here studying two-dimensional gravity on an oriented two-manifold Σ, timereversal symmetry, which corresponds to a diffeomorphism that reverses the orientation of Σ, will not play any role. The Kitaev spin chain has an interesting time-reversal symmetric refinement, but this will not be relevant.
Theory T has another interesting relation to condensed matter physics: it is associated to the high temperature phase of the two-dimensional Ising model. In this interpretation [51], the triviality of theory T corresponds to the fact that the Ising model in its high temperature phase has only one equilibrium state, which moreover is gapped.
Two Realizations Of Theory T
We will describe two realizations of theory T , one in the spirit of condensed matter physics, where we get a topological field theory as the low energy limit of a physical gapped system, and one in the spirit of topological sigma models [52], where a supersymmetric theory is twisted to get a topological field theory.
First we consider a massive Majorana fermion in two spacetime dimensions. It is convenient to work in Euclidean signature. One can choose the Dirac operator to be

$$D_m=\gamma^1\frac{\partial}{\partial x^1}+\gamma^2\frac{\partial}{\partial x^2}+m\bar\gamma,\qquad(3.6)$$

where one can choose the gamma matrices to be real and symmetric, for instance

$$\gamma_1=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad\gamma_2=\begin{pmatrix}0&1\\1&0\end{pmatrix}.\qquad(3.7)$$

This ensures that $\bar\gamma=\gamma_1\gamma_2$ is real and antisymmetric:

$$\bar\gamma=\begin{pmatrix}0&1\\-1&0\end{pmatrix}.\qquad(3.8)$$

These choices ensure that the Dirac operator $D_m$ is real and antisymmetric. We call m the mass parameter; the mass of the fermion is actually |m|.
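As a consistency check of the reconstruction above (our own check, under the assumption that eqns. (3.6)-(3.8) take the form just given):

% Each term of D_m is antisymmetric. The gamma matrices are symmetric
% while d/dx^i is antisymmetric under integration by parts:
(\gamma^i\partial_i)^{\rm tr}=(\partial_i)^{\rm tr}(\gamma^i)^{\rm tr}
=(-\partial_i)\,\gamma^i=-\,\gamma^i\partial_i ,
% and the mass term is antisymmetric since bar-gamma^tr = -bar-gamma:
(m\bar\gamma)^{\rm tr}=-\,m\bar\gamma .
% Hence D_m^tr = -D_m, so the Pfaffian Pf(D_m) makes sense.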
Formally, the path integral for a single Majorana fermion is $\mathrm{Pf}(D_m)$, the Pfaffian of the real antisymmetric operator $D_m$. The Pfaffian of a real antisymmetric operator is real, and its square is the determinant; in the present context, the determinant $\det D_m=(\mathrm{Pf}(D_m))^2$ can be defined satisfactorily by, for example, zeta-function regularization. However, the sign of the Pfaffian is subtle. For a finite-dimensional real antisymmetric matrix M, the sign of the Pfaffian depends on an orientation of the space that M acts on. In the case of the infinite-dimensional matrix $D_m$, no such natural orientation presents itself and therefore, for a single Majorana fermion, there is no natural choice of the sign of $\mathrm{Pf}(D_m)$.
Suppose, however, that we consider a pair of Majorana fermions with the same (nonzero) mass parameter m. Then the path integral is $\mathrm{Pf}(D_m)^2$ and (since $\mathrm{Pf}(D_m)$ is naturally real) this is real and positive. This actually ensures that the topological field theory obtained from the low energy limit of a pair of massive Majorana fermions of the same mass parameter is completely trivial. Without losing the mass gap, we can take |m| → ∞, and in that limit, $\mathrm{Pf}(D_m)^2$ produces no physical effects at all except for a renormalization of some of the parameters in the effective action. To get theory T, we consider instead a pair of Majorana fermions, one of positive mass parameter and one of negative mass parameter. To interpolate between this theory and the equal mass parameter case just by varying mass parameters, we would have to let a mass parameter pass through 0, losing the mass gap.
This suggests that a theory with opposite sign mass parameters might be in an essentially different phase from the trivial case of equal mass parameters. To establish this and show the relation to theory T, we will analyze what happens to the partition function of the theory when the mass parameter of a single Majorana fermion is varied between positive and negative values. To be more precise, let $\hat\gamma=i\bar\gamma$ be the chirality operator, with eigenvalues 1 and −1 for fermions of positive or negative chirality. What we called the chiral Dirac operator when we defined the mod 2 index in section 3.2 is the operator D restricted to act on states of $\hat\gamma=+1$. Since $\hat\gamma$ is imaginary and D is real, complex conjugation of an eigenfunction reverses its $\hat\gamma$ eigenvalue while commuting with D; thus zero-modes of D occur in pairs with equal and opposite chirality. On each such pair, the mass term acts as a 2 × 2 real antisymmetric block whose Pfaffian is proportional to m, so as m is varied from positive to negative values, $\mathrm{Pf}(D_m)$ changes sign once for each pair, that is, by an overall factor $(-1)^\zeta$. Now we can answer the original question. Since the partition function for the theory with two equal mass parameters is trivial (up to a renormalization of some of the low energy parameters), the partition function of the theory with one mass of each sign is $(-1)^\zeta$ (up to such a renormalization). Thus we have found a physical realization of theory T.
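The 2 × 2 block responsible for the sign flip can be written out explicitly (a sketch of the step just described; the real basis for the pair is chosen for illustration):

% Restrict D_m = D + m bar-gamma to a pair of zero-modes of D with
% opposite chirality. In a real basis for the pair, D contributes 0
% and the mass term contributes the antisymmetric block
D_m\big|_{\rm pair}=m\begin{pmatrix}0&1\\-1&0\end{pmatrix},
\qquad
\mathrm{Pf}\!\left[m\begin{pmatrix}0&1\\-1&0\end{pmatrix}\right]=m .
% The Pfaffian of each block is linear in m, so it changes sign as m
% passes through zero; with the number of pairs equal to zeta mod 2,
% Pf(D_m) flips by (-1)^zeta.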
The result we have found can be interpreted in terms of a discrete chiral anomaly. At the classical level, for m = 0, the Majorana fermion has a $\mathbb Z_2$ chiral symmetry¹⁵ $\psi\to\hat\gamma\psi$. The mass parameter is odd under this symmetry, so classically the theories with positive or negative m are equivalent. Quantum mechanically, one has to ask whether the fermion measure is invariant under the discrete chiral symmetry. As usual, the nonzero modes of the Dirac operator are paired up in such a way that the measure for those modes is invariant; thus one only has to test the zero-modes. Since $\psi\to\hat\gamma\psi$ leaves invariant the positive chirality zero-modes and multiplies each negative chirality zero-mode by −1, this operation transforms the measure by a factor $(-1)^s=(-1)^\zeta$, where s is the number of negative (or positive) chirality zero-modes, and ζ is the mod 2 index.

[Footnote 15: With our conventions, the operator $\hat\gamma$ is imaginary in Euclidean signature and one might wonder if this symmetry makes sense for a Majorana fermion. However, after Wick rotation to Lorentz signature (in which $\gamma^0$ acquires a factor of i), $\hat\gamma$ becomes real, and it is always in Lorentz signature that reality conditions should be imposed on fermion fields and their symmetries. Thus actually $\psi\to\hat\gamma\psi$ is a physically meaningful symmetry and $\psi\to\bar\gamma\psi$ (which may look more natural in Euclidean signature) is not. Under the latter transformation, the massless Dirac action actually changes sign, so it is indeed $\psi\to\hat\gamma\psi$ and not $\psi\to\bar\gamma\psi$ that is a symmetry at the classical level.]
Finally, we will describe another though closely related way to construct the same topological field theory. The extra machinery required will be useful later. We consider in two dimensions a theory with (2, 2) supersymmetry and a single complex chiral superfield Φ. We work in flat spacetime to begin with and assume a familiarity with the usual superspace formalism of (2, 2) supersymmetry and its twisting to make a topological field theory. Φ can be expanded

$$\Phi=\phi+\sqrt2\,\theta^+\psi_++\sqrt2\,\theta^-\psi_-+2\,\theta^+\theta^-F+\cdots.$$

Here φ is a complex scalar field; $\psi_+$ and $\psi_-$ are the chiral components of a Dirac fermion field; F is a complex auxiliary field; and the ellipsis refers to terms involving derivatives of these fields. We consider the action

$$I=\int d^2x\,d^4\theta\,\overline\Phi\Phi+\left(\int d^2x\,d^2\theta\,\frac{im}{2}\Phi^2+{\rm c.c.}\right).$$

Thus the superpotential is $W(\Phi)=im\Phi^2/2$. In general, here m is a complex mass parameter, but for our purposes we can assume that m > 0. After integrating over the θ's and integrating out the auxiliary field F, the action becomes¹⁶

$$I=\int d^2x\left(\partial_\mu\bar\phi\,\partial^\mu\phi+m^2\bar\phi\phi+\bar\psi\,i\gamma^\mu\partial_\mu\psi+\left(\frac{im}{2}\,\epsilon^{\alpha\beta}\psi_\alpha\psi_\beta+{\rm c.c.}\right)\right).$$

[Footnote 16: Here $\epsilon^{\alpha\beta}$ is the Levi-Civita antisymmetric tensor in the two-dimensional space spanned by $\psi_+,\psi_-$.]

If we expand the Dirac fermion ψ in terms of a pair of Majorana fermions $\chi_1,\chi_2$ by $\psi=(\chi_1+i\chi_2)/\sqrt2$, we find that $\chi_1$ and $\chi_2$ are massive Majorana fermions with a mass matrix that has one positive and one negative eigenvalue, as in our previous construction of theory T. The massive field φ does not play an important role at low energies: its path integral is positive definite, and in the large m or low energy limit, just contributes renormalization effects. So at low energies the supersymmetric theory considered here gives another realization of theory T. However, the supersymmetric machinery gives a way to obtain theory T without taking a low energy limit, and this will be useful later. Because the superpotential $W=im\Phi^2/2$ is homogeneous in Φ, the theory has a U(1) R-symmetry that acts on the superspace coordinates as $\theta^\pm\to e^{i\alpha}\theta^\pm$. Because
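The opposite-sign mass eigenvalues can be exhibited directly (our own expansion; overall signs depend on conventions for the conjugate term):

% Substitute psi = (chi_1 + i chi_2)/sqrt(2) into the fermion mass term
% i m psi_+ psi_- + c.c. :
im\,\psi_+\psi_-+{\rm c.c.}
=\frac{im}{2}(\chi_{1+}+i\chi_{2+})(\chi_{1-}+i\chi_{2-})+{\rm c.c.}
=-\,m\left(\chi_{1+}\chi_{2-}+\chi_{2+}\chi_{1-}\right).
% This is off-diagonal in (chi_1, chi_2); diagonalizing with
% chi_{\pm} = (chi_1 \pm chi_2)/sqrt(2) gives Majorana mass terms with
% coefficients -m and +m: one positive and one negative mass parameter.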
W is quadratic in Φ, one has to define this symmetry to leave ψ invariant and to transform φ by $\phi\to e^{i\alpha}\phi$. When one "twists" to make a topological field theory, the spin of a field is shifted by one-half of its R-charge. In the present case, as ψ is invariant under the R-symmetry, it remains a Dirac fermion after twisting, but φ acquires spin +1/2 (it transforms under rotations like the positive chirality part of a Dirac fermion).
The twisted theory can be formulated as a topological field theory on any Riemann surface Σ with any metric tensor. We use the phrase "topological field theory" loosely since the twisted theory, as it has fields of spin 1/2, requires a choice of spin structure. To get a true topological field theory, one has to sum over the choice of spin structure. The supersymmetry of the twisted theory ensures that the path integral over φ cancels the absolute value of the path integral over ψ, leaving only the sign (−1) ζ . Thus the twisted theory is precisely equivalent to theory T , without taking any low energy limit.
In [49], this last statement is deduced in another way as a special case of an analysis of a theory with $\Phi^r$ superpotential for any r ≥ 2.
For our later application, it will be useful to know that the condition for a configuration of the φ field to preserve the supersymmetry of the twisted theory is

$$2\frac{\partial\phi}{\partial z}=-im\bar\phi.\qquad(3.13)$$

The generalization of this equation for arbitrary superpotential is

$$2\frac{\partial\phi}{\partial z}=\overline{\frac{\partial W}{\partial\phi}},\qquad(3.14)$$

which has been called the ζ-instanton equation [53]. A small calculation shows that if we set

$$\psi=\begin{pmatrix}\phi_1\\\phi_2\end{pmatrix},\qquad\phi=\phi_1+i\phi_2,\qquad(3.15)$$

then eqn. (3.13) is equivalent to the massive Dirac equation $D_m\psi=0$, with $D_m$ the massive Dirac operator (3.6).
Boundary Conditions In Theory T
Our next task is to consider theory T on a manifold with boundary. Here of course we must begin by discussing possible boundary conditions. In this section, we will use the realization of theory T in terms of a pair of Majorana fermions with opposite masses.
The main requirement for a boundary condition is that it should preserve the antisymmetry of the operator $D_m$. If tr denotes the transpose, then the antisymmetry means concretely that

$$\int_\Sigma d^2x\,\chi^{\rm tr}D_m\psi=-\int_\Sigma d^2x\,\left(D_m\chi\right)^{\rm tr}\psi.\qquad(3.16)$$

In verifying this, one has to integrate by parts, and one encounters a surface term, which is the boundary integral of $\chi^{\rm tr}\gamma_\perp\psi$, where $\gamma_\perp$ is the gamma matrix normal to the boundary. This will vanish if we impose the boundary condition

$$\gamma_\parallel\psi\big|=\eta\psi\big|,\qquad(3.17)$$

where η = +1 or −1, $\gamma_\parallel$ is the gamma matrix tangent to the boundary, and | represents restriction to the boundary. Just to ensure the antisymmetry of the operator $D_m$, either choice of sign will do. With either choice of sign, $D_m$ is a real operator, so its Pfaffian $\mathrm{Pf}(D_m)$ remains real.
The boundary conditions $\gamma_\parallel\psi=\pm\psi$ have a simple interpretation. Tangent to the boundary, there is a single gamma matrix $\gamma_\parallel$. It generates a rank 1 Clifford algebra, satisfying $\gamma_\parallel^2=1$. In an irreducible representation, it satisfies $\gamma_\parallel=1$ or $\gamma_\parallel=-1$. Thus the spin bundle of Σ, which is a real vector bundle of rank 2, decomposes along ∂Σ as the direct sum of two spin bundles of ∂Σ, namely the subbundles defined respectively by $\gamma_\parallel\psi=\psi$ and by $\gamma_\parallel\psi=-\psi$. These two spin bundles of ∂Σ are isomorphic, since they are exchanged by multiplication by $\gamma_\perp$, which is globally-defined along ∂Σ. Thus the spin bundle of Σ decomposes along ∂Σ in a natural way as the direct sum of two copies of the spin bundle of ∂Σ, and the boundary condition says that along the boundary, ψ takes values in one of these bundles. We will write S for the spin bundle of Σ and E for the spin bundle of ∂Σ.

Now let us discuss the behavior near the boundary of a Majorana fermion that satisfies one of these boundary conditions. We work on a half-space in $\mathbb R^2$, say the half-space

$$x^1\geq 0.\qquad(3.18)$$

The Dirac equation $D_m\psi=0$ has a mode localized near the boundary,

$$\psi=e^{-m\eta x^1}\psi_0,\qquad(3.19)$$

for some constant spinor $\psi_0$. In this geometry, $\gamma_2$ is the same as $\gamma_\parallel$. We see that if ψ satisfies the boundary condition $\gamma_\parallel\psi=\eta\psi$, then this mode is normalizable if and only if

$$m\eta>0.\qquad(3.20)$$

If mη < 0, the theory remains gapped, with a gap of order m, even along the boundary. But if mη > 0, the mode that we have just found propagates along the boundary as a 0 + 1-dimensional massless Majorana fermion.
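The exponent in eqn. (3.19) follows in one line from the conventions of the reconstructed eqns. (3.6)-(3.8) (a sketch under those conventions):

% Look for a solution of D_m psi = 0 independent of x^2:
\gamma^1\partial_1\psi+m\bar\gamma\,\psi=0
\;\Longrightarrow\;
\partial_1\psi=-\,m\,\gamma^1\bar\gamma\,\psi=-\,m\,\gamma^2\psi ,
% using (gamma^1)^2 = 1 and gamma^1 bar-gamma = gamma^1 gamma^1 gamma^2 = gamma^2.
% On the eigenspace gamma^2 psi = eta psi (the boundary condition), this gives
\psi=e^{-m\eta x^1}\,\psi_0 ,
% which is normalizable on the half-space x^1 >= 0 precisely when m eta > 0.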
We will now use these results to study the boundary anomaly of theory T , with several possible boundary conditions.
Boundary Anomaly Of Theory T
Let us first recall that for a real fermion field with a real antisymmetric Dirac operator such as $D_m$, in general there is an anomaly in the sign of the path integral $\mathrm{Pf}(D_m)$. The anomaly is naturally described mathematically by saying that there is a real Pfaffian line PF associated to the Dirac operator, and the fermion Pfaffian $\mathrm{Pf}(D_m)$ is well-defined as a section of PF.
In our problem, there are two Majorana fermions, say $\psi_1$ and $\psi_2$, with possibly different masses and possibly different boundary conditions. Correspondingly there are two Pfaffian lines, say $\mathrm{PF}_1$ and $\mathrm{PF}_2$, and the overall Pfaffian line is the tensor product¹⁷

$$\mathrm{PF}=\mathrm{PF}_1\otimes\mathrm{PF}_2.$$

In general, the Pfaffian line of a Dirac operator does not depend on a fermion mass, but it may depend on the boundary conditions. Indeed, as we will see, there is such a dependence in our problem and it will play an essential role.

[Footnote 17: There is a potential subtlety here. If a fermion field has an odd number of zero-modes, its Pfaffian line should be considered odd or fermionic. Accordingly, if $\psi_1$ and $\psi_2$ each have an odd number of zero-modes, then $\mathrm{PF}_1$ and $\mathrm{PF}_2$ are both odd and the correct statement is that $\mathrm{PF}=\mathrm{PF}_1\widehat\otimes\,\mathrm{PF}_2$, where $\widehat\otimes$ is a $\mathbb Z_2$-graded tensor product (this notion is described in section 3.6.2). We will not encounter this subtlety, because always at least one of $\psi_1$ and $\psi_2$ will satisfy one of the boundary conditions (3.17). A fermion field obeying one of those boundary conditions has an even number of zero-modes, since there are none at all if mη < 0 and the number is independent of m mod 2. Note that on a Riemann surface with boundary, there is no notion of the chirality of a zero-mode and we simply count all zero-modes. By contrast, the mod 2 index that is used in defining theory T on a surface without boundary is defined by counting positive chirality zero-modes only.]
We will now consider the boundary path integral and boundary anomaly in our problem for several choices of boundary condition.
The Trivial Case
The most trivial case is that the two masses and also the two boundary conditions are the same. Moreover, we choose the masses and the signs so that mη < 0.
Since the two boundary conditions are the same, $\mathrm{PF}_1$ is canonically isomorphic to $\mathrm{PF}_2$, and therefore $\mathrm{PF}=\mathrm{PF}_1\otimes\mathrm{PF}_2$ is canonically trivial.

Since the two Majorana fermions have the same mass and boundary condition, the combined Dirac operator D of the two modes is just the direct sum of two copies of the same Dirac operator $D_m$. Thus the fermion path integral $\mathrm{Pf}(D)$ satisfies $\mathrm{Pf}(D)=\mathrm{Pf}(D_m)^2$, and in particular $\mathrm{Pf}(D)$ is naturally positive (relative to the trivialization of PF that reflects the isomorphism $\mathrm{PF}_1\cong\mathrm{PF}_2$).
Since mη < 0, there are no low-lying modes near the boundary and the theory has a uniform mass gap of order m along the boundary as well as in the bulk. Therefore, after renormalizing a few constants in the low energy effective action, the path integral Pf(D) is just 1.
In other words, with equal boundary conditions for the two modes, the trivial theory with equal masses remains trivial along the boundary.
Assuming we allow ourselves to make a generic relevant deformation of the theory (as we would
certainly do in condensed matter physics, for example), this is still true if we pick the boundary conditions for the two Majorana fermions to be equal but such that mη > 0. Then we generate two 0 + 1-dimensional massless Majorana fermions, say $\chi_1,\chi_2$. But given any such pair of Majorana modes in 0 + 1 dimensions, one can add a mass term $i\mu\chi_1\chi_2$ to the Hamiltonian (or the Lagrangian), with some constant μ, removing them from the low energy theory. The theory becomes gapped and the renormalized partition function is again 1.
Fermi statistics do not allow the addition of a mass term for a single massless 1d Majorana fermion. Hence the number of 1d Majorana modes along the boundary is a topological invariant mod 2. We will discuss next the case that this invariant is nonzero.
Boundary Condition In Condensed Matter Physics
For theory T, or for the Kitaev spin chain, we consider two Majorana fermions, with opposite signs of m. In the context of condensed matter physics, to study the theory on a manifold with boundary, we want a boundary condition that makes the theory fully anomaly-free. In other words, we want to ensure that the Pfaffian line bundle $\mathrm{PF}=\mathrm{PF}_1\otimes\mathrm{PF}_2$ remains canonically trivial. This is straightforward: since PF is in general independent of the masses, we simply use the same boundary condition as in section 3.6.1 - namely the same sign of η for both Majorana fermions - and then PF remains canonically trivial, regardless of the masses.
However, since the two Majorana fermions have opposite signs of m, we see now that regardless of the common choice of η, precisely one of them has a normalizable zero-mode (3.19) along the boundary. This means that the mass gap of the theory breaks down along the boundary. Although it is gapped in bulk, there is a single 0 + 1-dimensional massless Majorana fermion propagating along the boundary. As we noted in section 3.3, this is regarded in condensed matter physics as the defining property of the Kitaev spin chain.

Now let us discuss a consequence of this construction that has been important in mathematical work [3][4][5][6] on 2d gravity on a manifold with boundary. We will see later the reason for its importance. In general, suppose that Σ has h boundary components $\partial_1\Sigma,\partial_2\Sigma,\cdots,\partial_h\Sigma$. On each boundary component, one makes a choice of sign in the boundary condition, and this determines a real spin bundle $E_i$ of $\partial_i\Sigma$. Along each $\partial_i\Sigma$, there propagates a massless 1d Majorana fermion $\chi_i$. In propagating around $\partial_i\Sigma$, $\chi_i$ may obey either periodic or antiperiodic boundary conditions. Indeed, on the circle $\partial_i\Sigma$, there are two possible spin structures, which in string theory are usually called the Neveu-Schwarz or NS (antiperiodic) spin structure and the Ramond (periodic) spin structure. The NS spin structure is bounding and the R spin structure is unbounding. The underlying spin bundle S of Σ determines whether $E_i$ is of NS or Ramond type. For general S, the only general constraint on the $E_i$ is that the number R of boundary components with Ramond spin structure is even.
The field $\chi_i$, in propagating around the circle $\partial_i\Sigma$, has a zero-mode if and only if $E_i$ is of Ramond type. This is not an exact zero-mode, but it is exponentially close to being one if m is large (compared to the inverse of the characteristic length scale of Σ). Let us write $\nu_i$, i = 1, . . . , R, for these modes. The $\nu_i$ have much smaller eigenvalues of $D_m$ than any other modes of $\psi_1$ and $\psi_2$, so there is a consistent procedure in which we integrate out all other modes and leave an effective theory of the $\nu_i$ only.
Since the underlying theory was chosen to be anomaly-free, it must determine a well-defined measure for the $\nu_i$. This condition is not as innocent as it may sound. A measure on the space parametrized by the $\nu_i$ is something like

$$d\nu_1\,d\nu_2\cdots d\nu_R.\qquad(3.21)$$

However, a priori, this expression does not have a well-defined sign. First of all, its sign is obviously changed if we make an odd permutation of the $\nu_i$, that is of the Ramond boundary components.
But in addition, we should worry about the signs of the individual $\nu_i$. Since the $\nu_i$ are real, we can fix their normalization up to sign by asking them to have, for example, unit $L^2$ norm. But there is no natural way to choose the signs of the $\nu_i$, and obviously, flipping an odd number of the signs will reverse the sign of the measure (3.21).
There is no natural way to pick the signs of the ν i up to an even number of sign flips, and likewise, there is no natural way to pick an ordering of the ν i up to even permutations. However, the fact that there is actually a well-defined measure on the space spanned by ν 1 , · · · , ν R means that one of these choices determines the other. This fact (originally proved in a very different way) is an important lemma in [3][4][5][6].
The existence of a natural measure on the space spanned by the $\nu_i$ can be expressed in the following mathematical language. For i = 1, . . . , R, let $\varepsilon_i$ be the 1-dimensional real vector space generated by $\nu_i$. The $\mathbb Z_2$-graded tensor product¹⁸ of the $\varepsilon_i$, denoted $\widehat\otimes_i\,\varepsilon_i$, is equivalent to the ordinary tensor product once an ordering of the $\varepsilon_i$ is picked, by an isomorphism that reverses sign if two of the $\varepsilon_i$ are exchanged. The lemma that we have been describing is equivalent to the statement that the $\mathbb Z_2$-graded tensor product of the $\varepsilon_i$ is canonically trivial:

$$\widehat\otimes_{i=1}^R\,\varepsilon_i\cong\mathbb R.\qquad(3.22)$$

[Footnote 18: Since this notion may be unfamiliar, we give an example, following P. Deligne. Let $S_i$, i = 1, . . . , t, be a family of circles, and let T be the torus $\prod_{i=1}^t S_i$. Then $\varepsilon_i=H^1(S_i,\mathbb R)$ is a 1-dimensional vector space, as is $\alpha=H^t(T,\mathbb R)$. There can be no natural isomorphism between α and the ordinary tensor product $\otimes_i\varepsilon_i$, since the exchange of two of the circles acts trivially on $\otimes_i\varepsilon_i$, while acting on α as −1. But there is a canonical isomorphism $\alpha\cong\widehat\otimes_i\,\varepsilon_i$.]
Boundary Condition In Two-Dimensional Gravity
For the application of theory T to two-dimensional gravity - or at least to the theory studied in [3][4][5][6] - we need a different boundary condition. In this application, we want theory T to remain gapped along the boundary as well as in bulk. But it will have an anomaly that will help in canceling the gravitational anomaly.
Thus, the two Majorana fermions must remain gapped along the boundary, even though they have opposite masses. To achieve this, we must give the two Majorana fermions opposite boundary conditions, so that mη < 0 for each of the two modes.
Given that the theory has a uniform mass gap of order m even near the boundary, its path integral, after renormalizing a few parameters in the effective action, is of modulus 1. Moreover, this path integral is naturally real. Thus it is fairly natural to write the path integral as $(-1)^\zeta$, just as we did in the absence of a boundary. However, $(-1)^\zeta$ is no longer a number ±1; it now takes values in the real line bundle PF. In fact, since it is everywhere nonzero, $(-1)^\zeta$ is a trivialization of PF.
This theory actually challenges some of the standard terminology about anomalies. The line bundle PF is clearly trivial, because the renormalized partition function (−1) ζ provides a trivialization. However, because this trivialization is provided by the path integral itself, rather than by more local or more elementary considerations, it is not natural to call the theory anomaly-free. When we say that a theory is anomaly-free, we usually mean that its path integral can be defined as a number, rather than as a section of a line bundle; that is not the case here.
In our problem, PF cannot be trivialized by local considerations. Rather, local considerations will give an isomorphism

$$\mathrm{PF}\cong\widehat\otimes_{i=1}^R\,\varepsilon_i,\qquad(3.23)$$

where the product is over all boundary components with Ramond spin structure. This claim is consistent with the claim that PF is trivial, because we have shown in eqn. (3.22) that the $\mathbb Z_2$-graded tensor product of the $\varepsilon_i$ is canonically trivial. To explain what we mean in saying that (3.23) can be established by local considerations, first set

$$V=\bigoplus_{i=1}^R\varepsilon_i.\qquad(3.24)$$

Then the statement (3.23) is equivalent to

$$\mathrm{PF}\cong\det V,\qquad(3.25)$$

where for a vector space V, det V is its top exterior power. (Note that exchanging two summands $\varepsilon_i$ and $\varepsilon_j$ in V acts as −1 on det V, and likewise acts as −1 on the $\mathbb Z_2$-graded tensor product in (3.23).) We will use the following fact about Pfaffian line bundles. Consider a family of real Dirac operators parametrized by some space W (in our case, W represents the choice of metric on Σ). As long as the space of zero-modes of the Dirac operator has a fixed dimension, it furnishes the fiber of a vector bundle V → W. The Pfaffian line bundle PF → W is then det V, the top exterior power of V.
More generally, instead of considering zero-modes, we can consider any positive number a that (in a given portion of W) is not an eigenvalue of $iD_m$, and let V be the space spanned by eigenvectors of the Dirac operator with eigenvalue less than a in absolute value. One still has an isomorphism $\mathrm{PF}\cong\det V$.
Furthermore, the Pfaffian line bundle PF is independent of fermion masses. This means that to compute PF in our problem, instead of considering the case that the masses are opposite and the signs in the boundary conditions are also opposite, we can take the masses to be the same while the boundary conditions remain opposite.
In this situation, one of the fields $\psi_1,\psi_2$ has positive mη and one has negative mη. So although the interpretation is different, we are back in the situation considered in section 3.6.2: one fermion has a mass gap m that persists even along the boundary, and the other has a single low-lying mode along each Ramond boundary component. The space of low-lying fermion modes is thus

$$V=\bigoplus_{i=1}^R\varepsilon_i,$$

leading to the isomorphism (3.25) and hence to (3.23).

Eqn. (3.23) will suffice for our purposes, but it is perhaps worth pointing out that it has the following generalization, which is analogous to Theorem B in [54]. Instead of flipping the boundary condition simultaneously along all boundary components of Σ, it makes sense to flip the boundary condition along one boundary component at a time. Let S be a particular boundary component of Σ and let PF and PF′ be the Pfaffian line bundles before and after flipping the boundary condition of one fermion along S. If the spin structure along S is of NS type, then

$$\mathrm{PF}'\cong\mathrm{PF},\qquad(3.26)$$

that is, changing the boundary condition has no effect. But if it is of Ramond type, then

$$\mathrm{PF}'\cong\mathrm{PF}\,\widehat\otimes\,\varepsilon,\qquad(3.27)$$

where ε is the space of fermion zero-modes along S. Repeated application of these rules, starting with the fact that PF is trivial if $\psi_1$ and $\psi_2$ have the same sign of η, leads to eqn. (3.23) for the case that they have opposite signs of η.
To justify the statements (3.26) and (3.27), we use the fact that by the excision property of index theory, the change in the Pfaffian line when we flip the boundary condition along S depends only on the geometry along S and not on the rest of Σ. Thus we can embed S in any convenient Σ of our choice. It is convenient to take Σ to be the annulus S × I, where I = [0, 1] is a unit interval, and we consider S to be embedded in S × I as the left boundary S × {0}. We want to compute the effect of flipping the boundary condition at S × {0}, keeping it fixed at S × {1}. We can take the fermion mass to be 0, so the Dirac operator becomes conformally invariant and we can take the metric on the annulus to be flat. A fermion zero-mode is then simply a constant mode that satisfies the boundary conditions. For the case of an NS spin structure, the fermions are antiperiodic in the S direction and so have no zero-modes. Thus the space of zero-modes is V = 0, so that $\det V=\mathbb R$. This justifies (3.26) in the NS case. In the R case, flipping the boundary condition at one end adds or removes a zero-mode (depending on the boundary condition at the other end). The relevant space of zero-modes is $V\cong\varepsilon$, so that $\det V\cong\varepsilon$, leading to (3.27).
Anomaly Cancellation
We are finally ready to explain how the anomaly that we described in section 3.2 has been canceled in [3][4][5][6]. We consider first the case that all boundaries of Σ are of Ramond type, and to start with, we omit boundary punctures. We denote the circumference of the i th boundary as $b_i$. We recall that the reason for the anomaly is that there is no natural sign of the differential form $\Omega=db_1\,db_2\cdots db_R$ (eqn. (3.4)). However, after coupling to theory T, what needs to have a natural sign is the product of this with $(-1)^\zeta$, the path integral of theory T:

$$\widehat\Omega=(-1)^\zeta\,db_1\,db_2\cdots db_R.\qquad(3.28)$$

We recall, in addition, that $(-1)^\zeta$ takes values in $\widehat\otimes_i\,\varepsilon_i$, where $\varepsilon_i$ is a 1-dimensional vector space of zero-modes along the i th Ramond boundary.
Here $\widehat\otimes_i$ is a $\mathbb Z_2$-graded tensor product, meaning that $\widehat\otimes_i\,\varepsilon_i$ changes sign if any two of the boundary components are exchanged. But the original anomaly was that $db_1\,db_2\cdots db_R$ likewise changes sign if any two boundary components are exchanged. The upshot then is that the product $\widehat\Omega$ does not change sign under permutations of boundary components. It naturally takes values in the ordinary tensor product of the $\varepsilon_i$:

$$\widehat\Omega\in\bigotimes_{i=1}^R\varepsilon_i.\qquad(3.29)$$

What have we gained? The anomaly has not disappeared, but it has become local: it has turned into an ordinary tensor product of factors associated to individual boundary components; because it is an ordinary tensor product, it can be canceled by a local choice made independently on each boundary component.
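For two Ramond boundaries the bookkeeping is as follows (our own check of the signs):

% Exchange boundaries 1 and 2. The orientation form flips sign:
db_1\,db_2\;\longrightarrow\;db_2\,db_1=-\,db_1\,db_2 ,
% while the graded tensor product in which (-1)^zeta takes values
% also flips sign:
\varepsilon_1\,\widehat\otimes\,\varepsilon_2
\;\longrightarrow\;\varepsilon_2\,\widehat\otimes\,\varepsilon_1
\cong-\,\varepsilon_1\,\widehat\otimes\,\varepsilon_2 .
% The two signs cancel in hat-Omega = (-1)^zeta db_1 db_2, which is
% therefore permutation-invariant, valued in the ordinary product
% epsilon_1 (x) epsilon_2, as in eqn. (3.29).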
The last step in canceling the anomaly is to say that a boundary of Σ is not just a "bare" boundary: it comes with additional structure. Let S i be the i th boundary component of Σ, and let E i be its spin structure. In the theory developed in [3][4][5][6] (but for the moment still ignoring boundary punctures) each S i is endowed with a trivialization of E i , up to homotopy. For the moment we consider Ramond boundaries only. Since E i is a real line bundle, and is trivial on a Ramond boundary, it has two homotopy classes of trivialization over each Ramond boundary. In addition to summing over spin structures on Σ and integrating over its moduli, one is supposed to sum over (homotopy classes of) trivializations of E i for each Ramond boundary S i .
A fermion zero-mode on S i is a "constant" mode that is everywhere non-vanishing, so the choice of such a zero-mode gives a trivialization of E i . This means that, still in the absence of boundary punctures, trivializations of E i correspond to choices of the sign of the zero-mode on S i . Hence once we trivialize all the E i , the right hand side of (3.29) is trivialized and Ω acquires a well-defined sign.
Thus once theory T is included and the boundaries are equipped with trivializations of their spin bundles, the problem with the orientation of the moduli space is solved. However, without some further ingredients, all correlation functions would vanish. Indeed, summing over the signs of the trivializations of the $\varepsilon_i$ will imply summing over the sign of $\widehat\Omega$.
Moreover, what we have said does not make sense for boundaries with NS spin structure, since their spin bundles cannot be globally trivialized.
The additional ingredient that has to be considered is a boundary puncture. One postulates that locally, away from punctures, E i is trivialized, but that this trivialization changes sign in crossing a boundary puncture.
With this rule, it is possible to incorporate NS as well as Ramond boundaries. A simple example of a boundary component with NS spin structure is the boundary of a disc (fig. 5). Its spin structure is of NS or antiperiodic type, and cannot be trivialized globally. It can be trivialized on the complement of one point, but then the trivialization changes sign in crossing that point. In the theory of [3][4][5][6], that point would be interpreted as a boundary puncture. So an NS boundary with one boundary puncture is possible in the theory, but an NS boundary with no boundary punctures is not. More generally, the number of punctures on a given NS boundary can be any positive odd number, since the spin structure of an NS boundary can have a trivialization that jumps in sign any odd number of times in going around the boundary circle.
As an example, a disc with n boundary punctures and m bulk ones has a moduli space of dimension 2m + n − 3. The fact that n is odd means that this number is even. This is actually a necessary condition for some of the correlation functions $\int_{\mathcal M}\prod_i\psi_i^{d_i}$ (eqn. (3.1)) to be nonzero, since the cohomology classes $\psi_i$ are all of even degree.
The spin structure of a Ramond puncture is globally trivial, so it is possible to have a Ramond boundary with no boundary punctures. Of course, this is the case we started with. More generally, a Ramond boundary can have any even number of punctures.
On any given boundary component of either NS or Ramond type, there are two allowed classes of piecewise trivialization of the spin structure. One can pick an arbitrary trivialization at a given starting point (not one of the punctures), and then the extension of this over the rest of the circle is uniquely determined by the condition that the trivialization jumps in sign whenever a boundary puncture is crossed. This description of boundaries and their punctures may seem bizarre at first, but we will see in section 3.8 that it is not too difficult to give it a plausible physical interpretation. But first, let us ask whether incorporating boundary punctures has reintroduced any problem with the orientation of moduli space. We will deal with this question by describing a consistent recipe [3][4][5][6] for dealing with the sign questions. We expect that this recipe could be deduced from the framework of section 3.8, but we will not show this.

[Figure caption fragment: ...bundle that is inevitably of NS type. This real line bundle is not trivial globally over S, but - since the number of boundary punctures is odd - it can be trivialized on the complement of the boundary punctures in such a way that the trivialization changes sign whenever one crosses a boundary puncture.]

Consider first a boundary component S of NS type. It has a circumference b and it has an odd number n of boundary punctures that have a natural cyclic order. Let us pick an arbitrary starting point p ∈ S and relative to this label the punctures in ascending order by angles $\alpha_1<\alpha_2<\cdots<\alpha_n$. So b and $\alpha_1,\dots,\alpha_n$ are the moduli that are associated to S. To orient this parameter space, we can use the differential form

$$\Upsilon=db\,d\alpha_1\,d\alpha_2\cdots d\alpha_n.\qquad(3.30)$$

We note that Υ has a natural sign: because the number of α's is odd, moving a dα from the end of the chain to the beginning does not affect the sign of Υ. Also, since Υ is of even degree, it commutes with similar factors associated to other boundary components. Therefore, an NS boundary component raises no problem in orienting the moduli space.

Now let S have Ramond spin structure. In this case, n is even. This has two consequences. First, we get a sign change if we move a dα from the end of the chain to the beginning. However, just as in the case n = 0 that we started with, the sign of $(-1)^\zeta$ depends on how one trivializes the spin structure of a Ramond boundary. A consistent recipe is to define the sign of $(-1)^\zeta$ using the trivialization that is in effect just to the right of the starting point p ∈ S relative to which we measured the α's. Then moving one of the boundary punctures from the end of the chain to the beginning will reverse the sign of Υ while also reversing the sign of $(-1)^\zeta$. Also, because n is even, Υ is of odd degree in the case of a Ramond boundary. Therefore the Υ factors associated to different Ramond boundaries anticommute with each other. Just as we discussed for the case n = 0, this compensates for the fact that $(-1)^\zeta$ is odd under exchanging any two Ramond boundaries.
The ζ-Instanton Equation And Compactness
In the present section, we will attempt to interpret the possibly strange-sounding picture just described in terms of the physics of branes.
For this, it will be helpful to use the second realization of theory T that was presented in section 3.4. This was based on topologically twisting a two-dimensional theory with (2, 2) supersymmetry and a complex chiral superfield Φ. The bottom component of Φ is a complex field φ. The theory also has a holomorphic superpotential, which in our application is $W(\Phi)=\frac{im}{2}\Phi^2$, but we will write some formulas for a more general W(Φ).
The condition for a configuration of the φ field to be supersymmetric is the ζ-instanton equation that one gets when one topologically twists the theory, using the R-symmetry that exists for quasi-homogeneous W. For example, in the case of a quadratic W, after topological twisting, φ has to be interpreted as a section of a chiral spin bundle L → Σ, a square root of the canonical bundle K → Σ. In [49], a more general case $W\sim\Phi^r$ was considered, and then in the topologically twisted version of the theory, φ is a section of an r th root of K (this r th root may have singularities at specified points in Σ where "twist fields" are inserted).
Certain important properties hold whenever the ζ-instanton equation can be defined, whether in a topologically-twisted version or simply in a naive version in which φ is a complex field. In particular, if Σ has no boundary, then the ζ-instanton equation has only "trivial" solutions. This is proved in a standard way: take the absolute value squared of the equation, integrate over Σ, and then integrate by parts, to show that any solution satisfies

$$\int_\Sigma d^2x\left(4\left|\frac{\partial\phi}{\partial z}\right|^2+\left|\frac{\partial W}{\partial\phi}\right|^2\right)=2\int_\Sigma d^2x\left(\partial_zW+\partial_{\bar z}\overline W\right).\qquad(3.32)$$

If Σ has no boundary, we can drop the total derivatives $\partial_zW$ and $\partial_{\bar z}\overline W$, and we learn that on a closed surface Σ, any solution has dφ = 0 and ∂W/∂φ = 0; in other words, φ must be constant and this constant must be a critical point of W. For a large class of W's, this implies that, on a surface Σ without boundary, the space of solutions of the ζ-instanton equation is compact (and in fact "trivial"). This compactness is an important ingredient in the well-definedness of the twisted topological field theory constructions related to the ζ-instanton equation.
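The identity arises from expanding a perfect square (a sketch in the conventions of the reconstructed eqn. (3.14)):

% For a solution of the zeta-instanton equation, 2 d(phi)/dz = conj(W'), so
0=\left|\,2\partial_z\phi-\overline{W'(\phi)}\,\right|^2
=4\,|\partial_z\phi|^2+|W'(\phi)|^2
-4\,\mathrm{Re}\!\left[\partial_z\phi\,W'(\phi)\right],
% and the cross term is a total derivative by the chain rule:
4\,\mathrm{Re}\!\left[\partial_z\phi\,W'(\phi)\right]
=4\,\mathrm{Re}\,\partial_zW(\phi)
=2\left(\partial_zW+\partial_{\bar z}\overline W\right).
% Integrating over a closed surface kills the right-hand side, forcing
% d(phi) = 0 and W'(phi) = 0 pointwise.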
Boundary Condition In The ζ-Instanton Equation
If Σ has a boundary, then we have to pick a boundary condition on the ζ-instanton equation. Let us first ignore the twisting and treat φ as an ordinary complex scalar field. If we also set W to 0, the equation for φ becomes the Cauchy-Riemann equation saying that φ is holomorphic. The topological σ-model associated to counting solutions of this equation is then an ordinary A-model. Though the topological field theory associated to theory T is not an ordinary A-model - because of the superpotential and because φ is twisted to have spin 1/2 - it will be useful to first discuss this more familiar case.
A boundary condition for the Cauchy-Riemann equations that is sensible (elliptic) at least locally can be obtained by picking an arbitrary curve $\ell\subset\mathbb C$ and asking that the boundary values of φ should lie in ℓ. Adding a superpotential to get the ζ-instanton equation does not affect this statement, which only depends on the "leading part" of the equation (the terms with the maximum number of derivatives). Here we may loosely call ℓ a brane, although to be more precise, it is the support of a brane. As we will discuss later, there can be more than one brane with support ℓ. More generally, as is usual in brane physics, we may impose such a boundary condition in a piecewise way. For this, we pick several branes $\ell_\alpha$, we decompose the boundary ∂Σ as a union of intervals $I_\alpha$ that meet only at their endpoints, and for each α, we require that $I_\alpha$ should map to $\ell_\alpha$. (A common endpoint of $I_\alpha$ and $I_\beta$ must then map to an intersection point of $\ell_\alpha$ and $\ell_\beta$.) What sort of ℓ should we use? At first sight, it may seem that the A-model is most obviously well-defined if ℓ is compact. Actually, a compact closed curve in $\mathbb C$ is a boundary, and with such a choice of ℓ, the A-model with target $\mathbb C$ is actually anomalous, as explained from a physical point of view in [53], section 13.5. This anomaly is an ultraviolet effect that is related to a boundary contribution to the fermion number anomaly on a Riemann surface. More intuitively, if ℓ is a closed curve in the plane, then it can be shrunk to a point and is not interesting topologically. Thus we should consider noncompact ℓ, for example a straight line in $\mathbb C$.
With such a choice, we avoid the ultraviolet issues mentioned in the last paragraph, but the noncompactness of ℓ raises potential infrared problems. The space of solutions of the Cauchy-Riemann equation $\bar\partial\phi=0$, with boundary values in the noncompact space ℓ, is in general not compact, and this poses difficulties in defining the A-model with target $\mathbb C$.
There are a number of approaches to resolving these difficulties, depending on what one wants. One approach leads mathematically to the "wrapped Fukaya category." For our purposes, we want to use the superpotential W to prevent φ from becoming large. This corresponds mathematically to the Fukaya-Seidel category [55]; for a physical interpretation, see [53], especially sections 11.2.6 and 11.3. To see the idea, let us return to the identity (3.32), but now allow for the possibility that Σ has a boundary. For instance, we can take Σ to be the upper half z-plane. Setting $z=x^1+ix^2$, the identity becomes

$$\int_\Sigma d^2x\left(4\left|\frac{\partial\phi}{\partial z}\right|^2+\left|\frac{\partial W}{\partial\phi}\right|^2\right)=-2\int_{\partial\Sigma}dx^1\,\mathrm{Im}\,W.\qquad(3.33)$$

Now it becomes clear what sort of brane we should consider. We should choose ℓ so that Im W → ∞ at ∞ along ℓ. Then the boundary term in the identity will ensure that φ cannot become large along ∂Σ, and given this, the bulk terms in the identity ensure that φ cannot become large anywhere.
That is an essential technical step toward being able to define the A-model.
Let us implement this in our case that $W(\phi)=\frac{i}{2}m\phi^2$, with m > 0 and $\phi=\phi_1+i\phi_2$. We have $\mathrm{Im}\,W(\phi)=\frac{m}{2}(\phi_1^2-\phi_2^2)$. Thus near infinity in the complex φ plane, there are two regions with Im W → +∞: this happens near the positive real φ axis and also near the negative axis.
A noncompact 1-manifold is topologically a copy of the real line, with two ends. To ensure that Im W → ∞ at ∞ along ℓ, we should pick ℓ so that each of its ends is in one of the good regions near the positive or negative φ axis. Beyond this, the precise choice of ℓ does not matter, because of the fact that the A-model is invariant under Hamiltonian symplectomorphisms of $\mathbb C$. All that really matters is whether φ tends toward +∞ or −∞ at each of the two ends of ℓ. Moreover, if φ tends to infinity in the same direction at each end of ℓ, it is "topologically trivial" in the sense that it can be pulled off to infinity in the φ-plane while preserving the fact that Im W → ∞ at ∞ along ℓ. So the only interesting case is that φ tends to −∞ at one end of ℓ and to +∞ at the other. Further details do not matter. Therefore, we may as well simply take ℓ to be the real φ axis. In other words, the boundary condition on φ is that it is real along ∂Σ, or in other words if $\phi=\phi_1+i\phi_2$, then $\phi_2=0$ at $x^2=0$.
This has an interesting interpretation in the topologically twisted model that we are really interested in. We recall that in this model, φ is a section of the chiral spin bundle L of Σ. The fiber of L at a point in Σ is a complex vector space of dimension 1. This is actually the same as a real vector space of rank 2. Thus, we can alternatively view the complex line bundle L → Σ as a rank 2 real vector bundle S → Σ. The resulting S is simply the real, nonchiral spin bundle of Σ. Thus, it is possible to view the real and imaginary parts of φ as a two-component real spinor field over Σ. In fact, we have already made much the same statement in eqn. (3.15), where we asserted that the ζ-instanton equation for φ is equivalent to the massive Dirac equation $D_m\psi=0$.

Now recall that in section 3.5, we defined a rank 1 real spin bundle E → ∂Σ by saying that a section of E is a section φ of the rank 2 spin bundle S of Σ (restricted to ∂Σ) that satisfies $\gamma_\parallel\phi=\phi$. (The opposite sign in this relation, $\gamma_\parallel\phi=-\phi$, defines another equivalent real spin bundle of ∂Σ.) For Σ the upper half plane, the tangential gamma matrix is $\gamma_\parallel=\gamma_1$, and the representation that we have used of the gamma matrices (eqn. (3.7)) is such that $\gamma_1\phi=\phi$ is equivalent to $\phi_2=0$.
Thus, we can state the boundary condition that we have found in a way that makes sense in general for the twisted topological field theory under study. In bulk, that is away from ∂Σ, φ is a section of the chiral spin bundle L → Σ. The boundary condition satisfied by φ is that along ∂Σ, it is a section of the real spin bundle E → ∂Σ. The merit of this boundary condition is the same as it is in the ordinary A-model, which we used as motivation: it ensures that the surface terms in eqn. (3.32) vanish, and therefore that the only solution of the ζ-instanton equation on a Riemann surface Σ with boundary is φ = 0.
We can gain some more insight by comparison to the ordinary A-model. To construct a brane with support ℓ, we need to pick an orientation of ℓ. There are two possible orientations, so there are two possible branes, which we will call B′ and B″. Neither one is distinguished relative to the other.
In the ordinary A-model, we could at our discretion introduce B′ or B″ or both. The twisted model that is related to theory T, in which φ is a chiral spinor rather than a complex-valued field, is different in this respect. The reason it is different is that B′ and B″ represent choices of orientation of the real spin bundle E → ∂Σ, but in general this real spin bundle is unorientable. Thus, if one goes all the way around a component of ∂Σ with NS spin structure, then B′ and B″ are exchanged. Accordingly, in the model relevant to theory T, if we introduce one of these branes, we have to also introduce the other.
Once we introduce branes B′ and B″, we are very close to the picture developed in the mathematical literature [3][4][5][6]. The boundary of Σ is decomposed as a union of intervals $I_\alpha$ that have only endpoints in common, and each interval is labeled by B′ or B″. This labeling here means simply a chosen orientation of E → ∂Σ. Since E is a real vector bundle of rank 1, a choice of orientation of E is (up to homotopy) the same as a trivialization of E, the language used in section 3.7.
There is really just one more puzzle. In the theory developed in [3][4][5][6], whenever one crosses a boundary puncture, the orientation of E jumps. Why is this true?
A quick answer is the following. In general, for any brane B, (B, B) strings in the A-model correspond to local operators that can be inserted on the boundary of the string in a region of the boundary that is labeled by brane B. Our model is only locally equivalent to an A-model, but this is good enough to discuss local operators. In the case of the branes B′ and B″, as ℓ is contractible, the only interesting local (B′, B′) or (B″, B″) operator is the identity operator. However, in topological string theory, what we add to the action along the boundary of the string worldsheet is really a descendant of a given local operator. In the case of a boundary local operator O, what we want is the 1-form operator V that can be deduced from O via the descent procedure. If O is the identity operator, then V = 0. (Recall that V is characterized by {Q, V} = dO, where Q is the BRST operator of the theory; if O is the identity operator, then dO = 0 so V = 0.) Therefore we cannot get anything interesting from (B′, B′) or (B″, B″) strings.
The analogy with the standard A-model indicates that the space of (B′, B″) or (B″, B′) strings is also 1-dimensional (see sections 3.8.3 and 3.8.4), but now a (B′, B″) or (B″, B′) string corresponds to a local operator that causes a jumping in the brane that labels the boundary, and this is certainly not the identity operator. Thus the gravitational descendant will not vanish.
Another crucial detail concerns the statistics of the operators. The identity operator is bosonic, so its 1-form descendant, if not zero, would be fermionic. A fermionic boundary puncture operator is not what we need for the theory of [3][4][5][6]. There is also an important detail on which the analogy to the standard A-model is a little misleading, because it is only valid locally. In an A-model with branes B and B′, the (B, B′) and (B′, B) local operators would be independent operators, and we would potentially include them (or their 1-form descendants) with independent coupling parameters. In the present context, there is not really any way to say which is which of B and B′; one can only say that they differ by the orientation of the real spin bundle.22 So there is really only one type of boundary puncture, which one can think of as (B, B′) or (B′, B), and correspondingly there is only one boundary coupling.
It follows, incidentally, that even if the identity (B, B) or (B′, B′) operator had a nontrivial 1-form gravitational descendant, it could not play a role. We would have to identify these two operators, so we would have a single such operator with a fermionic coupling constant υ. As the correlation functions of topological gravity are bosonic, they could not depend on a single fermionic variable υ.
Orientations and Statistics
Consider a brane B in an arbitrary A-model with some target space X. The support of B is a Lagrangian submanifold L ⊂ X. Take B to have trivial Chan-Paton bundle.23 If we consider N copies of the brane B, we get an effective U(N) gauge theory along L. We will give a simple example to explain why this must be the case. For a familiar setting, take X to be a Calabi-Yau three-fold. The effective gauge theory for N copies of a brane is actually a U(N) gauge theory. Let us denote the gauge field as A. The theory also has a 1-form field φ in the adjoint representation, which describes fluctuations in the position of the brane. The effective action is a multiple of the Chern-Simons three-form for the complex connection $\mathcal{A} = A + i\phi$:

$$I \;\propto\; \frac{1}{g_{st}}\int_L \mathrm{CS}(\mathcal{A}). \qquad (3.34)$$

Footnote 22: For example, B and B′ are exchanged in going all the way around a circle with NS spin structure. Perhaps more fundamentally, orienting the real spin bundle of one boundary of Σ does not in general tell us how to choose such an orientation for other boundaries. So we can say locally how B and B′ differ, but there is no global notion of which is which.

Footnote 23: For example, L might be topologically trivial (as it is in our application). We will ignore various subtleties related to the K-theory interpretation of branes; these are not relevant for our purposes.
Here $\mathrm{CS}(\mathcal{A}) = \mathrm{Tr}\left(\mathcal{A}\wedge d\mathcal{A} + \tfrac{2}{3}\mathcal{A}\wedge\mathcal{A}\wedge\mathcal{A}\right)$ is the Chern-Simons three-form and g_st is the string coupling constant. There is no problem, given L purely as a bare three-manifold, in defining the three-form CS(𝒜). But to integrate a three-form over L requires an orientation of L. There is no natural choice, but a choice is part of the definition of a brane with support L. That is one way to understand the fact that in order to define a brane B or B′ with support L, one needs to endow L with an orientation; and there are in fact two A-branes B and B′ with the same support L that differ only by which orientation is chosen. The sign of the effective action I is opposite for B′ relative to B. Accordingly, if we consider N copies of B together with M copies of B′, the effective theory is based on the supergroup U(N|M) rather than U(N+M), with the action defined using the supertrace. We recall that the supertrace of an N|M-dimensional matrix is defined, in an obvious notation, as

$$\mathrm{Str}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \mathrm{Tr}\, a - \mathrm{Tr}\, d,$$

where the relative minus sign is just what we need so that the supertrace of a Chern-Simons three-form of U(N|M) leads to opposite signs for the U(N) and U(M) parts of the action.
A consequence of going from U(N+M) to U(N|M) is that the statistics of the off-diagonal blocks V and W is reversed. At the end of section 3.8.2, that is what we needed so that the (B, B′) strings are fermionic and have bosonic 1-form descendants.
The situation just described does not usually arise in physical string theory, because there one usually is interested in branes that satisfy a stability condition involving the phase of the holomorphic volume form of the Calabi-Yau manifold, restricted to the brane. For a given Lagrangian submanifold, this condition is satisfied at most for one orientation.
Quantizing the String
In the standard A-model, the space of local operators of type (B 1 , B 2 ), for any branes B 1 and B 2 that may or may not be the same, is the same as the space of physical states found by quantization on an infinite strip with boundary conditions set by B 1 at one end and by B 2 at the other end. Here we will explain the analog of this for the model under consideration here, which is only locally equivalent to a standard A-model.
We will work on the strip 0 ≤ x 2 ≤ a in the x 1 x 2 plane, for some a, and will treat x 1 as Euclidean "time." In eqn. (3.33), there is now a boundary contribution at x 2 = a, as well as the one at x 2 = 0 that was discussed previously. The two contributions have opposite signs, and to achieve compactness the boundary condition at x 2 = a should ensure that Im W → −∞ at infinity.
Thus we take the boundary condition at x_2 = a to be φ_1 = 0, while at x_2 = 0 it is φ_2 = 0, as before.24 To find the space of physical states with these boundary conditions, the first step is to find the space of classical ground states. With x_1 viewed as "time," these are the x_1-independent solutions of the ζ-instanton equation that satisfy the boundary conditions at the two ends. For solutions that depend only on x_2, the ζ-instanton equation reduces to $\frac{d\phi}{dx_2} + m\phi = 0$. The only solution of this linear first-order equation with φ_2 = 0 at x_2 = 0 and φ_1 = 0 at x_2 = a is φ = 0. Moreover, this solution is nondegenerate, meaning that when we linearize around it, the linearized equation has trivial kernel. (In the present case, this statement is trivial since the ζ-instanton equation is already linear.) A nondegenerate classical solution corresponds upon quantization to a single state.
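As a quick check, the reduced equation can be solved in closed form. The following is a minimal computation, reading the equation componentwise (an assumption about the index structure, made only for illustration):

$$\frac{d\phi}{dx_2} + m\phi = 0 \;\Longrightarrow\; \phi(x_2) = \phi(0)\, e^{-m x_2}.$$

The condition φ_2(0) = 0 kills one component immediately, while φ_1(a) = φ_1(0) e^{−ma} = 0 forces φ_1(0) = 0; hence φ ≡ 0 for any value of m, as claimed.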
If there were multiple classical vacua, we would have to consider possible tunneling effects to identify the quantum states that really are supersymmetric ground states. With only one classical vacuum, this step is trivial. So in our problem, there is just one supersymmetric ground state.
One might be slightly puzzled that we seem to have used different boundary conditions and thus different branes at x_2 = a relative to x_2 = 0. However, if we conformally map the strip to the upper half plane x_2 ≥ 0, mapping x_1 = −∞ in the strip to the origin x_1 = x_2 = 0 in the boundary of the upper half plane, then this difference disappears. What we have done, on both boundaries, is to require that φ should restrict on ∂Σ to a section of the real spin bundle E → ∂Σ.
The space of supersymmetric ground states that we just obtained corresponds to the space of local operators of type (B, B), (B, B′), or (B′, B′) that can be inserted at x_1 = x_2 = 0. Since we did not have to orient the spin bundles of the boundaries of the strip in order to determine that there is a 1-dimensional space of physical states on the strip, the spaces of local operators of type (B, B), (B, B′), or (B′, B′) are the same if understood just as vector spaces. But these operators have different statistics, as explained in section 3.8.3.
Boundary Degenerations
So far we have concentrated on questions concerning the orientation of the moduli space. However, as explained in section 3.1, in trying to define topological gravity on Riemann surfaces with boundary, there is a second serious problem, which is that the moduli space of Riemann surfaces with boundary, with its Deligne-Mumford compactification, itself has a boundary. Because of this, intersection numbers such as the correlation functions $\int_{\mathcal M}\prod_i \psi_i^{d_i}$ of topological gravity (eqn. (3.1)) are a priori not well-defined from a topological point of view. We will explain schematically how this difficulty has been overcome, going just far enough to describe the simplest concrete computations. For full explanations, see [3][4][5][6].
First let us give a simple example to illustrate the problem. A disc Σ with n boundary punctures (and no bulk punctures) has a moduli space M of real dimension n − 3. The disc can degenerate in real codimension 1 by forming a narrow neck (fig. 6(a)), which then pinches off (fig. 6(b)) to make a singular Riemann surface Σ that can be obtained by gluing together two discs Σ_1 and Σ_2 (fig. 6(c)). This occurs in real codimension 1, and thus fig. 6(b) describes a component of ∂M, the boundary of M. As a check, let us confirm that the configuration in fig. 6(b) has precisely n − 4 real moduli, so that it is of real codimension 1 in M. Σ_1 and Σ_2 inherit the boundary punctures of Σ, say n_1 for Σ_1 and n_2 for Σ_2 with n_1 + n_2 = n. In addition, Σ_1 and Σ_2 each have one more boundary puncture, p_1 or p_2, where the gluing occurs. So in all, Σ_1 and Σ_2 have respectively n_1 + 1 and n_2 + 1 boundary punctures, and moduli spaces of dimension n_1 − 2 and n_2 − 2. The singular configuration in fig. 6(b) thus has a total of (n_1 − 2) + (n_2 − 2) = n − 4 real moduli, as claimed.

Figure 6: (a) A disc Σ with n boundary punctures that develops a narrow neck. (b) The neck collapses and Σ degenerates to the union of two discs Σ_1 and Σ_2 glued at a point. (c) The picture of part (b) can be recovered by gluing p_1 ∈ Σ_1 to p_2 ∈ Σ_2. The original boundary punctures of Σ are divided in some way between Σ_1 and Σ_2.
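In compact form, the dimension count just performed reads

$$\dim_{\mathbb R}\big(\partial\text{-stratum}\big) = \underbrace{(n_1+1)-3}_{\Sigma_1} + \underbrace{(n_2+1)-3}_{\Sigma_2} = n - 4 = \dim_{\mathbb R}\mathcal M - 1,$$

confirming that the degeneration is a real-codimension-1 wall in M.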
Thus, we have confirmed the assertion that moduli spaces of Riemann surfaces with boundary are themselves manifolds (or orbifolds) with boundary. This presents a problem for defining intersection numbers. Now let us reexamine this assuming that Σ is endowed with a spin bundle S and that the induced real spin bundle E of ∂Σ is piecewise trivialized along ∂Σ, as described in section 3.7. We immediately run into something interesting. If Σ is a disc, the spin bundle E → ∂Σ is always of NS type, and the number n of boundary punctures on a disc will have to be odd. But when Σ degenerates to the union of two branches Σ_1 and Σ_2, with n_1 + 1 punctures on one side and n_2 + 1 on the other side, inevitably either n_1 + 1 or n_2 + 1 is even. But in the theory that we are describing here, a disc is always supposed to have an odd number of boundary punctures. What this means in practice is that either p_1 or p_2 does not really behave as a boundary puncture in the sense of this theory: the piecewise trivializations of the real spin bundles E_1 → ∂Σ_1 and E_2 → ∂Σ_2 jump in crossing one of p_1 and p_2, but not both. This is explained more explicitly shortly. As a result, the cohomology classes ψ_i whose products we want to integrate to get the correlation functions have the property that when restricted to ∂M, they are pullbacks from a quotient space in which either p_1 or p_2 is forgotten. Effectively, then, ∂M behaves as if it is of real codimension 2 and the intersection numbers are well-defined. Now let us explain these assertions in more detail. First we introduce a useful language. In the following, Σ will be a Riemann surface, possibly with boundary. We write K for the complex canonical bundle of Σ and S for its chiral spin bundle. So K is a complex line bundle over Σ, and S is a complex line bundle over Σ with a linear map w : S ⊗ S → K that establishes an isomorphism between S ⊗ S and K.
Along ∂Σ, it is meaningful to say that a one-form is real, and thus K, restricted to ∂Σ, has a real subbundle. Moreover, the Riemann surface Σ is oriented and this induces an orientation of ∂Σ. As a result, it is meaningful to say that a section of K, when restricted to ∂Σ, is real and positive. For example, if Σ is the upper half of the complex z-plane, so that ∂Σ is the real z axis, then the complex 1-form dz is real and positive when restricted to ∂Σ. But if Σ is the lower half of the z-plane, then its boundary is the real z axis now with the opposite orientation, and so in this case, −dz is real and positive along ∂Σ.
This gives a convenient framework in which to describe the real spin bundle E of ∂Σ. We say that a local section ψ of S → Σ is real along ∂Σ if the 1-form w(ψ ⊗ ψ) is real and positive when restricted to ∂Σ. In this case, we say that the restriction of ψ to ∂Σ is a section of E. This serves to define E. For example, if Σ is the upper half of the complex z-plane, then a section ψ of S with the property that w(ψ ⊗ ψ) = dz is real along ∂Σ, and its restriction to ∂Σ provides a section of E. We describe this more informally by writing ψ = √dz. Note that since (−ψ) ⊗ (−ψ) = ψ ⊗ ψ, in this situation we also have w((−ψ) ⊗ (−ψ)) = dz. So just like the square root of a number, a square root of dz is only uniquely determined up to sign. If Σ is the lower half of the complex z-plane, then a section ψ of S that satisfies w(ψ ⊗ ψ) = −dz is real and is a section of E. We describe this informally by writing ψ = ±√(−dz) or ψ = ±i√dz.
A trivialization of the real spin bundle E → ∂Σ is given by any nonzero section of E. For example, if Σ is the upper half z-plane, then E → ∂Σ can be trivialized by ψ = ±√dz, and if Σ is the lower half z-plane, then E → ∂Σ can be trivialized by ψ = ±i√dz. With this in place, we can return to our problem. In fig. 7, we show the same open-string degeneration as in fig. 6, but now we zoom in on the important region where the degeneration occurs and do not specify what the Riemann surface Σ looks like outside this region. The open-string degeneration is drawn in the figure ignoring spin structures and their trivializations. In figure 8, we repeat fig. 7(a), but now providing information about the trivializations of spin structures.
First of all, as there are no boundary punctures in this picture, 25 the real spin bundle of ∂Σ is supposed to be trivialized everywhere in the picture. The trivializations are easy to describe in the regions -the upper and lower left and right in the figure -in which ∂Σ is parallel to the real z axis. We will use the fact that as Σ is a region in the complex z plane, the complex 1-form dz is defined everywhere on Σ; similarly it is possible to make a global choice of sign of ψ = √ dz, though such a ψ will not be everywhere real on ∂Σ. The overall sign of what we mean by √ dz will not be important in what follows.
We begin on the upper right of the picture with E trivialized by ψ = √dz. (It would add nothing essentially new to use −√dz in the starting point, as the overall sign of √dz is anyway arbitrary.) Now on the upper left of the picture, we pick a trivialization ±√dz. This sign is meaningful, given that we used the trivialization +√dz on the upper right. Now we continue through the narrow neck into the lower part of the picture. As we do this, the boundary of ∂Σ bends counterclockwise by an angle π on the right of the figure and by an angle −π on the left. As a result, a section of S → ∂Σ has to acquire a phase in order to remain real. The trivialization of E that is defined as √dz on the upper right will evolve to i√dz on the lower right, and the trivialization of E that is defined as ±√dz on the upper left will evolve to ∓i√dz on the lower left.
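The factors of i quoted here follow from elementary bookkeeping with square roots of 1-forms (a sketch in one convention; the precise signs depend on orientation conventions). Along a boundary segment whose tangent direction makes an angle θ with the real axis, the real positive 1-form is proportional to e^{−iθ} dz, so a section ψ with ψ ⊗ ψ real and positive must behave as

$$\psi \;\sim\; e^{-i\theta/2}\,\sqrt{dz}.$$

A net rotation of the boundary tangent by ∓π therefore multiplies ψ = √dz by e^{±iπ/2} = ±i, which reproduces the evolution from √dz to i√dz (and from ±√dz to ∓i√dz) described above.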
We see that with one choice of sign on the left part of the picture, the trivializations agree on the upper left and upper right of the figure but not on the lower left and lower right; with the other choice of sign, matters are reversed. So when Σ degenerates to the union of two branches Σ 1 and Σ 2 that are to be joined by gluing a point p 1 ∈ ∂Σ 1 to a point p 2 ∈ ∂Σ 2 , as in fig. 7(c), the trivialization of the spin structure of the boundary jumps in crossing p 1 but not in crossing p 2 or in crossing p 2 but not in crossing p 1 . In the construction studied in [3][4][5][6], precisely one of p 1 and p 2 plays no role and can be forgotten. This is the basic reason that the boundary of M behaves as if it is of real codimension two and the correlation functions are well-defined. We provide more detail momentarily.
Computations of Disc Amplitudes
Several concrete methods to compute in this framework have been deduced [3][4][5][6]. Here we will just describe the simplest computations of disc amplitudes.
First let us discuss the proper normalization of a disc amplitude. We write g_st for the string coupling constant in topological gravity of closed Riemann surfaces with its usual normalization, and g̃_st for the string coupling constant in the present theory.
In the standard approach, genus g amplitudes are weighted by a factor of $g_{st}^{2g-2}$. With theory T included, this is replaced by $\widetilde g_{st}^{\,2g-2}\, 2^{g-1}$, where $2^{g-1}$ is the partition function of theory T (eqn. (3.5)). The relation between the two is thus $\widetilde g_{st} = g_{st}/\sqrt{2}$. A disc has Euler characteristic 1, so a disc amplitude is weighted by $1/\widetilde g_{st} = \sqrt{2}/g_{st}$. The partition function of theory T on a disc is 1/2 (as a disc has only one spin structure). However, for any given set of boundary punctures, there are two possible piecewise trivializations of the spin structure of the boundary, with the requisite jumps across boundary punctures. These two choices will contribute equally in the simple computations we will discuss, so we can take them into account by including a factor of 2.
The factors discussed so far combine to $2 \cdot \frac{1}{2} \cdot \sqrt{2}/g_{st} = \sqrt{2}/g_{st}$. In addition, in [3] it was found convenient to include a factor of $1/\sqrt{2}$ for every boundary puncture. Thus, let Σ be a disc with m boundary punctures and n bulk punctures labeled by integers d_1, ..., d_n; let M be the compactified moduli space of conformal structures on Σ. Then refining eqn. (3.1), the general disc amplitude is

$$\left\langle \sigma^m\, \tau_{d_1}\cdots\tau_{d_n}\right\rangle_D = \frac{\sqrt{2}}{g_{st}}\; 2^{-m/2}\int_{\mathcal M} \psi_1^{d_1}\cdots\psi_n^{d_n}. \qquad (3.37)$$

This formula agrees with eqn. (18) in [3]. We have included factors of g_st in this explanation, because that helps determine the factors of 2 that are needed to ensure that the theory is consistent with the standard normalization in the case that a surface Σ has no boundary. However, in mathematical treatments, g_st is often set to 1, and we will do so in the rest of this section. (No topological information is lost, since a given correlation function receives contributions only from surfaces with a given Euler characteristic, and this determines the power of g_st.) In interpreting eqn. (3.37), we consider the boundary punctures to be inequivalent and labeled, and we sum over all possible cyclic orderings. For example, let us compute ⟨σσσ⟩, which receives a contribution only from a disc with three boundary punctures labeled 1, 2, 3. There are two cyclic orderings; since the moduli space in this case is a point, each contributes $\sqrt{2}\cdot 2^{-3/2} = 1/2$, so that ⟨σσσ⟩ = 1.

Figure 9: (a) A genus-0 degeneration with z_2 and z_3 on opposite components. (b) A degeneration with z_2 and z_3 contained in one component and z_1 in the other.

The simplest method to compute arbitrary disc amplitudes is given by the recursion relations in Theorem 1.5 of [3], and indeed the first of these relations is sufficient. To explain it, first we recall the genus 0 recursion relations of [17]. It is convenient to define

$$\left\langle\left\langle \tau_{d_1}\tau_{d_2}\cdots\tau_{d_s}\right\rangle\right\rangle = \left\langle \tau_{d_1}\tau_{d_2}\cdots\tau_{d_s}\; e^{\sum_n t_n\tau_n}\right\rangle.$$

Thus ⟨⟨τ_{d_1}τ_{d_2}···τ_{d_s}⟩⟩ is an amplitude with specified insertions as shown, with all possible additional insertions weighted by powers of the t_n. We also write ⟨⟨τ_{d_1}τ_{d_2}···τ_{d_s}⟩⟩_0 for the genus 0 contribution to ⟨⟨τ_{d_1}τ_{d_2}···τ_{d_s}⟩⟩. Then one has the genus 0 recursion relation

$$\left\langle\left\langle \tau_{d_1}\tau_{d_2}\tau_{d_3}\right\rangle\right\rangle_0 = \left\langle\left\langle \tau_{d_1-1}\tau_0\right\rangle\right\rangle_0\;\left\langle\left\langle \tau_0\,\tau_{d_2}\tau_{d_3}\right\rangle\right\rangle_0. \qquad (3.41)$$

The proof goes roughly as follows. For a smooth genus 0 surface Σ, we take the complex z-plane plus a point at infinity. We denote the specified punctures as z_1, z_2, z_3. We will construct a convenient section λ of the line bundle L_1 → M whose fiber is the cotangent bundle to Σ at z_1. Let ρ be the 1-form

$$\rho = \left(\frac{1}{z-z_2} - \frac{1}{z-z_3}\right) dz. \qquad (3.42)$$

It has poles at z = z_2, z_3, with residues 1 and −1, and elsewhere is regular and nonzero. These properties characterize ρ uniquely, so ρ does not depend on the coordinates used in writing the formula. Upon setting z = z_1 in ρ, we get a holomorphic section λ of L_1 → M; the divisor D of the zeroes of this section represents c_1(L_1). But λ never vanishes when Σ is smooth, because ρ has no zeroes on the finite z-plane or at z = ∞. If Σ degenerates to two components with z_2 and z_3 on opposite sides (fig. 9(a)), λ is still everywhere nonzero. But if z_2 and z_3 are contained in the same component (fig. 9(b)), then ρ vanishes identically on the other component. Finally, then, λ vanishes precisely if, as in the figure, z_1 is contained in the opposite component from the one containing z_2 and z_3. Moreover, this is a simple zero (because ρ has a simple zero at z_2 = z_3). So in τ_{d_1} = c_1(L_1)^{d_1}, we can replace one factor of c_1(L_1) with a restriction to the divisor D that is depicted in fig. 9(b). After making this substitution, we are left with an insertion of τ_{d_1−1} on one branch and insertions of τ_{d_2} and τ_{d_3} on the other; in addition, a new puncture corresponding to an insertion of τ_0 appears on each branch, where the two branches meet. All this leads to the right hand side of eqn. (3.41).
It is not difficult to see that this recursion relation uniquely determines all genus zero amplitudes, modulo the statement that the only nonzero amplitude with insertions of τ_0 only is $\langle\tau_0^3\rangle_0 = 1$.
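For readers who want to experiment, the genus-zero amplitudes that this recursion determines also admit the well-known closed form $\langle\tau_{d_1}\cdots\tau_{d_n}\rangle_0 = (n-3)!/(d_1!\cdots d_n!)$, nonzero when $\sum_i d_i = n-3$. The sketch below (our own illustration, not taken from the text) checks the string equation $\langle\tau_0\,\tau_{d_1}\cdots\tau_{d_n}\rangle_0 = \sum_j \langle\tau_{d_1}\cdots\tau_{d_j-1}\cdots\tau_{d_n}\rangle_0$ against that formula:

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import factorial

def tau_genus0(ds):
    # <tau_{d_1}...tau_{d_n}>_0 = (n-3)!/prod(d_i!) when sum(d_i) = n-3, else 0
    n = len(ds)
    if n < 3 or sum(ds) != n - 3:
        return Fraction(0)
    r = Fraction(factorial(n - 3))
    for d in ds:
        r /= factorial(d)
    return r

def string_equation_holds(ds):
    # insert one extra tau_0 on the left; lower one index by 1 on the right
    lhs = tau_genus0([0] + ds)
    rhs = sum(tau_genus0(ds[:j] + [ds[j] - 1] + ds[j + 1:])
              for j in range(len(ds)) if ds[j] > 0)
    return lhs == rhs

for n in range(3, 8):
    for ds in combinations_with_replacement(range(n), n):
        assert string_equation_holds(list(ds))
print("string equation verified for all tested genus-zero amplitudes")
```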
The disc recursion relation that we aim to describe can be formulated and proved in almost the same way. Similarly to the previous case, we define double-bracket amplitudes that now also include the boundary coupling,

$$\left\langle\left\langle \tau_{d_1}\cdots\tau_{d_s}\,\sigma^m\right\rangle\right\rangle = \left\langle \tau_{d_1}\cdots\tau_{d_s}\,\sigma^m\; e^{\sum_n t_n\tau_n + v\sigma}\right\rangle,$$

and write ⟨⟨τ_{d_1}τ_{d_2}···τ_{d_s}σ^m⟩⟩_D for the disc contribution. The desired recursion relation is eqn. (3.44). Roughly speaking, we are going to again compute c_1(L_1), for one of the bulk punctures, from the zeroes of a convenient section λ of L_1. However, here, because M has a boundary, we have to discuss how to relate c_1(L_1) to the zeroes of a section.
As discussed in section 3.9, the boundary ∂M of M has a forgetful map in which precisely one of the extra boundary punctures that appears at an open-string degeneration is forgotten. Let us write N for the remaining moduli space when this puncture is forgotten, so that the forgetful map is π : ∂M → N .
Simplifying a little,26 the recipe [3] is that c_1(L_1) can be represented by the zeros of any section s of L_1 that is nonvanishing everywhere along ∂M, and whose restriction to ∂M is a pullback from N. Alternatively, one can still calculate c_1(L_1) using any section s of L_1 that is everywhere nonzero along the boundary, even if its restriction to the boundary is not a pullback. But in this case, c_1(L_1) is represented by a sum of two contributions, one involving in the usual way the zeroes of s, and the second measuring the failure of the restriction of s to be a pullback.

Footnote 26: The general recipe has two further complications. First, in general one is allowed to compute using a multisection rather than a section. This is important because the conditions on a section that we are about to state are difficult to satisfy. Second, the general procedure allows one to define $\prod_{i=1}^n c_1(L_i)^{d_i}$, without defining the individual $c_1(L_i)$, by picking a multisection s of $E = \bigoplus_{i=1}^n L_i^{\oplus d_i}$. This multisection should obey conditions analogous to the ones that we will state momentarily.
Setting z = x + iy, we take a smooth disc D to be the closed upper half-plane y ≥ 0 plus a point at infinity. On the left hand side of eqn. (3.44), we see a distinguished bulk puncture that we place at z_1 = x_1 + iy_1, y_1 > 0, and a distinguished boundary puncture that we place at x_0. In the present case, there is a convenient section λ of L_1 that is everywhere nonzero along the boundary, but whose restriction to the boundary is not a pullback. To construct it, rather as before, we pick a suitable 1-form ρ; this 1-form is regular and nonzero throughout D, except for a pole at the boundary point x_0. Evaluating ρ at z = z_1, we get a section λ of L_1 that is regular and nonzero as long as D is smooth.
At a closed-string degeneration, where D splits up into the union of a two-sphere and a disc ( fig. 10(a)), λ has a simple zero if and only if z 1 is on the two-sphere component. This is responsible for the first term on the right hand side of the recursion relation (3.44). At an open-string degeneration, where D splits up into the union of two discs ( fig. 10(b)), λ remains everywhere nonzero. However, in case the boundary puncture that is supposed to be forgotten is in the same component as z 1 , λ restricted to ∂M is not a pullback from N . The second term on the right hand side of eqn. (3.44) corrects for this failure. See fig. 10 for an explanation of the statements about the behavior of λ at degenerations.
The Loop Equations
Let us now briefly recapitulate the representation of topological gravity in terms of random matrix models. The simplest models are single matrix models of the form

$$Z = \frac{1}{\mathrm{vol}(U(N))}\int d\Phi\; \exp\!\left(-\frac{1}{g_{st}}\,\mathrm{Tr}\,W(\Phi)\right). \qquad (4.1)$$

Here Φ is a hermitian N × N matrix integrated with the Euclidean measure for each matrix element, W(x) is a complex polynomial, say of degree d + 1, and g_st is the string coupling constant. Since we divide by the volume of the "gauge group" U(N), this integral should be considered the zero-dimensional analogue of a gauge theory: we integrate over matrices Φ modulo gauge transformations. In general, if Re W is not bounded below, one needs to complexify the matrix Φ and pick a suitable integration contour in the space of complex matrices to make the integral well-defined. For a formal expansion in powers of g_st, and even for the formal expansion in powers of 1/N that we will make shortly, this is not necessary and we can consider (4.1) as a formal expression.

Figure 10: (a) A disc D splits up into the union of a disc and a sphere (upper half of the drawing). If the bulk puncture z_1 is contained in the sphere, then the section λ vanishes. To see this, take the closed oriented double cover, obtained here by adding additional components (lower half of the drawing, sketched with dotted lines). It is a union of three spheres connected at double points. The differential ρ has poles only on the bottom two components and vanishes identically on the top component. So, setting z = z_1 to define λ, we learn that, with z_1 being in the top component, λ vanishes. (b) The same disc D splits into a union of two discs, again comprising the upper half of the drawing. The interesting case is that z_1 and x_0 are on opposite sides, as shown. The oriented double cover (the full drawing including the bottom half) is a union of two spheres. ρ has poles at x_0 and z_1 and so is nonzero on both branches; hence λ ≠ 0 along this divisor. On the branch containing z_1, ρ has an additional pole at the point labeled p_1 where the two branches meet. Therefore λ depends on p_1, and, if p_1 is the boundary puncture that is forgotten by the forgetful map π : ∂M → N, then along this component of the boundary, λ is not a pullback.

Figure 11: (b) A pair of vectors Ψ and its dual Ψ̄ are added to the matrix model. Because Ψ_i and Ψ̄_j, i, j = 1, ..., N carry only a single "index" - rather than the two indices of the matrix M^i_j - their propagator is naturally represented by a single line rather than the double line of the matrix propagator. These single lines provide boundaries of the surface Σ, so now we get a ribbon graph on Σ with Ψ propagating on the boundary of Σ, as shown. For the model described in the text, the Ψ propagator is 1/z, and this gives a factor 1/z^L where L is the length of the boundary.
In a perturbative expansion near a critical point of W(Φ), the Feynman diagrams become so-called "fat" or ribbon graphs that can be conveniently represented (see fig. 11(a)) by a double line [57]. These are graphs, in general with loops, that can be naturally drawn on some oriented two-manifold of genus g. The contribution of such a graph to the expansion of the matrix integral is weighted by a factor

$$(g_{st} N)^{h}\; g_{st}^{\,2g-2}, \qquad (4.3)$$

where h is the number of holes (faces) of the graph. The large N or 't Hooft limit is obtained by taking the rank N of the matrix to infinity and simultaneously the coupling g_st to zero, keeping fixed the combination

$$\mu = g_{st} N. \qquad (4.4)$$

In the limit, all graphs with a fixed genus and an arbitrary number of holes contribute in the same order, so the matrix integral has an asymptotic expansion of the form

$$\log Z \sim \sum_{g\ge 0} g_{st}^{\,2g-2}\, F_g(\mu),$$

where F_g is the contribution of ribbon graphs of genus g. In general, the matrix integral depends on the coefficients of the potential W and the particular critical point around which the expansion is made. We describe the critical points at the end of this section.
Matrix integrals are governed by Virasoro constraints that are associated to the vector fields L_n ∼ −Tr Φ^{n+1} ∂/∂Φ. Though these constraints can be deduced directly from that representation of L_n, a fuller understanding, with details that we will need below, can be obtained by diagonalizing the matrix as Φ = UΛU^{−1}, with U unitary and Λ = diag(λ_1, λ_2, ..., λ_N). The integral over U cancels the factor of 1/vol(U(N)) in the definition of the matrix integral, and the integral becomes an eigenvalue integral,

$$Z = \frac{1}{N!}\int \prod_{K=1}^{N} d\lambda_K\;\Delta(\lambda)^2\; e^{-\frac{1}{g_{st}}\sum_K W(\lambda_K)},\qquad \Delta(\lambda)=\prod_{K<L}(\lambda_L-\lambda_K). \qquad (4.6)$$

In the saddle points relevant at large N, the fractions of eigenvalues near the various critical points of W are all kept finite. These parameters characterize the saddle-points, and together with the coefficients of the polynomial W(x) play the role of moduli of the matrix model. (In our application, because it only involves a local portion of the spectral curve, we will not really see these parameters.) To derive the Virasoro constraints on the matrix integral, one can start with the vanishing of the integral of a total derivative,

$$0 = \sum_K \int \prod_L d\lambda_L\; \frac{\partial}{\partial\lambda_K}\left[\frac{1}{x-\lambda_K}\,\Delta(\lambda)^2\, e^{-\frac{1}{g_{st}}\sum_M W(\lambda_M)}\right].$$

This implies the identity (4.9), where the symbol ⟨···⟩ denotes the normalized expectation value in the eigenvalue integral (eqn. (4.10)). In eqn. (4.9) we see the matrix resolvent $\mathrm{Tr}\,(x-\Phi)^{-1} = \sum_K (x-\lambda_K)^{-1}$, but as we will see, a slightly more convenient variable is

$$J(x) = g_{st}\,\mathrm{Tr}\,\frac{1}{x-\Phi} - \frac{1}{2}\,W'(x). \qquad (4.11)$$

The identity (4.9) is equivalent to a quadratic relation for J(x) (eqn. (4.12)), and we note that if W is a polynomial, then the quantity P(x) appearing in it is a polynomial in x, as is f(x). If W is a general function W = Σ_{n≥0} u_n x^n regular at x = 0, then P(x) is no longer a polynomial but is regular at x = 0.
When we insert the expression J(x) inside the matrix integral (4.6), where we now consider a general function W(x) = Σ_{n≥0} u_n x^n, it can be written as a differential operator in the couplings u_n (eqn. (4.16)). Comparing to standard formulas in conformal field theory, we are led to set J(x) proportional to ∂ϕ(x) (eqn. (4.17)), where ϕ(x) is a chiral boson in a c = 1 conformal field theory with canonical two-point function ⟨∂ϕ(x)∂ϕ(y)⟩ ∼ 1/(x − y)². The corresponding stress tensor is

$$T(x) = \frac{1}{2}\,(\partial\varphi(x))^2. \qquad (4.20)$$

Making the standard mode expansion $T(x) = \sum_n L_n\, x^{-n-2}$, the equation (4.12) becomes a set of differential equations for the partition function (eqn. (4.22)). Since P(x) is regular at x = 0, it contributes only to the terms in eqn. (4.22) with k ≤ −2, and those terms serve to determine27 P(x). However, for k ≥ −1, P(x) does not contribute to eqn. (4.22) and we get differential equations satisfied by Z:

$$L_n Z = 0, \qquad n \ge -1. \qquad (4.23)$$

In this range of n, the L_n are explicit differential operators in the couplings u_n (eqn. (4.24)). So far, all of this is true for any N; we have not made any large N approximation. For any function h, the quantity g_st Tr h(Φ) has a limit for large N, and for any two functions h_1, h_2, one has a large N factorization

$$\big\langle g_{st}\mathrm{Tr}\,h_1(\Phi)\;\, g_{st}\mathrm{Tr}\,h_2(\Phi)\big\rangle \;\longrightarrow\; \big\langle g_{st}\mathrm{Tr}\,h_1(\Phi)\big\rangle\,\big\langle g_{st}\mathrm{Tr}\,h_2(\Phi)\big\rangle.$$

These properties can be demonstrated by an elementary study of the matrix integral. In particular, both J and f(x) have large N limits, and in the large N limit we write y(x) = 2⟨J(x)⟩_∞, where the subscript denotes the large N limit. Eqn. (4.12) then becomes for large N a hyperelliptic equation for y,

$$y^2 = W'(x)^2 - 4 f(x), \qquad (4.29)$$

which defines what is known as the spectral curve C. In eqn. (4.29), y, W, and f all depend on the "coupling parameters" u_i, though this is not shown explicitly. Remarkably, the spectral curve fully captures the solution of the matrix model. That is, all the perturbative functions F_g can be completely calculated using the geometric data of the spectral curve [58].
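Both large-N statements invoked here, factorization and the emergence of a spectral curve, are easy to see numerically in the Gaussian case. The sketch below (a toy illustration with its own normalizations, not the conventions of the text) samples Hermitian matrices with weight exp(−N Tr Φ²/2) and checks that the normalized trace self-averages and that the eigenvalue density approaches Wigner's semicircle, the simplest spectral curve:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400

def gue_sample(N):
    # Hermitian matrix with Gaussian weight exp(-N Tr Phi^2 / 2),
    # i.e. 't Hooft coupling mu = g_st * N held fixed (here mu = 1).
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2
    return H / np.sqrt(N)

samples = [gue_sample(N) for _ in range(50)]

# Large-N factorization: the normalized trace h = Tr Phi^2 / N self-averages
h = np.array([np.trace(M @ M).real / N for M in samples])
print("mean of Tr Phi^2/N    :", h.mean())   # ~1 in this normalization
print("variance of Tr Phi^2/N:", h.var())    # suppressed, O(1/N^2)

# Eigenvalue density vs the semicircle rho(x) = sqrt(4 - x^2)/(2*pi)
eigs = np.concatenate([np.linalg.eigvalsh(M) for M in samples])
hist, edges = np.histogram(eigs, bins=40, range=(-2, 2), density=True)
x = (edges[:-1] + edges[1:]) / 2
print("max |hist - semicircle|:",
      np.abs(hist - np.sqrt(4 - x**2) / (2 * np.pi)).max())
```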
Suppose that W is a polynomial of degree d+1, thus with d critical points p_1, ..., p_d. Concretely, when one takes the large N limit of the matrix integral, the first step is to pick a critical point of the matrix function $\mathrm{Tr}\,W(\Phi) = \sum_{K=1}^N W(\lambda_K)$ about which to expand. The critical points of this matrix function are found simply by setting each of the λ_K equal to one of the p_j. Up to a permutation of the λ's, the critical points are classified by the number N_i of eigenvalues that equal p_i. The N_i are subject to one constraint

$$\sum_{i=1}^{d} N_i = N. \qquad (4.30)$$

A large N limit is obtained in general by taking N → ∞ keeping fixed

$$\mu_i = g_{st}\, N_i.$$

In the large N limit, the µ_i behave as continuous variables constrained only by

$$\sum_{i=1}^{d} \mu_i = \mu.$$

Thus if W is of degree d+1, there are d "moduli" µ_i that appear in constructing the large N limit. In the above derivation, for finite N, we discovered that the matrix integral is governed by an operator-valued conformal field ∂ϕ(x). For finite N, this field depends on the parameters of the matrix model, namely the u_i and N, as well as x. In the large N limit, the matrix integral can be defined by an expansion around a particular saddle point, and then new parameters appear. For the "bare" matrix integral, without trying to compute the expectation value of the resolvent, the extra parameters are the µ_i. When one tries to compute the expectation value of the resolvent, there is an additional binary choice, since ⟨J(x)⟩ is governed by a quadratic equation with two roots. In the large N limit, and also in the more refined double scaling limit in which N → ∞ with µ = g_st N fixed, the formalism with the conformal field ∂ϕ remains valid, but this field now depends on additional parameters - the µ_i and the choice of sign of ⟨J(x)⟩.
In our application, the µ_i will not be very important, since we will consider only the local behavior near a particular branch point. However, the extension of the conformal formalism to include the choice of sign of ⟨J(x)⟩ is important. It means that ∂ϕ should be interpreted as a conformal field on the spectral curve C, the double cover of the x-plane that is defined by the hyperelliptic equation (4.29). The hyperelliptic curve has an involution y → −y that exchanges the two choices of the sign of ⟨J(x)⟩. Since ∂ϕ is defined as a multiple of J(x) (eqn. (4.17)), ∂ϕ is odd under the hyperelliptic involution.
Double-Scaling Limits And Topological Gravity
Topological gravity and other models of two-dimensional gravity coupled to matter are obtained by taking a suitable double-scaling limit of the generic matrix model. These scaling limits are best understood in terms of the underlying spectral curve. For the so-called (2, 2p−1) minimal model CFT coupled to gravity, the corresponding spectral curve takes the form

$$y^2 = x^{2p-1}$$

(up to normalization). This limiting curve can be obtained by starting from the generic case y² = P(x), where P is a polynomial of degree 2p, and then making 2p−1 branch points coincide and sending the remaining one to infinity. In particular for topological gravity, which corresponds to the case p = 1, we choose to write the underlying curve as

$$\frac{1}{2}\,y^2 = x. \qquad (4.34)$$

This curve can be obtained, for example, from the simple Gaussian matrix model, with a quadratic polynomial W(x) = x². In this example, the polynomial P is P(x) = x² − c, with a constant c.
There are branch points at x = ± √ c. After shifting x by a constant and "zooming in" to a single branch point, one gets the curve of eqn. (4.34).
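Explicitly, the zoom works as follows. Near the branch point at x = √c, set x = √c + ε x̃ and expand:

$$y^2 = x^2 - c = 2\sqrt{c}\,\epsilon\,\tilde x + \epsilon^2\,\tilde x^{\,2} \;\xrightarrow[\;\epsilon\to 0\;]{}\; \tilde y^{\,2} = 2\,\tilde x, \qquad y = \sqrt{\sqrt{c}\,\epsilon}\;\tilde y,$$

so that after absorbing the constant √c ε into a rescaling of y, one recovers eqn. (4.34).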
In the limit that the spectral curve C is described by eqn. (4.34), the operator-valued conformal field ∂ϕ takes a simple form. Because it is odd under the hyperelliptic involution y → −y, its expansion in powers of x has only half-integer powers. We will choose to parametrize this expansion by coefficients s_n multiplying the modes x^{n−1/2}, n ≥ 0 (eqn. (4.35)). Here ϕ is what would usually be called a twisted chiral boson on the complex x-plane, with a twist field at x = 0 (and another at x = ∞). The s_n are functions of the parameters u_n of an underlying matrix model; the precise relationship depends upon exactly what matrix model one starts with before passing to the limit in which the spectral curve C reduces to the curve y² = 2x. This relationship is not very important for us.
What is important is the relationship between the s_n and the corresponding parameters t_n of topological gravity - the parameters that were introduced in eqn. (3.3). This relationship turns out to be

$$t_n = \frac{(2n+1)!!}{2^n}\; s_n.$$

This statement is part of the relationship between the matrix model and intersection theory on M_{g,n}, as proved in [20] as well as [2,21,22]. (Note that the factor 2^n, which is not entirely standard, is a consequence of our particular normalization of the spectral curve in eqn. (4.34).) Inserting these expressions into the loop equations then gives the familiar Virasoro constraints

$$L_n Z = 0, \qquad n \ge -1, \qquad (4.37)$$

where the operators L_n are modes of the stress tensor T = ½(∂ϕ)², with ∂ϕ now given by eqn. (4.35). Note that these equations fix the normalization of the partition function. In particular, if we set all variables s_n = 0 for n > 0, the L_{−1} constraint gives the genus zero contribution (using s_0 = t_0) corresponding to three closed-string punctures on the sphere,

$$\log Z \;\supset\; \frac{t_0^3}{6\, g_{st}^2}.$$

Note that in that case, with only t_0 non-zero, the spectral curve becomes ½y² = x − t_0. Returning to a theme from section 2.4, we are now also in a position to write the spectral curve that corresponds to the model computing the volumes of the moduli space of curves. As we have seen in equation (2.24), in that case the values of the coupling constants are

$$t_n = \frac{(-1)^n\,\xi^{\,n-1}}{(n-1)!}, \qquad n \ge 2, \qquad (4.44)$$

which corresponds to

$$s_n = \frac{n\,(-1)^n\, 2^{2n}\,\xi^{\,n-1}}{(2n+1)!}, \qquad n \ge 2.$$

Inserting these values into the expansion of ∂ϕ, one obtains a sine function of √x, which is, up to normalization conventions, the known expression for the spectral curve [42].
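As a consistency check of the dictionary t_n = (2n+1)!! s_n/2^n, one can verify symbolically that the couplings (4.44) of the volume model indeed map to the stated s_n (a short sympy sketch of our own):

```python
import sympy as sp

n, xi = sp.symbols('n xi', integer=True, positive=True)

t_n = (-1)**n * xi**(n - 1) / sp.factorial(n - 1)                        # eqn (4.44)
s_stated = n * (-1)**n * 2**(2*n) * xi**(n - 1) / sp.factorial(2*n + 1)  # stated s_n

# invert t_n = (2n+1)!!/2^n * s_n, writing (2n+1)!! = (2n+1)!/(2^n n!)
double_factorial = sp.factorial(2*n + 1) / (2**n * sp.factorial(n))
s_from_t = 2**n * t_n / double_factorial

print(sp.combsimp(s_from_t / s_stated))   # prints 1
```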
Perhaps we should add another word about eqn. (4.35). Because the modes of ∂ϕ(x) proportional to x^{n−1/2}, n ≥ 0, commute, we can just declare them to be multiplication by commuting variables s_n. In a derivation that starts with a matrix model based on a function W(x) = Σ_n u_n x^n, the s's would be complicated functions of the u's; the precise functions would depend on exactly how one zooms in on a critical point to get to the spectral curve y² = 2x. Once the coefficients of x^{n−1/2}, n ≥ 0, are fixed as s_n, the coefficients of other terms in ∂ϕ(x) are uniquely determined by the commutation relations and operator product expansion satisfied by ∂ϕ(x).
Branes And Open Strings
Before we consider open strings within topological gravity, let us first discuss the formulation of open strings in a general random matrix model.28 Open strings are naturally included by adding vector degrees of freedom. Let Ψ, Ψ̄ be a pair of conjugate U(N) vectors. We can choose these to be bosonic or fermionic variables. The natural interaction with the matrix variable Φ takes the form

$$\int d\bar\Psi\, d\Psi\;\exp\!\left(-z\,\bar\Psi\,\Psi + \bar\Psi\,\Phi\,\Psi\right). \qquad (4.47)$$

The effect of adding these additional variables is that now the ribbon graph is naturally drawn on a two-manifold Σ with boundary (fig. 11(b)). The propagator of the vector variables has a factor 1/z, leading to a factor 1/z^L, where L is the length of the boundary of Σ.
The integral over Ψ and Ψ̄ just gives a determinant

$$\det(z-\Phi)^{\pm 1} \qquad (4.48)$$

(apart from an irrelevant constant factor that could be absorbed in normalizing the measure). Here the sign in the exponent is −1 or +1 if Ψ, Ψ̄ are bosons or fermions. In terms of the Feynman diagram expansion, this sign means that for fermions, one will get an extra −1 for every component of the boundary of Σ.
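These are the standard Gaussian-integral identities for vector variables (with M = z − Φ and inessential constants dropped):

$$\int d\bar\Psi\, d\Psi\; e^{-\bar\Psi M \Psi} \;\propto\; (\det M)^{-1}\ \ (\text{bosonic }\Psi), \qquad \int d\bar\Psi\, d\Psi\; e^{-\bar\Psi M \Psi} \;=\; \det M\ \ (\text{fermionic }\Psi),$$

so integrating out a bosonic pair inserts det(z − Φ)^{−1}, and a fermionic (Grassmann) pair inserts det(z − Φ).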
are "operators" that create a brane or antibrane with the "modulus" z. We will see that z has the interpretation of a value of x, which parametrizes the base of the hyperelliptic spectral curve 29 y 2 = P (x). More generally, one could add several sets of vector degrees of freedom Ψ a , Ψ a , a = 1, . . . , r, each with its own modulus z a . For definiteness, we will consider the case of insertion of just one factor of V : It is not difficult to derive the modification of the Virasoro equations that reflects the presence of a brane. Repeating the derivation of eqn. (4.9), we get (4.52) Here A V (z) is defined, by analogy with eqn. (4.10), as the expectation of A in the matrix integral Z V (z) . However, it turns out that it is slightly more convenient to make the insertion of V explicit and to write the equivalent identity where A is defined precisely as in eqn. (4.10), with the original matrix integral Z.
We can write the integral (4.51), after multiplying by a factor e^{−W(z)/2g_st}, in the form (4.56). Because we multiplied the partition function by the factor e^{−W(z)/2g_st}, which introduces an extra explicit dependence on the coefficients u_n, the formula for J(x) as a differential operator is still given by equation (4.16). We get the identity

$$T(x)\,V(z) = \left( P(x) + \frac{1/4}{(x-z)^2} + \frac{\partial_z}{x-z} \right) V(z), \qquad (4.57)$$

where now P(x) is defined in the presence of the brane insertion (eqn. (4.58)), and the definition of f(x) is modified accordingly (eqn. (4.59)). P(x) has the same essential properties as before: it is a polynomial of degree 2d if W(x) is a polynomial of degree d + 1, and if W(x) has a general expansion Σ_{n≥0} u_n x^n, then P(x) is regular at x = 0. Moreover, P(x) is regular at x = z.
Eqn. (4.57) has a nice interpretation. We can interpret V(z) as an insertion on the spectral curve (which generically is locally parametrized by x) of a primary field of conformal dimension h = 1/4. On the right hand side of eqn. (4.57), we see the expected singular contributions to the T(x)V(z) operator product expansion, while the terms involving nonnegative powers determine P(x) or are trivial identities. However, there are additional terms in the Virasoro generators. We write the Virasoro generators as L_n = L^c_n + L^o_n, where the superscripts c and o represent "closed-string" and "open-string" contributions. L^c_n comes from T(x) on the left hand side of eqn. (4.57) and is given by the same formula (4.24) as before. To find L^o_n, we move the singular terms in eqn. (4.57) to the left hand side of the equation and expand in powers of 1/x. On top of these Virasoro constraints, there is another useful relation that should be added. Recall that with the introduction of the brane modulus z, the partition function depends on one more variable, and we expect to find an accompanying relation to determine the matrix model. This extra relation can be considered as the analogue of the BPZ equation for degenerate fields. It is obtained as the limit of the expression T(x)V(z) when we take x to z. The equation can be derived by observing the identity (4.68) of [59].
In the right-hand side of (4.68) we recognize part of the loop equation (4.53) in the case x = z. Combining the two equations, we obtain a second-order differential equation in z,

$$g_{st}^2\,\partial_z^2\, Z_V(z) = Q(z)\, Z_V(z). \qquad (4.69)$$

Let us now consider this equation in the double-scaling limit, where the spectral curve takes the form ½y² = x − t_0. In the absence of any further deformations - that is, without any other closed string insertions than the bulk puncture t_0 - the open string partition function Z_V(z) is very simple to compute. We obtain this case by taking the limit of the Gaussian model W(x) = ax², for which we find

$$Q(x) = a^2 x^2 - c, \qquad c = g_{st}(2N+1), \qquad (4.73)$$

and zoom again in on one of the branch points. In this limit the function Q(z) becomes simply Q = 2(z − t_0), and consequently equation (4.69) becomes the Airy equation

$$g_{st}^2\,\partial_z^2\, Z_V(z) = 2(z-t_0)\, Z_V(z).$$

The solution is the Airy function

$$Z_V(z) = \int dv\;\; e^{\left(\frac{v^3}{6} - v(z-t_0)\right)/g_{st}}. \qquad (4.75)$$

In this case one can also directly take the double-scaling limit of the exact expression for Z_V(z) in the Gaussian model, where it is given by the N-th eigenfunction of the harmonic oscillator; see e.g. the discussion in [60].
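The Airy property is easy to check numerically. Setting g_st = 1 and t_0 = 0 for simplicity, the equation reads Z″(z) = 2z Z(z), which is solved by Z(z) = Ai(2^{1/3} z) in terms of the standard Airy function satisfying Ai″(x) = x Ai(x). A small scipy sketch (our own check, not code from the references):

```python
import numpy as np
from scipy.special import airy

# Z(z) = Ai(2**(1/3) * z) should satisfy Z'' = 2 z Z  (g_st = 1, t0 = 0)
z = np.linspace(-3.0, 3.0, 1201)
h = z[1] - z[0]
Ai = airy(2.0 ** (1.0 / 3.0) * z)[0]      # airy() returns (Ai, Ai', Bi, Bi')

Zpp = np.gradient(np.gradient(Ai, h), h)  # finite-difference second derivative
residual = np.abs(Zpp - 2.0 * z * Ai)[5:-5]
print("max |Z'' - 2 z Z| =", residual.max())  # small, O(h^2)
```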
We now claim that the brane partition function, as computed in the double-scaled matrix model, is related to the topological gravity partition function by a Laplace transform,

$$Z_V(z) = \int dv\;\; e^{-v z/g_{st}}\; Z_{top}(v). \qquad (4.76)$$

Something similar has been encountered in the B-model. It has been claimed in [61,62], for example, that there is an important subtlety if one introduces branes on a spectral curve. One can insert branes at a fixed value x = z or at a fixed value of y = v. These two brane insertions are exchanged by a Laplace (or Fourier30) transform.
We have to compare this answer with the calculation in topological gravity, where one computes insertions of the bulk puncture operator τ_0 and the boundary puncture operator σ,

$$Z_{top}(v) = \exp\left[\sum_{n} g_{st}^{\,n}\;\big\langle e^{t_0\tau_0 + v\sigma}\big\rangle_{\chi=-n}\right], \qquad (4.77)$$

as a sum over surfaces with Euler number −n. In the absence of other operators, as discussed in section 3.10, only two non-vanishing contributions are expected: the disc with three insertions of σ, or with one insertion of σ and one of τ_0. So the correct answer should be

$$Z_{top}(v) = e^{\frac{1}{g_{st}}\left(\frac{v^3}{6} + t_0 v\right)},$$

which is consistent with the matrix model calculation (4.75).

Footnote 30: Note that all these functional transforms are here considered as operations on formal power series.
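The consistency claimed here can be seen in one line: differentiating (4.76) with $Z_{top}(v) = e^{(v^3/6 + t_0 v)/g_{st}}$ under the integral sign and integrating by parts (formal manipulations on the contour integral),

$$g_{st}^2\,\partial_z^2 Z_V = \int dv\; v^2\, e^{\left(\frac{v^3}{6}-v(z-t_0)\right)/g_{st}}, \qquad 0 = \int dv\;\frac{d}{dv}\, e^{\left(\frac{v^3}{6}-v(z-t_0)\right)/g_{st}} = \frac{1}{g_{st}}\int dv\left(\frac{v^2}{2}-(z-t_0)\right) e^{\left(\frac{v^3}{6}-v(z-t_0)\right)/g_{st}},$$

so that $g_{st}^2\,\partial_z^2 Z_V = 2(z-t_0)\, Z_V$, which is precisely the Airy equation above.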
One can now include arbitrary closed string perturbations and use this identification for the full partition functions. This becomes clear by considering the combined Virasoro constraints: if one takes into account the above Laplace transformation, they take the form given in [3]. This completes the identification of the double-scaled matrix model with the open-closed topological string partition function.
We thank D. Freed, R. Penner, and J. Solomon for comments on the manuscript. Research of EW supported in part by NSF Grant PHY-1606531. | 2018-05-15T14:25:45.000Z | 2018-04-09T00:00:00.000 | {
"year": 2018,
"sha1": "7e096eb06bba6d14114d58a6c7cc14aa04074f30",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1804.03275",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7e096eb06bba6d14114d58a6c7cc14aa04074f30",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
258154783 | pes2o/s2orc | v3-fos-license | Concurrent Obsessive-Compulsive Symptoms in Patients With Schizophrenia: A Retrospective Study From a Tertiary Care Centre in Sindh, Pakistan
Introduction: The present study aimed to evaluate the proportion of concurrent obsessive-compulsive symptoms (OCSs) among patients with schizophrenia. Methods: A retrospective study was undertaken at the Department of Psychiatry, Jinnah Postgraduate Medical Center, Sindh, Pakistan between 1st March 2019 and 1st April 2020. All cases with diagnosed schizophrenia, irrespective of gender, age, or ethnicity, were eligible for the study. We excluded patients with acute psychosis due to isolated substance use disorder or any organic brain disease. The medical records for each patient were retrieved from the departmental database. Sociodemographic factors including age, gender, ethnicity, and the presence of OCSs and other psychiatric comorbidities were recorded in a predefined pro forma. The presence of OCSs was noted by the attending psychiatrist during history taking as positive or negative. Results: A total of 139 patients were included. A predominance of the male gender was noted. There were 63 (45.3%) patients with concurrent OCSs. Of the patients with OCSs, 42 (66.67%) were male and 21 (33.33%) were female, and 28 (44.44%) were between 31 and 45 years of age. Of the 63 patients with OCSs, 36 (57.14%) had a history of substance abuse (p = 0.471). In the study, 17 (26.98%) Balochis and 19 (30.16%) Pashtuns had OCSs. However, the differences were statistically insignificant. Conclusion: In conclusion, OCSs were frequent in patients with schizophrenia, according to the current study. We found that males, individuals between the ages of 18 and 30 years, Balochis, Pashtuns, and those with a history of substance abuse were more likely to have OCSs; however, the differences were not statistically significant.
Introduction
Schizophrenia is a psychiatric disease that affects cognition, emotion, perception, and other aspects of an individual's behavior, causing disruption in the day-to-day activities of a patient [1]. Schizophrenia is among the most common causes of disability globally, affecting an estimated 1% of the population [2].
The burden of schizophrenia was approximately 0.28% in 2016, irrespective of gender. Globally, the number of cases rose from 13.1 million to 20.9 million within two decades [2]. Schizophrenia is associated with 13 million years of life lived with disability, highlighting the substantial burden of the disease. The proportion of schizophrenia cases has risen at an unsettling rate in Pakistan compared to other mental diseases [3]. A study from Pakistan revealed that schizophrenia was the most common disorder among both males (30.4%) and females (25.2%) in the overall sample [4].
In the last few decades, literature has emerged highlighting the co-existence of psychiatric disorders in individuals with schizophrenia. Comorbid conditions in schizophrenia, including obsessive-compulsive symptoms (OCSs), impaired cognition, depression, anxiety, and substance abuse, have an impact on the management plan, patient compliance with medication, and patient outcomes [4][5][6]. The occurrence of OCSs in individuals with schizophrenia has been reported in recent studies at proportions varying from 10% to 64% [5][6][7].
Preliminary studies demonstrated positive outcomes and suggested that the co-occurrence of OCSs and schizophrenia was an uncommon phenomenon. However, more recent research has shown that OCSs are present in a considerably larger proportion of schizophrenia patients and have a negative impact on the course and severity of the illness [4][5][6].
According to a meta-analysis of 50 studies published in 2011, 38.3% of individuals with schizophrenia had anxiety issues [7]. Obsessive-compulsive disorder (OCD) was reported in 12.1% of patients with schizophrenia, which is a higher incidence than in the general population [8]. These findings were supported by another meta-analysis involving 3978 patients with schizophrenia, with a prevalence of OCD of 12.3% and OCS of 30.3% [9].
Schizophrenia is a complex mental disorder that has a wide array of symptoms. It is often difficult to diagnose because its symptoms can be similar to those of other mental health conditions, such as OCD, bipolar disorder, or major depressive disorder. Comorbid psychiatric illnesses are harder to diagnose and treat. Furthermore, many people with schizophrenia may not seek treatment due to stigma or a lack of understanding about their condition [10]. Thus, it would be useful for psychiatrists to understand the symptomatology of schizophrenia and its association with comorbid psychiatric illnesses. The present study aimed to highlight the burden of comorbid OCSs in patients with schizophrenia. The findings of the study would identify the need for individualized treatment plans for better patient outcomes.

Materials And Methods

All cases of schizophrenia diagnosed and presenting between 1st January 2015 and 28th February 2019 were eligible to be entered in the study, irrespective of gender, age, or ethnicity. We excluded patients with acute psychosis due to isolated substance use disorder or any organic brain disease.

The OpenEpi software (https://www.openepi.com/SampleSize/SSMean.htm) was used to determine the required sample size. By keeping the estimated prevalence of OCSs among patients with schizophrenia at 10% [11], a margin of error of 5%, and a confidence level of 95%, a sample size of 139 was determined.
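For transparency, the quoted sample size follows from the standard single-proportion formula n = Z²p(1−p)/d² that OpenEpi implements; a minimal reproduction (our own script, not the OpenEpi source code):

```python
import math

p = 0.10      # anticipated prevalence of OCSs among patients with schizophrenia
d = 0.05      # absolute margin of error
z = 1.959964  # two-sided 95% confidence level

n = z**2 * p * (1 - p) / d**2
print(math.ceil(n))  # -> 139
```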
Since this was a retrospective study, informed written consent of the patients was not applicable. The medical records for each patient were retrieved from the departmental database from 1st March 2018 to 30th March 2019. All cases of schizophrenia were diagnosed by an experienced consultant psychiatrist. As per the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), an individual was diagnosed with schizophrenia if at least two of the five main symptoms (delusions, hallucinations, disorganized speech, grossly disorganized or catatonic behavior, and negative symptoms) were present for at least six months and had significantly disrupted the individual's ability to work or maintain relationships [12]. Furthermore, the presence of obsessive or compulsive symptoms, or both, was identified using an operational definition. Obsessive symptoms were defined as persistent, repetitive, intrusive, and distressful thoughts unrelated to the individual's delusions, while compulsions were defined as repetitive goal-directed rituals clinically different from the mannerisms of schizophrenia [13]. The presence of OCSs was noted by the attending psychiatrist during history taking as positive or negative.
Substance use disorders were defined as uncontrollable use of alcohol and/or drugs that has significant effects on the users' health and their ability to function and meet responsibilities at work, school, or home. The DSM-5 criteria for diagnosing a substance use disorder include signs of impaired control, social problems, risky use, and specific pharmacological criteria [12].
The medical records of all eligible cases were retrieved for data extraction. Any personal identifiers such as names, hospital numbers, or home addresses were not collected, to maintain the anonymity of the patients. Sociodemographic factors including age, gender, ethnicity, and the presence of OCSs and other psychiatric comorbidities were recorded in a predefined pro forma.
Cases with partial or incomplete history or incomplete note-taking at admission or presentation were not included in the final analysis. Data were entered into IBM SPSS Statistics for Windows (Version 23.0, Armonk, NY: IBM Corp.). All continuous data such as age were presented as averages. All categorical data, including gender, presence of OCSs, and substance use disorder, were presented as frequencies and proportions. The chi-square test was used to find the impact of sociodemographic and clinical parameters on the concurrence of OCSs in patients with schizophrenia. A p-value of ≤ 0.05 was deemed statistically significant.

Results

Out of the 75 patients with a history of a substance use disorder (Table 2), 61 (81.3%) used cannabis (bhang/chars), eight (10.7%) used N-methyl-D-aspartate (NMDA), three (4%) used phencyclidine, and three (4%) used Naswar.
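The substance-abuse comparison can be approximately reproduced from the counts reported in this paper: 36 of the 63 patients with OCSs had a substance-use history, and, by subtraction from the 75 total users, 39 of the 76 patients without OCSs did. A hedged sketch using scipy (the original analysis was run in SPSS, whose exact test options may differ slightly):

```python
from scipy.stats import chi2_contingency

#                 substance use   no substance use
table = [[36, 63 - 36],   # patients with OCSs
         [39, 76 - 39]]   # patients without OCSs

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")  # p close to the reported 0.471
```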
Discussion
The present study retrospectively assessed the medical records of individuals with schizophrenia for the presence of OCSs. Furthermore, the impact of age, gender, ethnicity, and substance abuse was examined. The current study revealed that 45.3% of individuals had concurrent OCSs. The majority of the individuals with schizophrenia belonged to Balochi and Pashtun ethnicities. There was also a male preponderance observed in the study. Furthermore, the substance abuse rate was alarmingly high among the patients. However, none of these factors (age, gender, ethnicity, and substance abuse) showed a statistically significant association with OCSs.
The current study findings were in accordance with the published literature. A study published by Kontis et al. assessed OCSs in 110 patients with schizophrenia. The study revealed that 51 patients had at least one OCS. Interestingly, the study further noted that patients with at least one OCS had better social functioning than those without OCSs [14].
Another study supported this finding, revealing that patients with schizophrenia show a U-shaped relationship between functioning and the presence of OCSs. The study concluded that mild OCSs had a direct association with better functioning, whereas moderate and severe OCSs had an inverse relationship with functioning [15].
In contrast to our study, Ahn Robins et al. assessed approximately 22,000 patients with schizophrenia and other psychotic disorders. Of these, 24% had OCSs and 11.9% had OCD. Although the rate of OCSs is much lower than in our study, that study revealed that individuals with either OCSs or OCD had an increased likelihood of aggressiveness (odds ratio = 1.18; 95% CI, 1.10-1.26) and cognitive impairment (odds ratio = 1.21; 95% CI, 1.13-1.30) [16].

The concurrence of OCSs and OCD has an impact on the severity of schizophrenia, and it influences the disease course and treatment plans. A meta-analysis concluded that individuals with schizophrenia and concurrent OCSs or OCD had increased severity of psychotic symptoms (p = 0.0104) [17]. There are certain studies claiming that certain antipsychotics exacerbate OCS severity among patients with schizophrenia [18][19][20].
The concurrence of OCSs and OCD has an impact on the severity of schizophrenia, and it influences the disease course and treatment plans. A meta-analysis concluded that individuals with schizophrenia and concurrent OCS or OCD had increased severity of psychotic symptoms (p = 0.0104) [17]. There are certain studies claiming that certain antipsychotics exacerbates the OCSs severity among patients with schizophrenia [18][19][20]. | 2023-04-16T15:26:22.477Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "5ffb7567ec66673db302a7d42af791555cea8eaf",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/132541/20230414-6569-uevnxc.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83e88c3af8dd23683a55462af6a9aefedf09dd84",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
169751427 | pes2o/s2orc | v3-fos-license | Workers Protection in Working Agreements: A Case of Employee’s Diploma Certificate on Company’s Custody as a Warranty
A working agreement is an agreement between the parties that sets out the rights and obligations of both the employees and the employer. If either party is incapable or incompetent to commit a legal act, the working agreement may be canceled; likewise, if it is made without the promised work, or its contents are contrary to public order, morality, or the applicable legislation, the agreement is null and void. Each party has to consent freely and must not be coerced, and the terms have to be clear so that there is no misunderstanding in the future. Working agreements ideally protect the interests of all the parties involved. In their working agreements, some companies require employees to deposit their diplomas as a job warranty. The stated reason for the policy of holding an employee's diploma as a warranty for a fixed-term work agreement is that employees often feel uncomfortable working for the company and quit before the expiration of the working agreement, which could harm the company; the aim is to make employees stay longer, at least until the expiration of the working agreement. However, this policy is not considered to provide protection to employees; it is disadvantageous to the employee party and has no juridical basis for its implementation. It violates the 1945 Constitution's provisions on human rights, under which "everyone has the right to find a decent job and to earn a decent living," and Act No. 13 of 2003 on Manpower. The method used is normative legal research, grounded in the essential elements and main objectives of the law: justice, benefit, and legal certainty for employees in working agreements with companies, in order to create a working relationship based on Pancasila industrial relations.
Introduction
The working relationship starts with the working agreement, which binds both employee and employer and contains the rights and obligations of the parties. The working agreement (in Dutch, arbeidsovereenkomst) is defined in Act No. 13 of 2003 on Manpower, Article 1 paragraph (14), as "an agreement between workers or labourers and entrepreneurs or employers which contains working requirements, and the rights and obligations of the parties". The working agreement is binding in nature, meaning that the requirements of a working agreement under labour law must be obeyed; a working agreement determined by the employer and the employees must not contradict the agreement made between the employer and the labour union at the company. Similarly, the working agreement must not contradict the company rules made by the employer. According to Soepomo, the relationship between employee and employer is a working relationship that arises when there is a working agreement between the two parties.
The parties are bound by the agreement: on the one hand, the employee is willing to work for a salary, and on the other, the employer hires the employee and pays the salary. Termination of employment often occurs on the grounds that the employee disobeyed company rules. The employee's responsibility for losses caused by such conduct is generally limited to losses arising from deliberate acts or negligence. "Deliberate" means acts, or omissions, intended to harm another party's interest (the employer's); negligence means losses arising from a lack of care. Further, a working agreement in a company should ideally protect the interests of all parties to the agreement, because an agreement should be made on the basis of a deal between both parties; nevertheless, some agreements require employees to deposit their diplomas as a job warranty. The rationale for the policy of holding the employee's diploma as a warranty, especially under fixed-term working agreements, is that employees who feel uncomfortable working at a company may decide to resign before the agreement expires. That would be a loss to the company, and it is one of the things that encourages companies to adopt a policy of holding diplomas for the duration of the contract, so that employees stay longer, at least until the working agreement expires. However, this policy has no juridical basis and can even be said to be contrary to the constitution. To establish a valid agreement, the parties must fulfil the legal requirements of an agreement according to Article 1320 of the Civil Code and Article 52 of Act No. 13 of 2003 on Manpower: there must be a deal between the company and the employee establishing the agreement, and both employer and employee must have the capacity to make an agreement. If one of the requirements is not fulfilled, or a party entered the agreement under pressure or coercion, the agreement may be annulled, as stated in Article 1321, which reads: "No agreement is of any value if granted by error, obtained by duress or by fraud".
Furthermore, the agreement must concern a certain object and have a lawful cause. If it is made without a promised job, or its contents are contrary to legislation, public order, or morals, the agreement is null and void. Company rules, meanwhile, are rules made unilaterally by the employer containing job requirements and the company code of conduct. Based on the above description, the practice of fast-food companies in Pekanbaru holding workers' diplomas, such as Boga Group, which runs the Japanese restaurant Sushi Tei, violates the 1945 Constitution's human-rights provision that every citizen shall have the right to work and to earn a humane livelihood.
Research Methods
This study uses observation and library research to support data accuracy and to seek clarity about job requirements involving diplomas held as a warranty by fast-food companies in Pekanbaru. The study therefore adopts a descriptive-analytical approach to present its findings. The objects of the research were several fast-food companies in Pekanbaru that hold their workers' diplomas in custody as a job warranty. The research was conducted in Pekanbaru, Riau, Indonesia.
Workers' Protection in Working Agreements:
This study discusses several aspects of the agreement, not only its legal requirements but also the principles on which it rests, namely:
Principle of Freedom of Contract
Freedom of contract is indirectly regulated in Article 1338 paragraph (1) of the Civil Code, which emphasizes that all legally made agreements act as law for those who made them. Freedom of contract reflects the development of the free-market concept pioneered by Adam Smith, whose classical economic theory was grounded in natural law. The same idea underlies Jeremy Bentham's thought, known as utilitarianism. Utilitarianism and the classical laissez-faire economic theory are considered to complement each other and together sustain modernist liberal thought. Terms and conditions in a fixed-term contract or agreement may ultimately violate what is fair and reasonable. The situation described above can arise in the relationship between an employee and an employer who make an agreement: the party with the stronger bargaining position can force its will on the weaker party and gain an advantage from doing so. The principle of pacta sunt servanda is emphasized in Article 1338 paragraph (1): "all legally executed agreements shall bind the individuals who have concluded them by law. They cannot be revoked otherwise than by mutual agreement, or pursuant to reasons which are legally declared to be sufficient. They shall be executed in good faith".
Principle of Good Faith
The principle of good faith means that the execution of an agreement must not be contrary to decency and justice. Article 1338 paragraph (1) explains that agreements "shall be executed in good faith"; this provision rests on the principle of good faith.
Principle of Decency
The principle of decency is set out in Article 1339 of the Civil Code, which relates the contents of an agreement to what decency requires given the nature of the agreement: an agreement should contain appropriate obligations binding all parties to it and must not be contrary to applicable law. A working agreement containing a clause on diploma custody strengthens the company's position in achieving its targets, while the employee suffers the disadvantage of losing a valuable document that proves he or she has completed a certain level of education and that is a key requirement for obtaining a better job. The Manpower Act contains no provision allowing a company to hold its employees' diplomas; it provides only that a working contract may be made on the basis of a deal that fulfils the legal requirements of an agreement, as regulated in Article 52 of Act No. 13 of 2003. According to J. Satrio, the practice is permissible as long as there is a deal between employee and employer. Such a deal is usually set out in a working agreement that binds employee and employer in a working relationship, so a diploma may be held by the company as long as the employee agrees and remains bound by the working relationship. The agreement should be made by the parties on the basis of the principles of agreement, using these principles to create stability and preserve the parties' rights before the agreement binds them. Fast-food companies in Pekanbaru, including Boga Group, which runs Sushi Tei, give several reasons for binding their employees to working agreements under which their diplomas are held: (1) the diploma is considered the employee's commitment to the job; (2) it makes it harder for employees to resign; (3) if an employee does something that harms the company, the employee should bear the loss; and (4) it protects company security, so that employees do not leak company secrets.
The legislation relating to manpower contains no express prohibition on holding employees' diplomas. Many companies do this as a warranty for the employer, considering that many employees resign before their fixed-term working agreement expires. Such working agreements are nonetheless deemed inappropriate and unfeasible, because employees do not feel free with respect to their diploma certificates; employees often feel insecure in their jobs and want to resign to seek decent work, as provided for in the 1945 Constitution.
Settlement for Employees Who Resign before the End of the Working Agreement
The clauses of the working agreement provide that the first party, the company, has the right to terminate the working relationship with the second party, the worker, if the worker breaches the agreement in a way that is contrary to company rules or to the law applicable in Indonesia. The most important element of the agreement is its content, because it is made by the parties themselves; the agreement therefore cannot be cancelled unilaterally. If a dispute arises because an employee resigns from the company, feeling uncomfortable and wanting a better job while the agreement or contract has not yet ended, a settlement effort such as mediation with the company should be made first; the employee should resign properly and submit a resignation letter. In this regard, companies with professional management do not hold diplomas, because they already have a balanced working system between the company and its employees. Normally, companies only ask to see the original diploma to verify it against the submitted copy, as with other documents. In addition, companies could require other warranties, such as a sum of money agreed by both parties.
Conclusion
Regarding the protection of workers in working agreements involving diplomas held as a warranty for the employer in fast-food companies in Pekanbaru: the Manpower Act contains no rule permitting companies to hold their employees' diplomas; it provides only that a working contract may be made on the basis of a deal that fulfils the legal requirements of an agreement, as regulated in Article 52 of Act No. 13 of 2003. Fast-food companies in Pekanbaru, including Boga Group, which runs Sushi Tei, bind their employees in working agreements under which their diplomas are held for several reasons: the diploma is considered the employee's commitment to the job; it makes it harder for employees to resign; if employees do something that harms the company, they should bear the loss; and it protects company security, so that employees do not leak company secrets.
As for the settlement for employees who resign before the end of the agreed working period: the most important element of the agreement is its content, because it is made by the parties themselves, so the agreement cannot be cancelled unilaterally. If a dispute arises because an employee resigns from the company, feeling uncomfortable and wanting a better job while the agreement or contract has not yet ended, a settlement effort such as mediation with the company should be made first; the employee should resign properly and submit a resignation letter. | 2019-05-30T23:45:11.000Z | 2018-07-24T00:00:00.000 | {
"year": 2018,
"sha1": "7fee2ff80ed8fe996bc834de086d22a020fd05a6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/175/1/012080",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "176e8c4fba78b679247a1f2d0566730ceb126457",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
234621906 | pes2o/s2orc | v3-fos-license | Synergistic Effect of PD-L1 and HLA Class I On Prognosis of Patients With Hepatocellular Carcinoma
Background: Up-regulating the expression of PD-L1 and down-regulating the expression of HLA class I are the two main means of tumor-induced immune tolerance. The purpose of this study is to explore whether up-regulation of PD-L1 and down-regulation of HLA class I have a synergistic effect on prognosis in hepatocellular carcinoma (HCC). Methods: A cohort of 185 consecutive HCC patients was included in this study. According to the expression of PD-L1 and HLA class I, patients were divided into three subgroups: group A was PD-L1 negative + HLA class I high expression, group B was PD-L1 positive + HLA class I high expression or PD-L1 negative + HLA class I low expression, and group C was PD-L1 positive + HLA class I low expression. Results: PD-L1 positivity was significantly associated with cirrhosis and tumor-infiltrating lymphocytes (P = 0.026; P = 0.000, respectively). Neither PD-L1 positivity nor low HLA class I expression alone was significantly associated with shorter survival (P = 0.116; P = 0.171, respectively). The overall survival time of group C was significantly lower than that of groups A and B (31 months vs 58 months vs 49 months, P = 0.004), which was further confirmed by multivariate Cox regression analysis (group A/B vs group C, HR 3.652, 95% CI 1.627-8.200, P = 0.002). Conclusions: The synergy of PD-L1 positivity and low HLA class I expression results in a significant reduction in the survival of patients with HCC, providing theoretical support for combination immunotherapy in the future.
Introduction
The cancer-immunity cycle indicates that recognizing tumor cells and killing tumor cells constitute the two bolstering pillars of the tumor-specific immune response [1]. Human leukocyte antigen (HLA) class I molecules participate in presenting tumor-specific antigens to CD8+ T cells and in activating CD8+ T cells, and CD8+ T cells kill the target tumor cells. Dysfunction of HLA class I molecules or CD8+ T cells can result in immune tolerance [2]. The immune checkpoint PD-1/PD-L1 (programmed cell death protein-1/programmed death-ligand 1) plays an important role in inhibiting the function of CD8+ T cells in hepatocellular carcinoma (HCC) [3].
Whether PD-L1 up-regulation by HCC significantly reduces survival remains controversial [4,5]. The results of a phase I/II clinical trial (CheckMate 040) showed that PD-L1 status had no significant impact on the objective response rate among patients with advanced HCC treated with nivolumab [6], an anti-PD-1 monoclonal antibody that blocks the immune checkpoint interaction between PD-1 and PD-L1. A phase III trial (KEYNOTE-240) also did not yield a satisfactory result for pembrolizumab (an anti-PD-1 monoclonal antibody) as second-line treatment in patients with advanced HCC [7]. From these clinical trial results it could be speculated that PD-L1 expressed by HCC has a limited impact on prognosis in patients with HCC. The rationale underlying this speculation is that multiple pathways are involved in HCC-induced immune tolerance.
Down-regulation of HLA class I molecules gives rise to failure of HCC-associated antigen presentation and a subsequent inability of the immune system to recognize HCC [2]. Dysfunction of HLA class I molecules on HCC cells may be indicative of a gloomy prognosis, although the results of previous studies in other tumors have not been consistent [8,9]. There are few studies on the mechanism of HLA class I antigen down-regulation in HCC [10,11], and prognostic information about HLA class I molecules in HCC is very limited.
The combination of PD-L1 up-regulation and HLA class I antigen down-regulation might exert a synergistic impact on prognosis in HCC. Therefore, we conducted a retrospective study to investigate the prognostic value of PD-L1 and HLA class I antigen in HCC, and their synergistic impact on survival.
Study population
This study was approved by the review board of Peking University First Hospital. Written informed consent was obtained from all patients before tissue samples were collected. We retrospectively reviewed the medical registry at our institution and identified all patients diagnosed with HCC between November 2011 and December 2017. The eligibility criteria for inclusion were as follows: (1) underwent surgical resection; (2) definite pathologic diagnosis of HCC; (3) HCC treatment-naive before surgery. Patients who died within one month after surgery were excluded.
There were 185 patients, 29 females and 156 males, with a mean age of 58 years (range, 27 to 80 years), who met the above criteria. Clinical characteristics, including age, gender, risk factors (HBV or HCV infection), liver cirrhosis, preoperative serum alpha-fetoprotein (AFP) levels, tumor size, vascular invasion, and Child-Pugh classification, were retrieved from patients' medical records. Postoperative treatments and surveillance followed a uniform guideline. Survival time was calculated from the date of surgery to the date of death or last follow-up. During the follow-up, 109 patients were censored and 76 died. The median follow-up was 32 months (range, 2 to 91 months).
Immunohistochemical staining
Immunohistochemical staining was performed on formalin-fixed, paraffin-embedded tumor tissue sections following a standard protocol [12]. PD-L1 expression was detected using a rabbit monoclonal antibody (ab205921, ABCAM, 1:400 dilution). HLA class I expression was detected using a mouse monoclonal antibody (ab70328, ABCAM, 1:100 dilution). Briefly, 4-μm sections were deparaffinized in xylene and dehydrated in an ethanol series, followed by heat-mediated antigen retrieval with EDTA buffer in an autoclave and deactivation of endogenous peroxidases with 3% H2O2. All sections were incubated with anti-PD-L1 or anti-HLA class I monoclonal antibody overnight at 4°C. Subsequently, the sections were rinsed and incubated with secondary antibodies (horseradish peroxidase/Fab polymer conjugated; PV-6000, ZSGB-BIO). Reaction products were visualized with 3,3'-diaminobenzidine (ab64238, ABCAM) and counterstained with hematoxylin. Human tonsil tissue was used as a positive control. Negative controls were treated identically but without the addition of primary antibodies.
A tumor cell was considered PD-L1 or HLA class I positive when the cell membrane was stained, regardless of cytoplasmic staining [13]. Samples with membranous expression of PD-L1 on ≥1% of the total cells were defined as PD-L1-positive tumors [6] (Figure 1). Both staining intensity and the percentage of positive tumor cells were considered in assessing HLA class I antigen expression [8]; HLA class I expression was considered low when the score was less than 5. The scores were calculated from the staining intensity grade (0: no staining; 1: weak; 2: moderate; 3: strong) and the staining percentage grade (0 for 0%; 1 for <10%; 2 for <30%; 3 for <80%; 4 for ≥80%) (Figure 2). The number of tumor-infiltrating lymphocytes (TILs) was counted under a magnification of ×400, and infiltration with ≥100 lymphocytes was defined as TILs positive [14] (Figure 3). The expression of PD-L1 and HLA class I was independently evaluated by two experienced pathologists blinded to the clinical information, and any discrepancy in expression level was resolved by mutual discussion.
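For readers who want to make these scoring rules concrete, the sketch below encodes the cut-offs exactly as stated above. One caveat: the paper says the score is "calculated based on" the intensity and percentage grades without stating how they are combined; the code assumes they are summed, which is a common convention for composite IHC scores and is consistent with a "less than 5" cut-off, but this is an assumption, not the authors' stated method.

```python
# Minimal sketch of the IHC scoring rules described above.
# Assumption (not stated in the paper): the HLA class I score is the SUM of
# the intensity grade (0-3) and the percentage grade (0-4).

def percentage_grade(pct_positive: float) -> int:
    """Map the percentage of positive tumor cells to grade 0-4."""
    if pct_positive == 0:
        return 0
    if pct_positive < 10:
        return 1
    if pct_positive < 30:
        return 2
    if pct_positive < 80:
        return 3
    return 4

def hla_class_i_low(intensity_grade: int, pct_positive: float) -> bool:
    """HLA class I expression is 'low' when the composite score is < 5."""
    score = intensity_grade + percentage_grade(pct_positive)  # assumed sum
    return score < 5

def pd_l1_positive(pct_membranous: float) -> bool:
    """PD-L1 positive: membranous staining on >= 1% of tumor cells."""
    return pct_membranous >= 1.0

def tils_positive(lymphocyte_count: int) -> bool:
    """TILs positive: >= 100 lymphocytes per x400 field."""
    return lymphocyte_count >= 100

def subgroup(pd_l1_pos: bool, hla_low: bool) -> str:
    """Assign the A/B/C subgroup used in this study."""
    if not pd_l1_pos and not hla_low:
        return "A"   # PD-L1 negative + HLA class I high
    if pd_l1_pos and hla_low:
        return "C"   # PD-L1 positive + HLA class I low
    return "B"       # exactly one adverse marker

# Example: moderate intensity (2) on 25% of cells -> score 2 + 2 = 4 -> low.
print(subgroup(pd_l1_positive(3.0), hla_class_i_low(2, 25.0)))  # -> "C"
```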
Statistical analysis
Categorical data were presented as numbers (n) or percentages, and differences between two groups were analyzed by the chi-squared test; alternatively, Fisher's exact test or the continuity correction was used when the assumptions of the chi-squared test were violated. Survival curves were estimated by the Kaplan-Meier method and compared by the log-rank test. Univariate and multivariate regression analyses for hazard ratios (HR) were performed using the Cox proportional hazards model. All statistical tests and p-values were two-tailed, and p-values < 0.05 were considered statistically significant. All analyses were performed using SPSS 16.0 (IL, USA).
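The authors used SPSS; purely as an illustration, the same pipeline (chi-squared/Fisher tests on a 2x2 table, Kaplan-Meier curves with a log-rank test, and a Cox model) can be sketched in Python with scipy and the lifelines package. The toy records below are invented for the example and are not the study's data.

```python
# Hedged sketch of the statistical pipeline (not the authors' SPSS code).
# Toy data only; requires: pip install pandas lifelines scipy
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Invented example records: months of follow-up, death indicator, markers.
df = pd.DataFrame({
    "months":    [31, 58, 49, 12, 72, 24, 40, 66, 18, 55],
    "death":     [1,  0,  1,  0,  0,  1,  1,  0,  1,  0],
    "pd_l1_pos": [1,  0,  1,  1,  0,  1,  0,  0,  1,  0],
    "hla_low":   [1,  0,  0,  1,  0,  1,  0,  1,  1,  0],
})

# Chi-squared test (or Fisher's exact for small counts) on a 2x2 table.
table = pd.crosstab(df["pd_l1_pos"], df["hla_low"])
print("chi2 p =", chi2_contingency(table)[1])
print("fisher p =", fisher_exact(table.values)[1])

# Kaplan-Meier curve for one group, compared with the log-rank test.
grp = df["pd_l1_pos"] == 1
kmf = KaplanMeierFitter()
kmf.fit(df.loc[grp, "months"], df.loc[grp, "death"], label="PD-L1 positive")
result = logrank_test(df.loc[grp, "months"], df.loc[~grp, "months"],
                      df.loc[grp, "death"], df.loc[~grp, "death"])
print("log-rank p =", result.p_value)

# Multivariate Cox proportional hazards model for hazard ratios.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()  # reports HR (exp(coef)) with 95% CI per covariate
```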
Results
Of the 185 patients enrolled, PD-L1 positivity was found in 41 (22.2%) patients, and low HLA class I antigen expression was found in 60 (32.4%) patients. TILs positivity was found in 12 patients (6.5%). Based on the immunohistochemical results for PD-L1 and HLA class I antigen, the 185 patients were classified into three subgroups: group A (PD-L1 negative/HLA class I antigen high expression), group B (PD-L1 negative/HLA class I antigen low expression or PD-L1 positive/HLA class I antigen high expression), and group C (PD-L1 positive/HLA class I antigen low expression).
Association of PD-L1 and HLA class I antigen with clinical characteristics
As shown in Table 1, PD-L1 expression was significantly associated with cirrhosis (P = 0.016) and TILs (P = 0.000). The remaining clinical characteristics, including gender, age, virus infection, AFP, tumor size, and vascular invasion, were not significantly associated with PD-L1 expression (P > 0.05). None of the clinical characteristics, including gender, age, virus infection, cirrhosis, AFP, tumor size, and vascular invasion, was significantly associated with HLA class I antigen expression (P > 0.05). Although tumors with low HLA class I antigen expression were more likely to exhibit low AFP levels and vascular invasion than those with high expression, the differences did not reach statistical significance (P = 0.095, P = 0.052).
In addition, there was no significant difference among the three subgroups (group A, group B, and group C) in clinical characteristics (Table 2).
Association of PD-L1 and HLA class I antigen with survival
Although patients with PD-L1-positive tumors had shorter survival than those with PD-L1-negative tumors, the difference was not statistically significant (P = 0.116) (Fig. 4a). There was also a trend toward shorter survival in patients with low HLA class I antigen expression than in those with high expression (P = 0.171) (Fig. 4b). However, the coexistence of PD-L1 positivity and low HLA class I antigen expression was significantly associated with worse survival (Fig. 4c): group C had shorter survival than groups A and B (31 months vs 58 months vs 49 months, P = 0.004).
Hazard ratios assessed by the Cox proportional hazards model are shown in Table 3.
Discussion
Hepatocellular carcinoma (HCC) is one of the most common cancers and ranks as the second leading cause of cancer-related death worldwide [15]. In spite of recent advances in treatment options, prognosis remains quite poor, especially in patients with advanced HCC, who have a median survival of less than one year [16]. In light of the poor prognosis and resistance to chemotherapy and radiotherapy, other treatment strategies have been investigated extensively. The multikinase inhibitor sorafenib was the first systemic agent to show a significant improvement in overall survival for patients with advanced HCC [17]. However, anti-angiogenic agents (sorafenib and lenvatinib) yield only a modest improvement of about 3 months in overall survival [18]. Therefore, immunotherapy, which aims to interrupt immune checkpoint interactions and break immune tolerance, has come into the spotlight.
Experimental evidence has demonstrated that PD-L1 on tumor cells can deliver inhibitory signals to PD-1+ CD8+ T cells, suppressing the immune response by inducing apoptosis, anergy, and functional exhaustion of CD8+ T cells [19]. A further pathologic study showed that PD-L1-positive HCC was significantly associated with biological aggressiveness [20], including vascular invasion, poor differentiation, satellite nodules, and high AFP levels; nevertheless, whether PD-L1 expression influences prognosis in patients with HCC remains open to debate. Results from several studies investigating the prognostic significance of PD-L1 in HCC are inconsistent [4,5,21]. A meta-analysis indicated that PD-L1 positivity was predictive of shorter overall survival and disease-free survival [22]. However, that meta-analysis suffers from several limitations [23]. First, it did not screen all studies in this field. Second, one included study used serum rather than tumor samples to assess PD-L1 status. Third, the included patients received different treatments. All of these limitations increase heterogeneity and undermine the reliability of the results.
Our present study showed that patients with PD-L1-positive tumors had shorter survival than those with PD-L1-negative tumors, but the difference was not statistically significant (P = 0.116). PD-L1 expression was significantly associated with cirrhosis (P = 0.016) and TILs (P = 0.000), whereas the remaining clinical characteristics, including gender, age, virus infection, AFP, tumor size, and vascular invasion, were not (P > 0.05). The CheckMate 040 and KEYNOTE-240 trials indicated that anti-PD-1 monoclonal antibodies could not significantly increase survival and that PD-L1 expression did not influence the objective response rate in patients with advanced HCC [6,7], which might suggest that PD-L1 expression in HCC exerts a modest impact on prognosis. Previous studies have shown that TILs can trigger PD-L1 up-regulation in tumor cells by secreting interferon-γ [24], which is further confirmed by the significant association between PD-L1 positivity and TILs in our present study. Abbreviation: CI, confidence interval.
It has been reported that genetic variations of PD-1 predispose patients with chronic HBV infection to cirrhosis [25]. Whether the association of tumor PD-L1 positivity with cirrhosis is a causal relationship needs to be investigated further.
The value of HLA class I molecules in HCC has rarely been investigated. Down-regulation of HLA class I antigen is one of the strategies of HCC-induced immune tolerance [1]. Direct evidence of tumor escape from T-cell immunity caused by MHC-I down-regulation is the facial cancer of the Tasmanian devil, which is transmissible to histo-incompatible companions [26,27]; the underlying mechanism is that this cancer silences the genes for antigen presentation at the epigenetic level. In our present study, survival was shorter in patients with low HLA class I antigen expression than in those with high expression, but the difference did not reach statistical significance (P = 0.171). None of the clinical characteristics, including gender, age, virus infection, cirrhosis, AFP, tumor size, and vascular invasion, was significantly associated with HLA class I antigen expression (P > 0.05).
Multiple pathways are involved in HCC-induced immune tolerance [1]. Logically, two immune pathways may exert a synergistic impact on immune tolerance, which argues for combination immunotherapy in cancers. Combination immunotherapy (ipilimumab/nivolumab) also yields better survival than single-agent immunotherapy (ipilimumab or nivolumab) in advanced melanoma [28]. Our present study showed that the coexistence of PD-L1 positivity and low HLA class I antigen expression was significantly associated with worse survival (P = 0.004), which provides a rationale for combination immunotherapy in HCC. To date, two leading drugs against the PD-1/PD-L1 checkpoint (nivolumab and pembrolizumab) have been applied clinically. Nevertheless, effective and safe drugs to recover HLA class I antigen expression remain under investigation, and this may be a major target for future studies [29].
In conclusion, PD-L1 positivity and low HLA class I antigen expression have no significant impact on prognosis when analyzed alone; only when analyzed together do they yield a significant synergistic impact on prognosis in patients with HCC. This conclusion supports a combination immunotherapy strategy of inhibiting PD-1/PD-L1 and recovering HLA class I antigen expression for patients with HCC.
Declarations
Availability of data and materials: The data generated during the current study are available from the corresponding author on reasonable request to any scientist wishing to use them for non-commercial purposes. The clinical data may be made available without the private data of the participants in the current study.
Conflict of interest statement: The authors declare that they have no conflict of interest.
Figure 2
Immunohistochemical staining intensity of HLA class I in HCC specimen (a for negative, b for weak, c for moderate, d for strong).
Figure 3
Tumor-infiltrating lymphocytes (black arrows) in the presence of PD-L1 (a) and in the absence of HLA class I (b). | 2020-10-28T18:30:46.827Z | 2020-09-22T00:00:00.000 | {
"year": 2020,
"sha1": "080ffd4f92267253e5ca1d961800daea1178ee7e",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-72206/v1.pdf?c=1602694730000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "747538f4b55c32a76c731ad44df24f9dc7acbd3b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
103436186 | pes2o/s2orc | v3-fos-license | Effect of pH conditions on the depolymerization of Wucaiwan coal by mixed acids/ultrasound method and the product structures and performance
The cleavage of the aliphatic chains or ether bonds connecting the polycyclic aromatic hydrocarbons in coal can be achieved not only by hydrogenation reduction but also by oxidative acid treatment. In this paper, coal samples from Wucaiwan in Xinjiang were pretreated with HNO3 followed by mixed acids/ultrasound treatment. The depolymerized coal samples obtained under different pH conditions were then separated by fractional washing. The structures and properties of the resulting coal samples were studied by elemental analysis, FT-IR, XRD, TG-DTA, TEM, UV-Vis, and PL. The results showed that at pH = 0.012 the obtained coal samples were fragments stripped off the raw coal by ultrasound under strongly acidic conditions, aliphatic hydrocarbons linked with oxygen-containing groups such as the nitro group, a small amount of small aromatic molecules, and mineral salts; at pH = 1.99-4.09 the obtained coal samples were polycyclic aromatic hydrocarbons linked with oxygen-containing groups such as the nitro group, possessing annulus walls of multilayer graphene fragment structures built up by sp2 carbons, and they are typical fluorescent substances with a carbon nanoparticle structure. The former has no solubility in organic solvents, while the latter can be well dissolved in polar solvents such as acetone. All the depolymerized coal samples obtained under different pH conditions exhibited good absorption and fluorescence emission ability.
Introduction
During coal gasification and liquefaction, hydrogen can cleave the alkane chains or ether bonds between the macromolecules in coal structures in the presence of a catalyst to yield liquid hydrogenated products (Chen et al. 2015; Li et al. 2015; Guo et al. 2015). The cleavage of these weak bonds can be achieved not only by hydrogenation reduction but also by acid oxidation. For instance, Zhao et al. treated Dahuangshan bituminous coal samples with H2SO4/HNO3 (3:1) mixed acids, solubilized the treated samples in N,N-dimethylformamide (DMF), mixed them with polyacrylonitrile (PAN) as a spinning solution, and produced coal-based carbon fibres by electrospinning; Hu et al. used Shanxi bituminous coal as the carbon source, treated it with nitric acid, and prepared coal-based fluorescent carbon dots (Hu et al. 2014). Undoubtedly, treating coal with oxidative acids to destroy the macromolecular structure of coal, break particular chemical bonds, and further prepare coal-based high value-added products is an effective way of realizing efficient utilization of coal resources (Hu et al. 2016). However, in the current state of research, the main purpose of acid treatment of coal is to remove the minerals and sulfur in coal and loosen its cross-linked structure so that conversion processes such as pyrolysis, gasification, liquefaction, and solvent extraction can proceed efficiently (Soneda et al. 1998; Li et al. 2004a; Zhang et al. 2007; Izquierdo et al. 2011; Manoj and Narayanan 2013; Li et al. 2004b), and little has been reported on the preparation of specific products from acid-treated coal.
Therefore, aiming at preparing coal-based carbon nanoparticles by depolymerizing coal, our research group treated Xinjiang Wucaiwan coal samples with a mixed acids/ultrasound method and found, for the first time, that varying the system pH during the washing step results in different degrees of depolymerization, product structures, product dispersibility in organic solvents, and optical properties of the product aqueous dispersions. Our results not only present a method for preparing coal-based carbon nanoparticles but also help reveal coal structure at the molecular level, and at the same time provide useful information for developing techniques for the mild conversion of coal.
Materials and reagents
The coal sample was collected from the Wucaiwan coal mine in the Zhundong coalfield in Xinjiang, China. It was ground to pass a 200 mesh screen and dried at 105°C for 4 h before use. Dialysis bags (Spectrumlabs, molecular weight cut-off 100-500) were purchased from Shanghai Toscience Biotechnology Co., Ltd. H2SO4, HNO3, DMSO, DMF, acetone, and chloroform (CHCl3) were of analytical grade and commercially available.
Depolymerization of coal samples
A weighed amount of coal sample powder passing through a 200 mesh screen was added to 2.6 M HNO3 aqueous solution. The mixture was stirred at room temperature for 24 h, refluxed for another 24 h, and then cooled to room temperature. The remaining solid was separated from the mixture by centrifugation and air dried. A weighed amount of the dried solid was transferred into a flask, soaked in a certain amount of mixed H2SO4/HNO3 acids (3:1 volume ratio), and sonicated at 60°C for 12 h. The solid/acid mixture was then transferred into a centrifuge tube with some distilled water added. Centrifugation was carried out at 10,000 rpm, and the supernatant was labeled No. 1. Distilled water was added to the centrifuge tube; after the precipitate was well dispersed by shaking the tube by hand, centrifugation was carried out again and the supernatant was labeled No. 2. These steps were repeated until the precipitate was completely depolymerized. The numbered supernatants (No. 1, No. 2, No. 3, and so on) were transferred into correspondingly numbered dialysis bags and dialyzed in water for 72 h, during which the water was constantly refreshed, until the pH was neutral. The solutions in the dialysis bags were the depolymerization products of the Wucaiwan coal sample under different pH conditions. They were dried under nitrogen flow and then weighed.
Structural characterization
A variety of instruments were used to characterize three batches of coal samples: the raw coal sample, the coal sample pretreated with HNO3, and the depolymerized coal samples obtained under different pH conditions. The contents of C, H, O, N, and S in all three coal samples were determined with a FLASHEA-PE2400 elemental analyzer. The ash content was estimated by analyzing the weight changes of the samples under programmed temperature using a Kaiyuan 5E-MAG6700 industrial analyzer. FT-IR and XRD characterizations were performed with a Bruker EQUINOX 55 Fourier transform infrared spectrometer and a MAC M18XHF22-SRA X-ray diffractometer, respectively. The thermal properties of the three samples were determined with a PE-DTA/1700 thermal analyzer. The aqueous dispersions of the depolymerized coal samples obtained under different pH conditions were diluted, dropped onto copper grids, and dried to allow observation of the particle morphology and structure with a Hitachi H-600 transmission electron microscope (TEM). The particle sizes of the aqueous dispersions were measured with a NANO-S90 laser particle size analyzer.
Properties of depolymerized coal samples
Depolymerized coal samples obtained under different pH conditions were dispersed in water and then diluted. The aqueous dispersions were transferred into cuvettes and measured with a Hitachi UV-3900H UV-Vis spectrophotometer for UV-Vis absorption spectra and with a Horiba Fluorolog-3 fluorescence spectrometer for emission spectra at different excitation wavelengths. The solubility of the depolymerized coal samples in organic solvents was observed by dispersing the samples in DMSO, DMF, acetone, and CHCl3.
Results and discussion
HNO3 pretreatment of coal samples
Table 1 lists the pretreatment method for the Wucaiwan coal sample and the yield, elemental analysis, and ash analysis results of the treated samples.
Elemental analysis showed that after pretreatment with HNO3, ash in the coal was removed and the S content decreased, while the O and N contents increased.
To study the structural changes of the coal sample before and after HNO3 pretreatment, FT-IR, XRD, and TG-DTG analyses were performed on the three samples in Table 1, and the results are shown in Figs. 1 and 2. Figure 1a shows that the FT-IR spectrum (curve 1) of the Wucaiwan coal sample is consistent with that of other coal samples (Saikia et al. 2008; Manoj and Elcey 2010; Balachandran 2014). Three types of peaks were found: (1) the peaks at 2923 and 1376 cm-1 correspond to the aliphatic structure; (2) the peaks at 3050, 1585, 826, and 763 cm-1 are the characteristic peaks of aromatic rings; (3) the peaks at 3350, 1715, and 1250 cm-1 are assigned to oxygen-containing functional groups. The band between 400 and 600 cm-1 relates to the ash. Curve 2 is the FT-IR spectrum of the coal sample after HNO3 treatment at 25°C for 24 h. Its shape is almost the same as that of curve 1 except for a decrease of the band between 400 and 600 cm-1 and an increase of the peaks at 1715 and 1250 cm-1, indicating that treating the raw coal sample with HNO3 at room temperature aided the removal of ash and increased the oxygen-containing functional groups in the coal structure. Curve 3 is the FT-IR spectrum of the coal sample after 24 h of reflux. It shows that the intensities of the absorptions at 1715 and 1250 cm-1 increase further for sample 3. In addition, a weak band appeared between 3000 and 2500 cm-1 and a single peak showed up at 1538 cm-1, corresponding to the absorptions of the COOH and C-NO2 groups, respectively. Furthermore, both the O-H peak at 3350 cm-1 and the aliphatic peak at 2923 cm-1 decrease, the aliphatic peak at 1376 cm-1 disappeared, and the band between 400 and 600 cm-1 associated with the ash almost disappeared. The ultimate analysis and FT-IR results demonstrate that treating the Wucaiwan coal sample with HNO3 under reflux not only efficiently removed the ash in the coal but also introduced oxygen-containing groups such as the nitro group into the organic structure of coal and relatively reduced the content of aliphatic structures. Figure 1b shows that all three samples exhibited two distinct peaks at 26° and 43° in their X-ray diffraction (XRD) patterns, corresponding to the (002) and (100) planes of graphite (JCPDS 26-1079) (Hu et al. 2014). After treatment with HNO3, the (002) peak of samples 2 and 3 shifted slightly to higher angle compared with the raw coal sample (sample 1), gradually approaching the (002) plane of graphite at 26.6°, but the peak width and height showed a significant difference: the peak of sample 2, which was treated only at 25°C, became wider and shorter, while the peak of sample 3, which was treated with HNO3 under reflux, became narrower and higher. The results reveal that HNO3 treatment under reflux can not only lower the ash content of the coal structure but also reduce the content of aliphatic structures, thereby increasing the relative content of polycyclic aromatic rings (Taylor and Bell 1980; Takagi et al. 2004). The XRD results are consistent with the FT-IR results, which may be attributed to the partial breakdown, by HNO3 under reflux, of the aliphatic chains linking the aromatic units.
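As a quick sanity check on these peak positions, Bragg's law relates a diffraction angle to an interplanar spacing. Assuming the diffractometer used Cu Kα radiation (λ = 0.15406 nm), which the text does not state, the (002) positions quoted above translate as:

$$ d = \frac{\lambda}{2\sin\theta}, \qquad d_{002}(2\theta = 26^{\circ}) = \frac{0.15406\ \text{nm}}{2\sin 13^{\circ}} \approx 0.342\ \text{nm}, \qquad d_{002}(2\theta = 26.6^{\circ}) \approx 0.335\ \text{nm} $$

Under this assumption, the shift toward 26.6° after reflux treatment corresponds to the interlayer spacing closing toward the 0.335 nm of ideal graphite, consistent with the graphitization argument made here and with the 0.32 nm fringe spacing reported later from HRTEM.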
It has been reported that the weight loss of a coal sample below 150°C is due to the vaporization of water; the weight loss between 150 and 300°C corresponds to the volatilization of light components and the decomposition or dehydration condensation of reactive groups such as carboxyl and hydroxyl groups; the weight loss between 300 and 500°C is mainly the pyrolysis of active bridge bonds in the coal, a devolatilization process in which tar is produced; and a second pyrolysis occurs between 600 and 800°C, mainly a process of condensation of aromatic structures and dehydrogenation to produce semicoke (Shi et al. 2013). Therefore, the first weight loss peak of the raw coal sample, at 75°C (Fig. 2a), is generated by the loss of water, and the second peak, at 445°C, is the devolatilization weight loss peak during pyrolysis. As the temperature increases, especially above 600°C, polycondensation dominates and the weight loss becomes more gradual.
As seen in Fig. 2b, c, sample 2, obtained by HNO3 treatment at 25°C, and sample 3, obtained by HNO3 treatment under reflux, first showed water-loss peaks at 82 and 69°C, respectively, similar to the behavior of the raw coal sample. Second, samples 2 and 3 showed major weight loss peaks at 284 and 293°C, respectively, corresponding to the volatilization of small molecular compounds and the decomposition or dehydration condensation of oxygen-containing groups such as carboxyl and hydroxyl groups. This is distinctly different from the weight loss behavior of the raw coal sample and indicates that treatment with oxidative HNO3 introduced a large number of active oxygen-containing functional groups into the coal structure. Note that within this range sample 3 has lower temperatures for the water-loss peak and the decomposition peak of oxygen-containing groups than sample 2, demonstrating that the content of oxygen-containing groups in sample 3 is higher than in sample 2, because oxygen-containing groups not only facilitate the decomposition and dehydration condensation of coal but also induce the pyrolysis of organic matter and decrease the pyrolysis temperature (Fletcher et al. 2007; Ruiz et al. 2006). Moreover, the oxygen-containing groups in samples 2 and 3 were largely removed by the first pyrolysis, resulting in a higher dehydrogenation-condensation coking temperature than that of the raw coal sample during the second pyrolysis; therefore, both samples exhibited weight loss peaks between 500 and 600°C. In addition, the weight loss ratio of sample 2 over the 0-1000°C range is higher than that of sample 3, mainly because of the initial loss of water.
Depolymerization of pretreated coal samples by mixed acids/ultrasound
A certain proportion of mixed H2SO4/HNO3 acids (3:1 volume ratio) was added to the coal sample pretreated with HNO3, and the mixture was sonicated at 60°C for 12 h. After sonication, the acid/coal mixture was washed with purified water several times. Figure 3 shows photos of the washing solutions obtained by washing the above sample (first treated with HNO3 and then by mixed acids/ultrasound for 12 h) with 100 mL of purified water 10 times (a), and of the aqueous dispersions of the depolymerized coal samples after acid removal by dialysis (b). Both photos were taken under visible light. From the colors of the washing solutions in Fig. 3 it can be inferred that, when the mixture is washed to different pH values, the amount and structure of the depolymerized coal samples dispersed in water may differ. FT-IR, fluorescence spectroscopy, and particle size distribution measurements, among others, were performed on the group (b) aqueous dispersions with darker colors (1′, 4′-8′). The results showed that samples 4′-8′ had similar particle sizes and structures and essentially the same luminescence behavior, while the FT-IR spectrum, particle size, and luminescence behavior of sample 1′ were distinctly different from those of samples 4′-8′. Therefore, we pooled samples 4′-8′ for analysis and compared the results with sample 1′. Table 2 lists the yields of the water-soluble depolymerized coal samples (labeled samples 4 and 5) obtained by air drying sample 1′ and the pooled solution of samples 4′-8′, together with their elemental analysis results.
The results in Table 2 show that 56.1% of the coal sample pretreated with HNO3 and sonicated in mixed acids for 12 h depolymerized during the first washing (the pH of the washing solution was 0.012). However, after dialysis the color of the washing solution became significantly lighter, and the yield of the water-soluble product obtained by air drying the washing solution was quite low, only 18.7%. The elemental analysis indicated that the carbon content of this depolymerized coal sample was relatively low, meaning that 81.3% of the depolymerized product obtained under strongly acidic conditions consists of low-molecular-weight fluidic substances, or short-chain small molecules and carbonaceous gas produced by the disruption of the aliphatic structures by strong acid. As the number of washings increased, the system pH gradually rose, and the degree of depolymerization changed from almost zero at pH = 0.85-1.21 to substantial depolymerization at pH = 1.99-4.09; at this stage little material was removed by dialysis, and the yield of the water-soluble product reached 64.6% (the color of the washing solution obtained at this pH turned slightly lighter, but this was caused by water penetrating the dialysis bag and diluting the solution during dialysis). Moreover, the carbon content of the product increased, indicating that the content of aromatic structures in the depolymerized coal samples obtained at pH = 1.99-4.09 is relatively high. (Table 2 footnotes: b, the ratio of the mass difference of the undepolymerized samples before and after mixed acids/ultrasound treatment to the mass before treatment; c, the ratio of the mass of the dry matter after dialysis to the mass of the sample before mixed acids/ultrasound treatment; d, by difference.)
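To make the yield definitions in the Table 2 footnotes concrete, the sketch below computes them from hypothetical masses; the numbers are invented for illustration and are not the study's raw data.

```python
# Hedged sketch of the yield definitions in the Table 2 footnotes.
# All masses (grams) are invented for illustration only.

m_before = 10.0              # sample mass before mixed acids/ultrasound
m_sediment = 4.39            # undepolymerized sediment recovered afterwards
m_dry_after_dialysis = 1.05  # dry matter recovered from the dialyzed solution

# Footnote b: (mass loss of the sample) / (mass before treatment),
# i.e. the fraction that depolymerized into the wash water.
depolymerized = (m_before - m_sediment) / m_before
print(f"depolymerized (footnote b): {depolymerized:.1%}")

# Footnote c: (dry matter after dialysis) / (mass before treatment),
# i.e. the water-soluble yield that survives dialysis.
water_soluble = m_dry_after_dialysis / m_before
print(f"water-soluble yield (footnote c): {water_soluble:.1%}")

# The 18.7% / 81.3% split quoted in the text sums to 100%, so those two
# figures appear to share the depolymerized mass as their denominator:
retained = m_dry_after_dialysis / (m_before - m_sediment)
print(f"retained fraction of depolymerized matter: {retained:.1%}")
print(f"lost as low-molecular-weight fluids/gas: {1 - retained:.1%}")
```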
Structures of depolymerized coal samples obtained under different pH conditions
To identify the structures of the depolymerized coal samples obtained under different pH conditions, we analyzed the two samples in Table 2 by FT-IR, XRD, TG-DTG, and TEM, among other techniques, and the results are shown in Figs. 4, 5, and 6. As seen in Fig. 4a, the depolymerized product obtained under the strongest acid condition (curve 4) showed characteristic peaks of several functional groups besides the peaks at 3050, 1587, 824, and 779 cm-1 corresponding to the absorption bands of aromatic structures: the peak at 2921 cm-1 corresponds to the absorption band of aliphatic structures; the peaks at 1536, 1341, 634, and 607 cm-1 are attributed to the bending vibration bands of the C-NO2 and C-N=O groups and the cis- and trans-O-N=O groups linked to the aromatic compounds; the absorption band of the COOH group appeared between 3000 and 2100 cm-1; the two peaks at 1214 and 1073 cm-1 are indicative of the C-O-C asymmetric and symmetric stretching vibration bands; the shoulder at 1720 cm-1 is associated with the absorption band of the C=O group; the peak at 1410 cm-1 corresponds to the absorption band of the sulfate group R-O-SO2-OR'; the absorption band of the Si-O group shows up as a strong peak at 1073 cm-1; and the bands at 1000-900 and 506-424 cm-1 are assigned to the absorption bands of SO42-. These results reveal that the depolymerized product obtained under the strongest acid condition contains not only aromatic compounds linked with oxygen-containing groups, including carboxyl, nitro, carbonyl, and sulfate groups, and aliphatic compounds, but also mineral salts such as sulfates and silicon oxides, which may be produced by the reaction of the minerals in the coal structure with the mixed acids. Curve 5 is the FT-IR spectrum of the depolymerized product obtained at pH = 1.99-4.09, whose characteristic peaks are summarized as follows: the peaks at 3076 and 740 cm-1 correspond to the absorption band of the -CH group in aromatic compounds; the in-plane bending vibration band of the aromatic -CH group appears at 1223 cm-1; the absorption band of the COOH group shows up between 3000 and 2300 cm-1; the peak at 3371 cm-1 is assigned to the absorption band of the -OH group; the peak at 1725 cm-1 is the characteristic peak of the carbonyl group C=O; the peak at 1612 cm-1 is attributed to the absorption band of the aryl ketone group; the two peaks at 1538 and 1345 cm-1 correspond to the asymmetric and symmetric stretching vibration bands, respectively, of the nitro group linked to the aromatic compounds; the peaks at 667 and 600 cm-1 are associated with the bending vibration band of the O-N=O group in aromatic compounds; and the absorption bands of sulfur-containing groups appear at 909, 536, and 467 cm-1. The results indicate that the depolymerized product obtained at pH = 1.99-4.09 consists of aromatic compounds linked with more oxygen-containing groups, including hydroxyl, carboxyl, ester, carbonyl, nitro, and sulfonic groups.
It can be seen from Fig. 4b that both samples exhibited diffraction peaks between 15° and 40° corresponding to the (002) plane of graphite; the peak of sample 4 is wider, with many unidentified peaks, indicating that the content of aliphatic structures and minerals in the depolymerized product obtained at pH = 0.012 is high (Taylor and Bell 1980; Takagi et al. 2004). However, the (002) peak of sample 5 is higher and narrower than that of sample 4, and most of the unidentified peaks disappeared, indicating that the content of aromatic structures in the depolymerized product obtained at pH = 1.99-4.09 increases and the molecular orientation between aromatic planes is more ordered, i.e., the coal structure is gradually graphitized (Zou et al. 2008).
The weight losses of the depolymerized coal samples obtained at pH = 0.012 and pH = 1.99-4.09 at 1000°C are 72.1% and 52.0%, respectively, as shown in Fig. 5, and the weight loss of the latter was largely caused by the loss of water at low temperature, indicating that the depolymerized coal samples obtained at pH = 1.99-4.09 are structurally more stable. In addition, the maximum weight loss of sample 4 (Fig. 5a) basically occurred below 290°C; above 290°C the curve smoothed and the weight loss rate decreased. According to the reference mentioned above (Shi et al. 2013), weight loss below 290°C corresponds to the volatilization of small molecular compounds and the decomposition or dehydration condensation of oxygen-containing groups such as carboxyl and hydroxyl groups, demonstrating that the depolymerized coal samples obtained under the strongest acid condition consist mainly of small molecular substances linked with oxygen-containing groups, with a relatively low content of aromatic structures. For sample 5 (Fig. 5b), besides the weight loss peak at 291°C, indicative of the decomposition or condensation of oxygen-containing groups such as carboxyl and hydroxyl groups, a weight loss peak appeared between 500 and 600°C corresponding to the condensation of aromatic structures and the production of semicoke by dehydrogenation, indicating again that the depolymerized coal samples obtained at pH = 1.99-4.09 are mostly aromatic substances linked with oxygen-containing groups.
Particles of all sizes and shapes were obtained after dialysis of the depolymerized coal samples obtained under the strong acid condition (pH = 0.012) to remove acid and low-molecular-weight fluidic substances (Fig. 6a), with a particle size distribution (Fig. 6c) between 1 and 100 nm. The high-resolution TEM (HRTEM) images show that most of the depolymerized coal sample particles obtained under the strong acid condition possess irregular structures similar to that of amorphous carbon.
Spherical particles with uniform shapes were obtained after dialysis of the depolymerized coal samples obtained at pH = 1.99-4.09 (Fig. 6b), and the percentage of particles with sizes (Fig. 6d) between 2 and 10 nm is as high as 79.15%. HRTEM images further show that these spherical particles are ring-like (annular) structures whose annulus walls have lattice fringes similar to the graphite structure (see inset), with lattice spacings of 0.32 nm and 0.21 nm, corresponding to the (002) and (100) planes of graphite, respectively (Zhao et al. 2008).
Coal is a natural polymer. The two-phase model proposed by Marzec et al. states that coal molecules contain a large number of polycyclic aromatic hydrocarbons linked by aliphatic chains or ether bonds, forming the macromolecular phase; the second phase is composed of smaller aromatic molecules distributed in the gaps of the first phase (Marzec 1986). This is very similar to the coal structure model proposed by Shinn in 1984 on the basis of the products of first- and second-stage coal liquefaction (Shinn 1984). This type of structure explains the extraction of small molecules from coal by organic solvents and the swelling of coal in certain organic solvents, and it is therefore the most authoritative theory of coal structure. Taking the two-phase model as the initial coal structure studied in this paper and combining the results of the FT-IR, XRD, TG-DTG, TEM, and particle size distribution analyses, we depict the process of depolymerization of coal under the action of mixed acids and ultrasound as in Fig. 7.
Under the action of mixed acids and ultrasound, the raw coal sample was shattered, and the aliphatic structures or ether bonds of the coal were disrupted first. The aliphatic hydrocarbons linked with oxygen-containing groups such as the nitro group, the mineral salts, and the small aromatic molecules distributed in the gaps of the first phase then dissolved into the water phase when the system was washed to pH = 0.012. As the number of washings increased (pH = 1.99-4.09), the rest of the sediment depolymerized, and the depolymerized coal samples were composed only of aromatic compounds linked with more oxygen-containing groups, including hydroxyl, carboxyl, ester, carbonyl, nitro, and sulfonic groups, possessing annulus walls of multilayer graphene fragment structures built up by sp2 carbons, since the polycyclic aromatic hydrocarbons that have lost their aliphatic chains or ether bonds are more easily oxidized once the ratio of mixed acids to water reaches a certain value (Ross et al. 1986). They are typical fluorescent substances with a carbon nanoparticle structure (Eda et al. 2010).
Properties of depolymerized coal samples
Figure 8 shows the absorption and fluorescence emission spectra of sample 1′ (Fig. 8a) and of the pooled solution of samples 4′-8′ (Fig. 8b). It can be seen from Fig. 8 that the depolymerized coal samples obtained by mixed acids/ultrasound treatment all exhibited good absorption and emission ability.
The UV-Vis absorbance spectra show that sample 1′ has broad absorption bands in the ultraviolet-visible region with a shoulder at 340 nm, which may be related to the presence of aliphatic structures and small aromatic molecules linked with oxygen-containing groups such as the nitro group, to the complex composition of sample 1′, including raw coal fragments and minerals, and to the uneven distribution of particle sizes. When sample 1′ was excited with light of 340-440 nm wavelength, two maxima appeared in its fluorescence emission spectrum. The first lies between 416 and 506 nm; this peak shifted to longer wavelength and its intensity decreased gradually as the excitation wavelength increased. The second emission peak is located at around 550 nm; its maximum wavelength did not shift with the excitation wavelength, but its intensity first increased and then decreased. When the excitation wavelength exceeded 440 nm, the fluorescence emission became a single peak, which red-shifted as the excitation wavelength increased, with an intensity that first increased and then decreased. The optimal excitation wavelength was around 500 nm, where the strongest emission peak appeared at around 550 nm.
The pooled solution of samples 4′-8′ contains mainly polycyclic aromatic hydrocarbons with large π bonds, linked with more heteroatom chromophores, including C=O, C=N, C=S, N=N, N=O, and -NO2, which may give rise to π → π* and n → π* transitions simultaneously (Shull 1964); they therefore have broad absorption bands in the ultraviolet and visible regions. When the pooled solution of samples 4′-8′ was excited with light of 360-540 nm wavelength, a single peak appeared in the fluorescence emission spectrum, whose intensity first increased and then decreased and which red-shifted as the excitation wavelength increased; the optimal excitation wavelength was around 520 nm, where the strongest emission peak appeared at around 560 nm. The fluorescence behavior is essentially the same as that of other typical luminescent carbon nanoparticles: excitation-wavelength-dependent, with multiple excitations and multiple emissions (Hana et al. 2009; Zhu et al. 2012; Liu et al. 2012).
In addition, upon exposure to 365 nm light from a portable UV lamp, sample 1′ exhibited blue-green fluorescence and the collected solution of samples 4′-8′ exhibited blue fluorescence (inset in Fig. 8). Meanwhile, using quinine sulfate as reference, the fluorescence quantum yields of sample 1′ and the collected solution of samples 4′-8′ were measured to be 6.79 and 17.32%, respectively, based on the reference method. The value of 17.32% is higher than the quantum yield of any other coal-based carbon dots reported to date (Hu et al. 2014).
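For readers who wish to reproduce this kind of estimate, the relative (reference) method mentioned above can be expressed as a short calculation. The sketch below is a minimal illustration assuming the standard single-point relative quantum-yield formula; all numeric inputs are hypothetical placeholders, not values from this study.

```python
# Relative fluorescence quantum yield against a reference standard
# (e.g., quinine sulfate); a minimal sketch of the reference method.
# QY = QY_ref * (I/I_ref) * (A_ref/A) * (n^2 / n_ref^2)

def relative_quantum_yield(I, A, n, I_ref, A_ref, n_ref, qy_ref=0.54):
    """I: integrated emission intensity, A: absorbance at the
    excitation wavelength, n: solvent refractive index."""
    return qy_ref * (I / I_ref) * (A_ref / A) * (n ** 2 / n_ref ** 2)

# Hypothetical example values, for illustration only:
qy = relative_quantum_yield(I=1.3e6, A=0.05, n=1.33,
                            I_ref=4.0e6, A_ref=0.05, n_ref=1.33)
print(f"QY = {qy:.2%}")  # -> QY = 17.55% with these made-up numbers
```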
The solubility of the depolymerized coal samples obtained at pH = 0.012 (sample 4) and pH = 1.99-4.09 (sample 5) in DMSO, DMF, acetone, and CHCl3 was measured, as shown in Table 3.
Coal molecules are composed of large aromatic building blocks linked by aliphatic chains or ether bonds; thus, coal is considered a cross-linked polymer, which can only swell in organic solvents but cannot dissolve (Bazylyak et al. 2007). From the structural characterization results above, we know that the coal structure underwent a fundamental change after the mixed acids/ultrasound treatment. First, under the action of ultrasound in mixed acids, the ether bonds in the structure of the HNO3-pretreated coal sample may be protonated and ruptured by attack from nucleophiles such as the nitro group. Second, the strongly oxidative mixed acids, with the help of ultrasound, may directly destroy the aliphatic chains that link the aromatic building blocks, so that the aromatic building blocks are no longer cross-linked and separate from each other. Third, oxidative groups may be produced on the edges of the aromatic building blocks owing to the strong oxidative effect on the aromatic hydrocarbons in the coal structure. This sequence of changes can evidently make the coal sample depolymerize and dissolve in organic solvents. However, in terms of the experimental results, only the depolymerized coal samples obtained at pH = 1.99-4.09 have good solubility in DMSO, DMF, and acetone, while those obtained at pH = 0.012 cannot dissolve in these solvents (Table 3). Therefore, taking the characterization results of the depolymerized coal samples into consideration, it was concluded that only when the pH of the coal-acid system was 1.99-4.09 could the oxygen-containing aromatic building blocks in the treated coal samples be disintegrated into independent aromatic units, whereas when the pH of the coal-acid system was 0.012, the obtained depolymerized coal samples mainly contained disrupted small-molecule aliphatic chains, coal fragments, and minerals.
In addition, oxygen-containing groups make the aromatic fragments very polar; therefore, the depolymerized coal samples obtained at pH = 1.99-4.09 can be well dissolved in polar solvents such as DMSO, DMF and acetone, but cannot be dissolved in low polarity solvents such as CHCl 3 .
Conclusions
Under the action of mixed acid and ultrasound, 56.1% of the coal samples pretreated with HNO3 depolymerized (pH of the coal-acid system was 0.012). Of this depolymerized material, 81.3% consists of low-molecular-weight fluidic substances, or short-chain small molecules and carbonaceous gas obtained from the disruption of the aliphatic structures by the strong acid, and 18.7% consists of aliphatic compounds linked with oxygen-containing groups including carboxyl, nitro, carbonyl, and sulfate groups, small aromatic molecules, and minerals such as sulfate salts and silicon oxides. When the undepolymerized sediment was washed further to pH = 0.85-1.21, it basically did not depolymerize; when washed to pH = 1.99-4.09, a large fraction of the coal samples depolymerized, and the yield of water-soluble substances reached 64.6%. Analysis shows that the depolymerized coal samples obtained at pH = 1.99-4.09 have a single component consisting of polycyclic aromatic hydrocarbons linked with oxygen-containing groups such as the nitro group, can be well dissolved in polar solvents such as acetone, and give aqueous solutions exhibiting good absorption and fluorescence emission ability; they are typical graphitized fluorescent substances with a carbon nanoparticle structure.
Through this study, we not only confirmed the structural model of coal put forward by Marzec, but also believe that it is feasible to treat coal with oxidative acids to destroy the macromolecular structures of coal, break down particular chemical bonds, and further prepare coal-based high value-added products.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-04-09T13:06:31.360Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "7be61955f035347182e7f2be5ca25414300c8ce1",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40789-017-0183-0.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1df2f16496973370ddab6f0e5b4d0d7253e97659",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
224957810 | pes2o/s2orc | v3-fos-license | A Discriminative Multi-Output Gaussian Processes Scheme for Brain Electrical Activity Analysis
The study of brain electrical activity (BEA) under different cognitive conditions has attracted a lot of interest in the last decade due to the high number of possible applications that could be generated from it. In this work, a discriminative framework for BEA via electroencephalography (EEG) is proposed based on multi-output Gaussian Processes (MOGPs) with a specialized spectral kernel. First, a signal segmentation stage is executed, and the channels from the EEG are used as the model outputs. Then, a novel covariance function within the MOGP, known as the multi-output spectral mixture (MOSM) kernel, allows us to find and quantify the relationships between different channels. Several MOGPs are trained on different conditions grouped in bi-class problems, and the discrimination is performed based on the likelihood score of the test signals against all the models. Finally, the mean likelihood is computed to predict the correspondence of new inputs with the existing models of each class. Results show that this framework models the EEG signals adequately using generative models and allows analyzing the relationships between channels of the EEG for a particular condition. At the same time, the set of trained MOGPs is well suited to discriminate new input data.
Introduction
From the neuroscience perspective, each physiological or cognitive process produces a particular pattern of electrical interactions linking neurons from different brain regions. Therefore, the electrical response related to the interactions between neuron assemblies allows studying brain function. Under such a perspective, neuroscientists have found brain electrical activity (BEA) patterns associated with motor functions, cognitive processes, and neuropathologies [1]. However, the wide range of possible mental conditions hampers BEA analysis and poses very challenging tasks [2]. In particular, capturing BEA by placing a set of electrodes over the scalp, known as electroencephalography (EEG), gathers and amplifies currents reflected in the brain cortex from all possible brain sources, yielding a mixture of latent activity sources at each channel. For discovering such latent activity, the literature considers different types of EEG analyses, ranging from time and spectral domain processing [3,4], through connectivity measures between channels [5], to complex network analysis [6].
Due to the non-stationary behavior of EEG data, classical temporal analyses cannot decode differences among several mental states or conditions. Besides, several studies demonstrate that variations in the EEG oscillatory patterns play a fundamental role in the maintenance of brain functions and the identification of different neural conditions [7]. Hence, spectral decomposition approaches look for relevant information in the brain rhythms (namely alpha, beta, theta, delta, and gamma). For instance, performing working memory or creative tasks evokes discriminative oscillatory patterns [8,9]. In addition, neuropathologies, such as Alzheimer's disease, cause abnormal cortical neural synchronization in resting-state rhythms [10]. The general spectral analysis procedure consists of a channel-wise frequency band splitting, followed by an identification of relevant time intervals, usually supervised by a specialist. The subsequent stages compute descriptors from the time-frequency representation. The power spectral density (PSD) and spectral entropy are among the most considered descriptors due to their straightforward interpretation [11,12]. Although EEG frequency analysis has proven to extract useful information for brain function understanding, the channel-wise approach still lacks interpretability for several cognitive processes, because each channel only holds a reflected version of the BEA from a neural assembly [13]. Besides, the low spatial resolution of EEG restricts the extracted information about the behavior of some brain regions involved in high-level cognitive or physiological processes [4].
In recent years, connectivity analysis techniques account for channel interdependencies to enhance EEG spatial resolution through functional relationships captured in the BEA. Metrics from several domains attempt to quantify different properties of the BEA. Specifically, the coherence (COH), phase value index (PVI), and phase-locking value (PLV) capture pair-wise channel dependencies in the frequency domain. Moreover, the latter two gained attention due to their non-linear capability for unraveling latent connectivity patterns, which has proven valuable for applications such as motor imagery and emotion recognition [14,15]. Nevertheless, the selection of the connectivity measure is not entirely straightforward. Their dependence on an estimated cross-spectrum, together with subject data variability, yields low generalization capability in a wide range of scenarios [16,17].
The above issues demand reliable brain connectivity approaches that automatically identify the relevant spectral information despite the inherent EEG uncertainty. In this regard, a probabilistic modeling framework, such as Gaussian Processes (GPs), along with an appropriate covariance function, possesses the capability for characterizing the latent processes in BEA data [18,19]. Moreover, the extension to vector-valued processes, or multi-output GPs (MOGPs), adjusts the probabilistic model to map inputs into a multidimensional output space such as an EEG channel array [20,21]. Former GP applications to EEG analysis in stress detection and cognitive stimulus recognition demonstrated the potential of GPs for BEA data modeling [22,23]. Furthermore, a recently proposed covariance function, known as the multi-output spectral mixture (MOSM), analyzes dependencies with spectral information for multidimensional output processes [24]. The novelty of the proposal lies in the PSD design, which ensures the frequency constraints for real-valued signals. Additionally, the inverse Fourier transform of the MOSM covariance results in a temporal kernel satisfying positive-definiteness conditions that are otherwise harder to guarantee during kernel design.
In this work, we propose the extension of MOGPs with an MOSM kernel for BEA discrimination, termed MOSM-GP. To this end, we learn an MOSM-GP for each EEG trial in a training set to model and quantify channel relationship patterns associated with particular EEG conditions. Then, we implement a likelihood measure to label new trial data into a specific class. The proposed framework of discriminative MOSM-GP, termed DMOSM-GP, is tested in two publicly available EEG datasets acquired under emotional [25] and motor imagery [26] conditions. The attained results show that the likelihood measure of testing data on the trained MOSM-GPs corresponds to a reliable discriminative index.
The manuscript is organized as follows: Section 2.2 describes the theoretical background of MOSM-GPs and introduces the proposed framework. Section 3 presents the attained results on both EEG modeling and classification performance. Finally, Section 4 concludes the work with the most relevant findings.
EEG Databases
This work considers two publicly available EEG datasets for testing the proposed DMOSM-GP methodology. Both datasets are widely used in the development of neuroscience techniques, as they hold challenging cognitive conditions. The contained EEG data allow testing the quantification of channel dependencies at the frequency level by the DMOSM-GP in binary classification experiments.
DEAP dataset "A Database for Emotional Analysis using Physiological data" (DEAP) contains EEG data from 32 subjects acquired under 40 emotion elicitation experiments, with one-minute recordings at 128 Hz and 32 channels distributed over the scalp [25]. Each participant rated the emotional stimulus at the end of a video, following two dimensions: arousal and valence.
Other scores, such as dominance and liking, were also reported, although arousal and valence are the most prominent dimensions for affective computing works. These dimensions characterize a more extensive range of emotions than just the classical categorical description in six basic emotions. For our experiments, we consider the classification valence dimension as either low (ranging from one to five) or high (from five to nine) valence. MI dataset The "Brain Computer Interface (BCI) competition 2008-Graz data set A" contains motor imagery experiments from nine subjects performing four specific tasks involving movements of hands and feet [26] under the motor imagery (MI) paradigm. The set of 22 EEG channels was band-pass filtered between 0.5 Hz and 100 Hz, and further down-sampled at 128 Hz. Each subject performed two experiment sessions, consisting of six runs and 48 six second-long trials per run and task. From the BCI dataset, we select the left-and right-hand movement imagination tasks for evaluating the proposed discriminative framework in a subject-wise scheme.
Cross-Spectral Estimation from Kernel Mixtures
The communication between neuron cells is the basis of every neuronal processing task. The electrical impulses result in every possible cognitive or physiological condition, such as behavior, sensation, thoughts, and emotions [27]. Due to equally measuring normal and abnormal BEA, EEG is considered a well-suited neuroimaging technique for diagnosis, treatment, and clinical procedures across several neurological pathologies [1]. Equation (1) presents the mathematical representation of the BEA from an EEG of C channels holding T time instants [28].
$$X = \{x_i(t)\}_{i=1}^{C}, \quad t \in \{1, \dots, T\}, \qquad (1)$$

with t as the time sample positions of the recordings, and $X = \{x_i\}_{i=1}^{C}$ holding the brain electrical responses measured by the EEG array at channel i. To quantify the spectral content between channels, the introduced cross-spectrum estimation relies on specific covariance functions. Cramér's theorem states that a family of integrable functions $\{\kappa_{ij}(\tau)\}_{i,j=1}^{C}$ are the covariance functions of a weakly-stationary stochastic process if and only if they admit the representation

$$\kappa_{ij}(\tau) = \int_{\mathbb{R}} e^{\iota \omega \tau}\, S_{ij}(\omega)\, d\omega, \qquad (2)$$

where ι is the imaginary unit, each $S_{ij}: \mathbb{R} \to \mathbb{C}$ is an integrable, positive-definite complex-valued function, and i, j are the indices of two EEG channels. This relationship between covariance functions $\kappa_{ij}$ in the time domain with argument $\tau \in \mathbb{R}$ and their corresponding spectral densities $S_{ij}$ with argument ω in the Fourier domain allows designing a desired spectral density and obtaining a covariance function [24]. Now, a family $S = \{S_{ij}\}_{i,j=1}^{C}$ of positive-definite complex-valued functions, taking values in $\mathbb{C}^{C \times C}$, can be used as cross-spectral densities for multi-output data [24]. These functions are designed by including specific parameters that allow physical interpretation of the obtained covariance kernel with respect to the input data. Moreover, complex-valued positive-definite matrices can be decomposed in the form $S(\omega) = R^{H}(\omega) R(\omega)$, where $R(\omega) \in \mathbb{C}^{Q \times C}$, Q represents the rank of the decomposition, and $(\cdot)^{H}$ denotes the Hermitian operator. Since Fourier transforms and multiplications of squared exponential (SE) functions are also SE, the autocovariance function $R_i(\omega)$ of the i-th channel is modeled as the complex-valued SE in Equation (3):

$$R_i(\omega) = w_i \exp\left(-\frac{(\omega - \mu_i)^2}{4\sigma_i^2}\right) e^{-\iota(\theta_i \omega + \varphi_i)}. \qquad (3)$$
With such a choice of functions, the cross-spectral density between channels i and j is given by Equation (4):

$$S_{ij}(\omega) = R_i^{*}(\omega)\, R_j(\omega). \qquad (4)$$

Finally, in order to restrict the model to real-valued stochastic processes, the spectral density is reassigned to be symmetric with respect to ω via $S_{ij}(\omega) \rightarrow \tfrac{1}{2}\left(S_{ij}(\omega) + S_{ij}(-\omega)\right)$. Then, the inverse Fourier transform of the resulting cross-spectral density becomes the corresponding real-valued kernel in the temporal domain; the kernel and the symmetric version of the spectral density are presented in Equations (5) and (6), respectively:

$$\kappa_{ij}(\tau) = \alpha_{ij} \exp\left(-\frac{\sigma_{ij}^2 (\tau + \theta_{ij})^2}{2}\right) \cos\left((\tau + \theta_{ij})\,\mu_{ij} + \varphi_{ij}\right), \qquad (5)$$

$$\tilde{S}_{ij}(\omega) = \tfrac{1}{2}\left(S_{ij}(\omega) + S_{ij}(-\omega)\right), \qquad (6)$$
where the term $\alpha_{ij} = w_{ij}\sqrt{2\pi}\,|\sigma_{ij}|^{1/2}$ absorbs the constant resulting from the inverse Fourier transform. Equation (5) allows computing the real-valued autocovariances (i = j) and cross-covariances (i ≠ j), modeling negatively and positively correlated channels through the magnitude parameter $\alpha_{ij} \in \mathbb{R}$, delayed channels through the delay parameter $\theta_{ij} \neq 0$, and out-of-phase channels through the phase parameter $\varphi_{ij} \neq 0$. Moreover, increasing the rank of the decomposition Q corresponds to considering more components in the multi-output spectral mixture (MOSM) kernel, as shown in Equations (7) and (8):

$$\kappa_{ij}(\tau) = \sum_{q=1}^{Q} \alpha_{ij}^{(q)} \exp\left(-\frac{\sigma_{ij}^{(q)\,2} (\tau + \theta_{ij}^{(q)})^2}{2}\right) \cos\left((\tau + \theta_{ij}^{(q)})\,\mu_{ij}^{(q)} + \varphi_{ij}^{(q)}\right), \qquad (7)$$

$$S_{ij}(\omega) = \sum_{q=1}^{Q} S_{ij}^{(q)}(\omega), \qquad (8)$$
where the superindex (q) denotes the q-th spectral component. Then, MOSM effectively computes autocovariances and cross-covariances through the spectral mixture of positive-definite kernels obtained from the Fourier transform of the spectral functions $S_{ij}(\omega)$. In practice, the adjustment of the cross-spectrum parameters should be performed on the evidence of the EEG data.
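To make the kernel concrete, the following NumPy sketch evaluates a single-component version of the MOSM cross-covariance in Equation (5); the parameter values are hypothetical, and the exact parametrization should be checked against the original MOSM formulation [24].

```python
import numpy as np

def mosm_cross_cov(tau, alpha, sigma, theta, mu, phi):
    """Single-component MOSM cross-covariance between two channels
    (Eq. 5): a Gabor-like function of the time lag tau."""
    shifted = tau + theta
    envelope = np.exp(-0.5 * sigma**2 * shifted**2)
    return alpha * envelope * np.cos(shifted * mu + phi)

# Hypothetical parameters: magnitude, bandwidth, delay, center
# frequency (rad/s), and phase shift between two EEG channels.
tau = np.linspace(-1.0, 1.0, 256)                  # lags in seconds
k = mosm_cross_cov(tau, alpha=0.8, sigma=4.0,
                   theta=0.05, mu=2 * np.pi * 10.0, phi=0.3)
```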
Multi-Output Spectral Mixture Gaussian Process
Given an EEG trial, the Gaussian Process (GP) probabilistic framework computes the mixture parameters by maximizing the data likelihood as the cost function. A Gaussian Process is a real-valued stochastic process f(t) over an input set, such that for any finite subset of inputs $t \in \{1, \dots, T\}$ the random variables f(t) are jointly Gaussian [20]. Additionally, the GP is uniquely determined by its mean function $m(t) := \mathbb{E}[f(t)]$, typically assumed m(t) = 0, and its covariance function $\kappa(t, t') := \mathrm{cov}(f(t), f(t')) \in \mathbb{R}^{T \times T}$, known as the kernel.
Then, the multivariate extension of GPs is derived by assembling C different scalar-valued stochastic processes, one for each EEG channel. Any finite collection of values across all such processes is jointly Gaussian, yielding a multiple-output Gaussian Process (MOGP). This extension results in a vector-valued process $\mathbf{f} \sim \mathcal{GP}(\mathbf{m}, \mathbf{K})$, where $\mathbf{m}(t) \in \mathbb{R}^{TC}$ is the concatenation of the mean vectors associated with the outputs and $\mathbf{K} \in \mathbb{R}^{TC \times TC}$ is a block-partitioned matrix of the form [20]

$$\mathbf{K} = \begin{bmatrix} K(X_1, X_1) & \cdots & K(X_1, X_C) \\ \vdots & \ddots & \vdots \\ K(X_C, X_1) & \cdots & K(X_C, X_C) \end{bmatrix}, \qquad (9)$$

where each block $K(X_i, X_j)$ is a T × T matrix denoting the covariance between output channels i, j. Furthermore, a multivariate kernel specifies every block of this matrix. Therefore, we use the MOSM kernel in Equation (7) as the covariance function implemented within the MOGP. By defining the covariance of such a process through the MOSM kernel, the model adjustment to the data is performed by maximizing the data log-probability.
Since the observations in the multi-output case are jointly Gaussian, they are concatenated into the vector $\mathbf{y} = [x_1, x_2, \dots, x_C] \in \mathbb{R}^{CT}$ of observed channel values. Then, the negative log-likelihood (NLL) can be expressed as in Equation (10):

$$\mathrm{NLL}(\Theta) = \tfrac{1}{2}\, \mathbf{y}^{\top} \mathbf{K}_{\Theta}^{-1}\, \mathbf{y} + \tfrac{1}{2} \log \lvert \mathbf{K}_{\Theta} \rvert + \tfrac{CT}{2} \log 2\pi, \qquad (10)$$

with Θ holding the complete set of kernel parameters. As a result, minimization of the NLL with respect to Θ designs a spectral kernel quantifying the EEG channel relationships at automatically tuned frequency bands.
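A minimal NumPy sketch of Equation (10) is given below, using a Cholesky factorization for numerical stability; it assumes a zero-mean MOGP and a precomputed covariance matrix, and is illustrative rather than the authors' actual GPflow implementation.

```python
import numpy as np

def gp_nll(y, K, jitter=1e-6):
    """Negative log marginal likelihood of a zero-mean GP (Eq. 10).
    y: concatenated channel observations, shape (C*T,).
    K: covariance built from the MOSM kernel, shape (C*T, C*T)."""
    n = y.shape[0]
    L = np.linalg.cholesky(K + jitter * np.eye(n))       # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (0.5 * y @ alpha
            + np.log(np.diag(L)).sum()                   # = 0.5*log|K|
            + 0.5 * n * np.log(2.0 * np.pi))
```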
Discriminative Scheme Using MOSM-GP
Consider a set of N labeled BEA trials $\{\chi_n, l_n\}_{n=1}^{N}$, each belonging to either class A or B, that is, $l_n \in \{A, B\}$. In the case of the DEAP dataset, classes correspond to low and high valence, while for the MI dataset, left- and right-hand movement imagination are considered. As stated in Section 2.3, a single MOSM-GP models each BEA trial as the stochastic process $f_n^{(l)}$, resulting in $N_A$ and $N_B$ MOSM-GPs for classes A and B, respectively. Furthermore, on the evidence of a new BEA trial $X_*$, the marginal likelihood under each learned MOSM-GP is computed as

$$p\big(X_* \mid f_n^{(l)}\big) = \mathcal{N}\big(X_*;\; m_n^{(l)}(X_*),\; K(X_*, X_*)\big). \qquad (11)$$

By evaluating the marginal likelihoods on all trained MOSM-GPs, the new BEA trial label is estimated as

$$l_* = \arg\max_{l \in \{A, B\}} \mathbb{E}\big\{\, p(X_* \mid f_n^{(l)}) : l_n = l \,\big\}, \qquad (12)$$

where $\mathbb{E}\{\cdot : l_n = l\}$ denotes the expectation operator over training trials belonging to class l. Figure 1 illustrates the proposed discriminative MOSM-GP framework, termed DMOSM-GP.
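In code, the decision rule in Equation (12) amounts to averaging the per-model scores for each class and taking the maximum. The sketch below assumes per-model log-likelihoods are available and compares the class means in log-space (log-mean-exp), which is numerically stable and order-equivalent to comparing mean likelihoods; the values shown are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

def predict_label(loglik_A, loglik_B):
    """Eq. (12): pick the class whose trained MOSM-GPs give the
    higher mean likelihood on the new trial. The mean of likelihoods
    is computed as log-mean-exp of log-likelihoods."""
    log_mean_A = logsumexp(loglik_A) - np.log(len(loglik_A))
    log_mean_B = logsumexp(loglik_B) - np.log(len(loglik_B))
    return "A" if log_mean_A > log_mean_B else "B"

# Hypothetical per-model log-likelihood scores of one test trial:
print(predict_label([-310.2, -295.7, -301.4], [-330.9, -318.3]))
```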
Implementation Details
Before the model training stage, a channel selection is carried out to reduce the training computational cost. For the MI dataset, channels are selected based on the evidence that body movement triggers neural activity in the opposite brain hemisphere within the sensorimotor area. Regarding the DEAP dataset, channels related to brain regions more likely to participate in affective states are considered. Figure 2 depicts the subset of selected channels for both EEG datasets. Regarding the DMOSM-GP free parameter, the rank of the decomposition is chosen from a grid search within the range Q ∈ {1, . . . , 10} to minimize the mean absolute error (MAE) of the model prediction against the original EEG data. The GPflow framework is employed for the model definition [29], and the kernel function is optimized via minimization of the NLL cost function using the autograd library. Finally, for the statistical significance assessment of the classification performance, a 10-fold cross-validation scheme was applied.
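The rank selection described above can be sketched as a simple grid search; `train_fn` and `predict_fn` are hypothetical placeholders standing in for MOSM-GP fitting and posterior prediction, since the exact training code is not given in the paper.

```python
import numpy as np

def select_rank(train_fn, predict_fn, X, y, ranks=range(1, 11)):
    """Grid search over the MOSM rank Q in {1, ..., 10}, keeping
    the Q that minimizes mean absolute error (MAE) between the
    model's posterior prediction and the original EEG data."""
    maes = {}
    for Q in ranks:
        model = train_fn(Q, X, y)        # fit a MOSM-GP of rank Q
        y_hat = predict_fn(model, X)     # posterior mean at X
        maes[Q] = float(np.mean(np.abs(y_hat - y)))
    best_Q = min(maes, key=maes.get)
    return best_Q, maes
```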
Parameter Tuning and Spectral Modeling
To tune the rank of the decomposition Q, we evaluate the MOSM-GP performance for modeling the selected output channels from the data posterior distribution at specific temporal locations. Moreover, the mean absolute error (MAE) quantifies the difference between the target and predicted outputs as a function of the rank. Figure 3 presents the mean MAE across the GP outputs against the number of spectral components defined for the MOSM kernel. As a first insight, the MAE values evidence that the MOSM-GP effectively reconstructs the EEG recordings. Nonetheless, the error increases beyond six spectral components, due to the large number of kernel parameters to be tuned, which in turn increases the computational complexity without providing relevant information to the probabilistic model. On the contrary, a single component lacks the complexity to account for the brain activity changes. Therefore, a rank of decomposition between three and five most benefits the MAE performance, implying a balance between model complexity and generalization capability. Consequently, for the remainder of the work, we selected three spectral mixtures as the optimal Q for testing the MOSM-GP scheme. For the purpose of visualization, Figure 4 exemplifies the MOSM-GP output for channels FP1 and AF3 using Q = 3. As seen, the posterior MOSM distribution suitably models the EEG data at all time locations with bounded deviations.
An analysis of the spectral information quantified by the MOSM-GP is carried out using Equation (6), which decomposes the spectral content shared by two channels into Q terms. Figure 5 plots the component-wise spectral distribution between channels FC2 and FC6 in a trial from left- (Figure 5a) and right-hand movement (Figure 5b). The attained spectra prove that each component automatically fits a particular frequency band, with the component magnitude weighting the contribution of each band. Later, Equation (8) is used to compute the cross-spectrum, visualizing the covariances computed by the MOSM kernel for every channel pair. Since there is valuable information in the analysis of the cross-covariances between channel pairs, a complete trial visualization of the quantified spectral information is presented in Figure 6a,b for left- and right-hand movement trials, respectively. Each of the horizontal axis sections corresponds to a particular channel i and its MOSM PSD against all the channels. The vertical axis represents the frequency bin at which the connectivity is assessed. Then, lighter green colors relate to strong spectral densities shared between channels i, j, while darker blue colors relate to lower interdependencies. Although most of the strong spectral density is constrained to the [15, 40] Hz band, there is clear evidence that not all channels are synchronized at this frequency when performing MI activity. Particularly in Figure 6a, there are channel sections that seem to be uncorrelated across the complete spectrum for this particular task. For example, channels Cz, C2, and CP6 seem to share low frequency dependencies with the rest of the EEG array. On the other hand, channel CP2 presents strong connections with most of the channels. Furthermore, for the opposite MI task, there are variations in the captured frequency relationships among the EEG array; C2 and C4 result as the channels most highly correlated with the other channels when performing the MI task. Figure 5. Normalized power spectral density $S_{ij}(\omega)$ for a given pair of channels (FC2-FC6) from the MI database under two opposite conditions: top row, left-hand movement; bottom row, right-hand movement. Panels (a,b) show the spectral content per spectral component.
Discriminative MOSM-GP
For the DMOSM-GP framework, each EEG trial is trimmed into two-second time series in which to find the desired spectral relationships. Table 1 presents the absolute value of the average likelihood as the class dependency measure, as depicted in Equation (12). The first column corresponds to the test condition level regarding the emotional content (valence for this experiment), columns two and three are the values obtained for the model testing, and the fourth is the true tag of the corresponding test signal. Finally, the fifth column is the tag predicted from the magnitude of the mean test likelihood. Similarly, Table 2 presents the results of the DMOSM-GP strategy on the BCI database. The first column corresponds to the movement associated with the experiment, columns two and three are the average likelihoods obtained by testing the new input against the models of each class, and the fourth column is the resulting tag. Each row in Tables 1 and 2 corresponds to a particular trial of the emotional or motor imagery experiments.
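The two-second trimming step can be illustrated as below; the window length and channel count are taken from the datasets described earlier, while the helper itself is a hypothetical convenience, not code from the paper.

```python
import numpy as np

def segment_trial(eeg, fs=128, win_s=2.0):
    """Split a (channels, samples) EEG trial into non-overlapping
    two-second windows prior to MOSM-GP training."""
    win = int(fs * win_s)
    n_windows = eeg.shape[1] // win
    return [eeg[:, k * win:(k + 1) * win] for k in range(n_windows)]

# e.g., a 6 s MI trial with 22 channels at 128 Hz -> three segments
segments = segment_trial(np.random.randn(22, 6 * 128))
```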
From this test on the complete datasets, it can be seen that prediction based on the likelihood of the trained models given the test signals is promising. The accuracy on the test data is around 73.3% for the DEAP database and about 87.33% for the BCI database. Specifically, for comparison against works using the DEAP dataset, a selection of 9 subjects is performed. This selection is based on the evidence of an uncertainty test performed in [17], where the authors concluded that some experiments of the DEAP database were not adequately tagged by the subjects themselves. The implicit tagging scheme used in this particular database lets the subjects rate their affective result, so some of the acquired signals seem not to be correctly related to the emotional tests. The results reported in Table 3 show the accuracy resulting from the cross-validated DMOSM-GP scheme. The subject IDs reported are the same as in the database, and some high, state-of-the-art-comparable accuracies of around 78% were achieved, as in the case of subject 18.
On the other hand, for the BCI database, the complete set of 9 subjects is employed, and Table 4 reports the classification accuracy over 5 folds of DMOSM-GP. Higher accuracies were obtained for the MI database in the subject-dependent experiments, around 87% for subject ID 2. In general terms, the results for the BCI database are higher than for the DEAP database, which can be related directly to the condition being tested: in the case of emotional experiments there is a high degree of subjectivity in the elicitation of the states, whereas in the case of motor imagery the conditions are replicated somewhat more consistently. Table 1. DMOSM-GP test for brain electrical activity (BEA) analysis of emotional conditions of two classes. The mean absolute value of the likelihood of the test signals against the trained models is included ("A Database for Emotional Analysis using Physiological data" (DEAP) database). Boldface indicates the class with the highest log-likelihood.
Results Comparison
The results obtained with the DMOSM-GP strategy are compared with state-of-the-art results in terms of classification accuracy. However, since the proposed methodology uses generative models for discriminative purposes, the comparison should consider a few additional criteria beyond classification accuracy. As can be seen in Tables 5 and 6, the average classification results obtained by the DMOSM-GP strategy are competitive among state-of-the-art works. It should be noted that this work does not implement a preprocessing or feature extraction stage but uses the acquired data directly to train the MOSM-GPs and perform the discriminative task. [17].
Discussion and Concluding Remarks
Brain information processing is a complex task that is not yet entirely mapped and understood. Despite prior knowledge about brain region interactions in the motor and emotional domains, work that improves the interpretability of results in different BEA scenarios would lead to more precise frameworks for the analysis, diagnosis, and treatment of mental pathologies, among other tasks. In this work, we proposed a framework for discriminating BEA from raw data with the support of a spectral kernel that identifies relationships between channels within a probabilistic multi-output Gaussian process methodology. One of the essential remarks of this framework is its capability of learning EEG connectivity patterns by estimating the spectral components of raw data without a feature extraction stage. Further, this proposal of generative models working directly with EEG data has the advantage of adjusting the model through the optimization of kernel hyperparameters from the channel information alone, in a data-driven framework.
The results presented in Section 3 evidenced that introducing the MOSM kernel to MOGPs becomes a reliable tool for BEA modeling, due to the spectral designing properties. It is well known that EEG channels share complex frequency relationships that can be exploited using the design of a covariance function in that particular domain. Regarding this property, the posterior distribution over the data measured by the MAE in Figure 3 shows an adequate adjustment of the model on the original data. It also allows us to conclude that the inclusion of more spectral components into the covariance function benefits the model adjusting to the data.
Moreover, the identification of the spectral relationships between channels performed by the MOSM kernel allows a better understanding of the latent functional connectivity between brain regions. As Figure 6 evidenced, the MOSM-GP identifies frequency bands representative of the cognitive process. The positive linear relationships are grouped among channels from the same hemisphere, with strong specific dependencies between channels such as P3-P7 and FC6-AF4. The negative relationships are explained by lower PSD amplitudes at Cz, C2, and CP6 for the left-hand MI task, and at C6, CP1, and CP6 for the right-hand MI task. All these interactions quantified by the MOSM kernel in terms of higher or lower values of the PSD can be directly related to the activation of neural cells in different regions of the brain associated with emotions (amygdala, hippocampus, and frontal cortex) or with motor activities (primary motor cortex and posterior parietal cortex). Nevertheless, further studies, from the evidence of these relationships to an accurate source reconstruction, must be completed before determining the specific brain region of the acquired neural activity.
Finally, the discriminative results regarding the emotional and motor imagery conditions conclude that probabilistic models can be efficiently employed as a classification tool for EEG data. In this case, the probability distribution of tested data against the trained models directly becomes a classification algorithm by following a direct comparison of the mean likelihood value between the models from two classes. Despite lacking a feature extraction stage, the proposed DMOSM-GP produces discriminative information from the data. Moreover, the total accuracy of the subject-dependent and condition-dependent tests is comparable with state-of-art works, as Tables 5 and 6 illustrate.
One of the drawbacks of this framework is the computational complexity of training a considerable number of MOSM-GP models. In addition, increasing the number of MOSM kernel mixtures and the size of BEA data will derive into an exponential growth of the training time. Further improvements of this methodology will be directed towards using lighter versions of probabilistic models, such as the sparse GPs aiming at solving more difficult supervised learning tasks from EEG data as multilabel classification or regression. | 2020-10-19T18:09:37.726Z | 2020-09-27T00:00:00.000 | {
"year": 2020,
"sha1": "f419bd3aed74ba233f5a33979fe19d789f094212",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/19/6765/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e3fc72575dfddf3ab40e60e2dbe7129bc05e69c7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
220548571 | pes2o/s2orc | v3-fos-license | Acquired Visual Deficits Independent of Lesion Site in Acute Stroke
Most clinical diagnoses of stroke are based on the persistence of symptoms relating to consciousness, language, visual-field loss, extraocular movement, neglect (visual), motor strength, and sensory loss following acute cerebral infarction. Yet despite the fact that most motor actions and cognition are driven by vision, functional vision per se is seldom tested rigorously during hospitalization. Hence we set out to determine the effects of acute stroke on functional vision, using an iPad application (Melbourne Rapid Field-Neural) that can be used to assess vision (visual acuity and visual field sensitivity) at the bedside or in the emergency ward in about 6 min per eye. Our convenience sample comprised 60 (29–88 years, 65 ± 14 years, 33 males) of 160 sequentially presenting first episode, acute (<7 days) ischemic stroke patients at Sunshine Hospital, Melbourne. One hundred patients were excluded due to existing eye disease, inadequate radiological confirmation, inability to comply with English directions or too ill to participate. Stroke cases were compared with 37 (29–85 years, 64 ± 12 years, 14 males) similar-aged controls using a Mann-Whitney U-test. A significant loss in visual field sensitivity was measured in 68% of stroke cases (41/60, Mean Deviation: Stroke: −5.39 ± 6.26 dB, Control: 0.30 ± 0.60 dB, MWU = 246, p < 0.0001). Surprisingly, 44% (18/41) of these patients were unaware of their field loss. Although high contrast visual acuity was unaffected in most (55/60) patients, visual acuity-in-noise was reduced in 62% (37/60, Stroke: mean 6/12−2, log MAR 0.34 ± 0.21 vs. Control: mean 6/7.5−2, log MAR 0.14 ± 0.10; MWU = 470, p < 0.0001). Visual field defects were associated with all occipital, parietal and posterior cerebellar artery strokes while 9/15 middle cerebral artery lesions and 11 lesions in other brain regions were also associated with visual field defects. Our findings demonstrate that ~2/3 of acute first episode ischemic stroke patients experience acquired vision deficits, often unrelated to the confirmed lesion site. Our results also imply that visual dysfunction may be associated with a more generalized cerebral dysfunction while highlighting the need for bedside testing of vision for every stroke patient and demonstrating the translational clinical value of the "Melbourne Rapid Field-Neural" iPad application. Clinical Trial: http://www.ANZCTR.org.au/ACTRN12618001111268.aspx.
Keywords: visual function, acute stroke, visual field, visual acuity-in-noise, ischemic, vision, Melbourne Rapid Field-Neural (MRFn)

INTRODUCTION

Stroke is categorized by the World Health Organization as rapidly developing clinical signs of focal cerebral dysfunction due to vascular compromise, lasting more than 24 h, or leading to death (1,2). Stroke is the leading cause of adult disability and the second leading cause of death worldwide (3). The American Stroke Association guidelines for the early management of acute ischemic stroke emphasize testing of the level of consciousness, motor strength, items relating to confrontation visual field measurements, horizontal eye movements, and visual inattention (4). However, visual function per se is seldom examined rigorously in the emergency room or during initial hospitalization for stroke (5), despite the central role vision plays in driving most human brain functions, such as eye movements (6), attention (7), cognition (8), emotional responses (9), and motor actions (6), and despite vision occupying larger volumes of cortical and subcortical regions in the human brain than motor functions do (10,11).
Previous studies have reported that ∼92% of 915 stroke patients (5), who were referred to hospital eye clinics in the UK within a median of 22 days and up to 3 months post-stroke, had some form of visual deficit (12), with post-chiasmal lesions in the lateral geniculate body (1%) (13), optic tract (6%), optic radiations (33%), and occipital lobes (54%). The commonest persistent visual deficits included visual field loss (hemianopia, quadrantanopia) (5), perceptual deficits (visual inattention/neglect) (14), and eye movement disorders (5). Unfortunately, the recruitment criteria of the study of Rowe and colleagues (5) did not mention the number of unselected patients screened, nor the number with pre-existing eye diseases that may have confounded the effects of acute stroke on vision.
Ptosis has also been identified as a common indicator of transient ischemic attacks and midbrain infarctions (15) while impaired saccades, smooth pursuits (16), and nystagmus are reported to be more prevalent following frontal lobe, cerebellar and brainstem infarctions (17). Other stroke related visual anomalies have also been reported to be under diagnosed as ocular misalignment and gaze deficits can be subtle and patients are often unaware or asymptomatic for these changes (18,19), with two-thirds of patients showing unilateral visual neglect following acute right hemisphere parietal stroke (20). Furthermore, the application of a battery of three bedside oculomotor tests (HINTS) measuring head impulse, nystagmus, and test of skew have proven accurate and reliable for the identification of acute stroke following acute vertigo presentations (21).
Indeed, an acute stroke test battery (4) measuring distance visual acuity in each eye (22,23), visual neglect (20), and ocular misalignment has been proposed recently (24). The battery includes tests for diplopia, pupil dysfunction, nystagmus, and eye movement deficiency, as well as for more subtle tropia, phoria, and extraocular motor function in the cardinal positions of gaze, given that cranial nerves III, IV, and VI are supplied by a myriad of arteriole blood vessels on the same side as the eye, such that they are susceptible to ocular motor dysfunction in ischemic conditions (24). However, the battery is not yet established as a regular neurological routine, and most current bedside visual field assessments are performed using hand/finger confrontation (25), even though this method has been reported as having limited value for the detection of visual field loss (26,27).
Confrontation continues to be used for bedside screening of stroke patients due to the difficulty of applying commercial visual field devices that require a degree of patient mobility and head/face coordination for testing (28). As a consequence, the nature of acquired visual field deficits in the acute phase of stroke (<72 h) has not been evaluated rigorously to date, although modern technology, in particular tablet devices, affords ideal interfaces and test platforms for testing the vision of hospitalized patients at their bedside (26,29). A newly developed iPad tablet application for measuring visual field integrity, known as Melbourne Rapid Field-Neural (MRFn), has recently been validated against the gold-standard Humphrey Visual Field Analyzer (30), making it a useful tool for measuring the integrity of functional vision across the visual fields of both eyes in hospitalized patients. The MRFn app also provides the ability to test high contrast visual acuity with a Landolt C, as well as visual acuity in noise (i.e., with the visual stimulus embedded in a background of white noise), aimed at measuring threshold perception following degradation of the contrast of the target (30,31). Measuring visual acuity performance in background noise therefore provides useful insights into the neural mechanisms and computations needed to solve visual recognition (32)(33)(34), as demonstrated in the psychophysical testing of neurotypical subjects and psychiatric patients with major depressive disorder (35).
Thus, the aim of this study was to utilize the MRFn (Melbourne Rapid Field-Neural) iPad application to measure visual acuity with high contrast targets, visual acuity-in-noise, and visual field integrity in first episode hospitalized acute ischemic stroke patients with no prior history of ocular disorder. We hypothesized an acute post-stroke decrement in vision.
MATERIALS AND METHODS
This study was approved by the local ethics review board (Western Health Ethics Committee HREC/16/WH/1) and was conducted in accordance with the tenets of the Declaration of Helsinki, with all participants (or their carers) providing informed consent.
Participants
Our convenience sample of cases comprised 160 sequentially presenting stroke patients (29-95 years, 68 ± 14.5 years, 89 males) admitted to Sunshine Hospital, Melbourne, between June 2017 and July 2018. Patients were invited to volunteer for a subjective assessment of vision (visual acuity [high contrast and in noise] and visual fields), and those who agreed and met our inclusion criteria (i.e., first episode ischemic stroke with radiological confirmation and availability of current habitual reading glasses) (Figure 1) were tested while wearing their habitual reading spectacles at their bedside using the Melbourne Rapid Field-Neural (MRFn) application. Refractions were not performed at the hospital; rather, the patients' verbal history was used to determine the adequacy of their current reading glasses. All testing was performed during the first week (usually day 2 or day 3) of the hospital stay. Sixty first episode acute ischemic stroke patients (29-88 years, 65 ± 14 years, 33 males) met our inclusion criteria and had their data analyzed for this study. One hundred patients (63%) were excluded from analysis for the exclusion criteria shown in Figure 1.
Thirty-seven age-similar healthy controls (29-85 years, 64 ± 12 years,14 males) were recruited following a comprehensive routine eye examination at an optometry practice of one of the authors (CW) after providing informed written consent for participation. These participants showed no evidence of current or past ocular and neurological disorders and were wearing their habitual reading glasses.
Stroke diagnosis and localization of the vascular source of the lesion was determined at the time of admission by a neurologist with routine Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). The greater spatial resolution of MRI was utilized to identify small volume ischemic changes often associated with minor strokes (37). This information was used to confirm the diagnosis and facilitate a structure-function analysis with the visual capacity (38).
Melbourne Rapid Field-Neural iPad (MRFn) Application
The Melbourne Rapid Field application (GLANCE Optical Pty Ltd, Melbourne, Australia) measures visual acuity and visual field thresholds across the central visual field using an iPad tablet (12.9-inch iPad Pro) (39). Stroke cases sat on the hospital bed or on a bedside chair during testing, whereas controls performed the test at a bench in a clinical optometry practice, at a working distance of 33-38 cm. The visual field test pattern used by MRFn is a reduced 24-2 Humphrey Field Analyser (HFA) test grid with 4 extra spots added at the fovea (Figure 2) (30). Spot size scaling results in a fixed threshold of 30 dB (Figure 2) at all locations (30). Previous studies find that the MRFn returns outcomes that are strongly correlated with HFA thresholds on both a global and a regional basis (40,41).
In visual field testing, patients were required to respond to the presence of a spot by tapping either the screen or the spacebar of the iPad keyboard. All chose to tap the keyboard space bar, indicating adequate manual control. One patient with a frontal lobe lesion had difficulty tapping the space bar and preferred to tap the screen directly to complete testing. Two other subjects had adopted their non-dominant hand for motor tasks after the stroke and used it for the visual assessment; all other participants used their dominant hand. Reliability (false positives, false negatives, and fixation losses) was routinely polled during testing. The visual acuity test presents a high contrast "Landolt C" target (Figure 2) on a bright background (130 cd/sq.m), as well as the same "Landolt C" target embedded in luminance noise, generated using a psychometric model accounting for true acuity and noise in the visual system, with the contrast sensitivity of the background spatial vision reduced by 10% of the high contrast "Landolt C" optotype (42). Visual acuity-in-noise has not previously been tested in acute stroke, but given past reports of abnormality on noise-related tasks in acquired neurological disorders and in stroke cases well after onset (43,44), we tested our acute stroke cohort expecting that some patients may have difficulty recognizing visual targets immersed in noise.
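As a rough illustration of an acuity-in-noise stimulus (not the MRFn app's actual generation algorithm, which is not published here), the sketch below embeds a normalized grayscale optotype image in white luminance noise.

```python
import numpy as np

def embed_in_noise(optotype, noise_sd=0.1, seed=None):
    """Add zero-mean white luminance noise to a grayscale optotype
    image with values in [0, 1]; a crude stand-in for an
    acuity-in-noise chart, not the MRFn implementation."""
    rng = np.random.default_rng(seed)
    noisy = optotype + rng.normal(0.0, noise_sd, optotype.shape)
    return np.clip(noisy, 0.0, 1.0)

# Hypothetical 64 x 64 mid-gray field standing in for a Landolt C:
stimulus = embed_in_noise(np.full((64, 64), 0.5), noise_sd=0.1)
```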
Testing Procedures
Visual acuity and the visual fields of both eyes of all study participants were measured monocularly in ambient hospital room lighting. The lighting has been found to have little impact on test outcomes (45) provided reflections off the screen are avoided. Screen brightness was set to maximum for 10 min prior to testing, to stabilize luminous output (46). Verbal instruction on test performance was given at the bedside and patients were allowed a practice trial before starting the test.
As most participants were naive to tablet perimetry, the preferred eye was tested first with operator feedback for training and learning of how to do the test. This eye was retested after the training phase before testing the fellow eye.
Data Analysis
Comparisons between stroke and control groups were made for visual acuity, visual acuity-in-noise, and the mean deviation (MD) of the visual field. The mean deviation is determined from a pointwise comparison of contrast thresholds (dB) to age-related normals provided by the MRFn App. The time taken to complete vision assessments was also recorded.
Although both eyes were tested, the eye ipsilateral to the CT/MRI defined lesion was analyzed in the stroke group and compared to the RE (Right Eye) of controls (comparison to the fellow eye does not change our findings).
Non-parametric statistics (Mann-Whitney U-tests) were employed given the heterogeneity and variability of data in the stroke group (Figure 3). All group data are shown as box-and-whisker plots, with whiskers identifying the total range of the data set. The 99th percentile of controls was used as the criterion to identify "abnormal" outcomes. Levene's test was used to compare group variances. Statistical analysis was conducted using GraphPad Prism v7.00 for Windows (www.graphpad.com).
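The group comparison and the abnormality criterion can be sketched with SciPy as follows; the arrays contain hypothetical mean deviation (MD) values, and since more negative MD means worse sensitivity, the "99th percentile of controls" criterion is applied at the extreme lower tail of the control distribution.

```python
import numpy as np
from scipy import stats

# Hypothetical visual-field mean deviation (MD) values in dB:
stroke_md = np.array([-4.2, -11.5, -0.8, -7.9, -2.3, -6.1])
control_md = np.array([0.4, -0.1, 0.9, 0.2, 0.5, -0.3])

# Mann-Whitney U comparison of the two groups:
u, p = stats.mannwhitneyu(stroke_md, control_md,
                          alternative="two-sided")

# Flag cases beyond the extreme 1% tail of controls (worse MD):
cutoff = np.percentile(control_md, 1)
abnormal = stroke_md < cutoff
print(u, p, cutoff, abnormal.sum(), "abnormal cases")
```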
RESULTS
Of the 160 stroke presentations (Figure 1), MRFn testing could be performed and was successfully completed in 108 (68%) patients. Of these, 48 cases did not meet our inclusion criteria (first episode ischemic stroke with radiological confirmation, Figure 1), leaving 60 cases of acute ischemic stroke for analysis. First episode acute stroke patients were able to perform the tests accurately at their bedside in 5.4 ± 0.8 min per eye. Control participants completed all tests in 4.0 ± 0.3 min per eye.
In the right hemisphere, 17/26 patients, and in the left hemisphere 18/31 patients, presented visual field losses in the form of a hemianopia, quadrantanopia, or altitudinal loss. Right hemispheric vascular lesions showed visual field losses twice as great as those from left hemisphere lesions (Table 1). Despite the presence of substantial hemianopic and quadrantanopic visual field losses, eighteen of the 41 (44%) patients with visual field loss were unaware of any limitation to their vision. (See Tables 1, 2 for more detailed information on visual function for individual lesion regions.) The CT and MRI scans showed that the commonest site of lesion among the 60 patients was a middle cerebral artery lesion (n = 15, 25%), followed by cerebellar artery disorders (n = 10, 17%), occipital lobe infarcts (n = 9, 15%), posterior cerebral artery lesions (n = 6, 10%), and parietal lesions (n = 3, 5%). Multi-territorial infarcts were noted in three patients (5%), and the loci of the other 14 cases are detailed in Tables 1, 2.
DISCUSSION
To the best of our knowledge, few studies have quantified the incidence and nature of acquired visual deficits in acute ischemic stroke patients (<7 days) with no previous history of visual abnormality (26). The key features of our findings are that most patients (except for 5 cases) with radiologically confirmed first episode ischemic stroke retain near-normal high contrast visual acuity, although given that we did not rigorously refract participants but had them wear their habitual reading glasses, we cannot dismiss the possibility that these 5 patients showed high contrast visual acuity loss due to uncorrected refractive error. On the other hand, 62% of our patient sample showed deficits in visual acuity-in-noise and 68% showed visual field loss. These changes could not have arisen from an uncorrected refractive state. Moreover, although the majority of stroke patients presented with varying clinical symptoms including sudden onset unilateral numbness, loss of motor sensation, and hemiparesis, 44% (18 out of 41) of our patients were unaware of their visual field defect or of their altered visual capacity (i.e., acuity-in-noise).
Our results also demonstrate the clinical potential of using tablet based applications to obtain a quantified measure of visual capacity (visual field, visual acuity and acuity-in-noise) in a relatively short duration (<6 min per eye) in an acute stage of a cerebrovascular injury by testing at the bedside of the patient.
In terms of a structure-function analysis, many ischemic lesions throughout the brain can induce acute visual defects (47). As expected, all occipital lesions (n = 9/60) and posterior cerebral artery strokes (n = 15/60) induced visual field deficits. All 3 parietal cortex lesions (right hemisphere: 2, left hemisphere: 1) also produced visual deficits. Unexpectedly, ∼33% (20/60) of cases with lesions in other regions (Tables 1, 2) of the brain were also associated with visual field deficits and showed an acuity-in-noise impairment. Among them, nine of the 15 middle cerebral artery strokes and four of the 10 cerebellar artery strokes produced visual field defects (Table 1). Two strokes in the left basal ganglia, two of 4 frontal lobe strokes, and 3 multi-territorial infarcts also caused visual loss (Table 1). The three multi-territorial infarcts involved more than one site of lesion on brain imaging. Interestingly, all parietal strokes, and the two multi-territorial infarcts that also had parietal lobe involvement, produced visual field defects.
Although hemineglect is commonly associated with parietal cortex lesions (48), we did not assay for this possibility in the current cohort of patients and cannot comment on its presence.
Our findings are similar to those of Rowe et al., who undertook vision assessment 22 days (median) after stroke (range 0-2,543 days) in patients identified during hospitalization as needing ophthalmic referral. Of these patients, 63% had previously shown visual field loss on confrontation testing, whereas only 37% of cases showed visual field deficits when tested with automated static or manual kinetic (Goldmann) methods (49). From these findings, Rowe et al. concluded that 52% of 915 cases had visual field loss (49). We quantified visual field loss in 68% of our cases, who did not have pre-existing eye disease.
Rowe et al. (5) have previously advocated the need for vision testing following stroke. The high prevalence of quantifiable visual defects in acute ischemic stroke cases as noted in our study and that of past works (26), coupled with the lack of awareness for such loss, highlights the need for digital appliances that can quantify these losses. The novel MRFn App is an easy, rapid, and sensitive bedside diagnostic tool for routine use in acute neurological assessments and for tracking recovery or change in the patient.
The immediate impact that acute ischemic stroke per se has on visual acuity has not previously been reported, even though other neurological diseases such as multiple sclerosis (50) and idiopathic intracranial hypertension (51) are known to be associated with visual acuity loss. Interestingly, 27 of the 37 patients (73%) who showed deterioration in visual acuity-in-noise also showed evidence of abnormal visual fields but preservation of high contrast acuity.
Clinically, visual acuity is a measure of the ability of the foveal visual system to discriminate a letter or optotype from background spatial information. Visual acuity-in-noise measures the ability to discriminate and identify the targets in the presence of added background white noise (52). The addition of luminance noise imposes a stronger masking effect on the optotypes, and thus demands more complex processing of the visual information (31). This is likely the cause of the one-line reduction in visual acuity in our controls (mean: 6/7.5-2) in the presence of the noise elements (53,54). In our stroke group, however, we found a two-line deterioration in visual acuity-in-noise, with a mean of 6/12-2. This involved all parietal strokes, occipital strokes, and the multi-territorial strokes with parietal lobe involvement, whereas we did not find such a marked visual acuity-in-noise impairment in controls.
The possibility that the visual acuity-in-noise optotypes and visual field loss are measuring similar neuroanatomical processes can be rejected, given that patients who showed deterioration in visual acuity-in-noise and visual field sensitivity had regional diversity of lesions (Tables 1, 2), corrupting any commonality in their structure-function relationship (47,55). Recognition of an acuity target involves the distinction of a static optotype from its background (52). The addition of luminance noise elements raises the threshold of retinal sensitivity as well as the subsequent neural processing needed for stimulus identification (56). This visual processing originates in the primary visual cortex and involves the dorsal stream, via the parietal cortex, for visually guided spatial location and orientation of objects (57). Similarly, ventral processing, which also arises from the primary visual cortex, involves the temporal lobe and functions in object recognition and the discrimination of object details (58). Thus, it is not surprising that visual acuity-in-noise is affected by stroke, as it likely requires processing and possibly integration across extensive cortical regions. The recent work of Cavanaugh et al. (43) in patients who have cortical blindness noted elevated intrinsic noise that affected performance in these patients well after the acute stroke event (up to 276 months).
It is possible to deduce that the use of visual acuity-in-noise along with high contrast visual acuity at the bedside has the potential to aid in the diagnosis of ischemic stroke and to differentiate these effects from ocular disease. High contrast visual acuity will typically be affected by eye disease, and given that visual acuity-in-noise reflects the sequential processing of this information by cortical inputs through both the dorsal and ventral pathways, it should also be affected by the reduced ocular input. In our study, our controls returned 0.1 logMAR (6/7.5) for both forms of acuity, whereas stroke cases had an average high contrast acuity of 0.1 logMAR (6/7.5) and an acuity-in-noise of 0.3 logMAR (6/12), implying discrete non-ocular causes for this loss. Patients who had radiologic lesions in their occipital lobes also manifested intact high contrast visual acuity (Table 2).
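As a quick sanity check of the acuity notation used above, the standard relation between the two scales is logMAR = log10(MAR), where a Snellen acuity of 6/x corresponds to a minimum angle of resolution MAR = x/6. The short sketch below is a minimal illustration of that conversion, not part of the study's analysis pipeline.

```python
# Convert logMAR values to approximate Snellen denominators (6/x notation).
def snellen_denominator(logmar, numerator=6.0):
    return numerator * 10 ** logmar  # MAR = 10**logMAR, so x = 6 * MAR

for logmar in (0.1, 0.3):
    print(f"logMAR {logmar:.1f} ~ 6/{snellen_denominator(logmar):.1f}")
# logMAR 0.1 ~ 6/7.6 (quoted as 6/7.5) and logMAR 0.3 ~ 6/12.0
```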
As 44% of the patients were unaware of their visual field loss, it is also unlikely that many would report subjective symptoms of reduced visual acuity-in-noise, as it is a subtle deficit detected only through testing of target-specific features. Although the presence of significant ischemia/brain edema may require longer times (6) for the identification of surrounding objects, we did not place any time constraints on subject response and do not believe that longer observation times would have affected outcomes.
Limitations of our study include that the source of cortical dysfunction was not identified through functional MRI (59), diffusion tensor imaging (60), EEG, or psychophysics, including processes mediated by other cortical regions such as hemispatial neglect (61) or visuomotor processing (62). However, as both of the latter have reportedly been affected by stroke, albeit in a minority of patients, the prospect of loss in cases of generalized cortical involvement is possible. Furthermore, we were unable to identify an association between visual field deficits or visual acuity-in-noise and the hemisphere of the lesion. Future studies using functional connectivity MRI (63) may be able to establish this.
Future studies, using larger sample sizes, will be required to better establish the mechanisms of functional connectivity associated with cortical defects following acute stroke and during the post-stroke recovery phase, especially in visuomotor processing and in the attention mechanisms between the right and left brain hemispheres underlying hemispatial neglect (20). Such studies should also seek indicators of generalized edema and determine whether visual acuity-in-noise deficits and some aspects of visual field defects, in the absence of a structure-function relationship, recover over time.
Longitudinal studies with the MRFn app and MRI imaging will elucidate these changes in adaptation, visual attention, and neuroplasticity as well as provide information regarding any therapeutic response in post-stroke patients.
CONCLUSION
Our findings indicate that acute stroke induces significant vision loss in two-thirds of hospitalized patients, quantifiable as early as 48 h after stroke, and often unrelated to the confirmed lesion site. Visual acuity-in-noise and visual field deficits have emerged as rapid and sensitive biomarkers of acute ischemic brain dysfunction. Our results imply that visual dysfunction may be associated with a more generalized cerebral dysfunction, while highlighting the need for bedside testing of vision for every stroke patient and demonstrating the translational clinical value of the "Melbourne Rapid Field-Neural" iPad application as a low cost, rapid, rigorous and easy to administer functional vision test for use in acute stroke patients.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the local review board (Western Health Ethics Committee HREC/16/WH/1) and were conducted in accordance with the tenets of the Declaration of Helsinki, with all participants (or their carers) providing informed consent. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
CW was involved in planning, design of the experiments, was responsible for recruitment of patients and all aspects of data collection, contributed to analysis of the data, prepared figures and tables, authored and reviewed the paper, and approved the final draft as part of her doctoral research. TW as Head of Hospital Department of Neurology managed ethical concerns, facilitated patient access and recruitment, was involved in design of experiments, led acquisition and interpretation of all radiological data, contributed to drafting of manuscript, and final approval. AV contributed to design of experiments, led data analysis, preparation of figures and interpretation of visual field results, co-authored and reviewed drafts of the manuscript, and approved the final version. SC conceptualized, designed, funded the study via internal grants, contributed to analysis, theoretical interpretation of the data, drafting of manuscript, and final approval. CW, AV, and SC had full access to all the data in the study. All authors contributed to the article and approved the submitted version. | 2020-07-17T13:15:24.170Z | 2020-07-17T00:00:00.000 | {
"year": 2020,
"sha1": "89aeaaed28a548c827af84e07a8009f441a4d395",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2020.00705/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "89aeaaed28a548c827af84e07a8009f441a4d395",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
113209738 | pes2o/s2orc | v3-fos-license | Research and Development of Mixed and Standard Type Biomass Gas Turbine System with Enhanced Fuel Applicability
Niigata University and TERI collaborated in developing a biomass gas turbine system with enhanced fuel applicability for rural villages in south Asia. This paper describes a mixed and standard type gas turbine system for this purpose. The mixed type gas turbine employs a regenerated Brayton cycle with a secondary combustor to burn various fuels. The thermodynamic analysis revealed that the mixed type gas turbine system is capable of achieving a thermal efficiency between that of a regenerated Brayton cycle and that of a completely externally fired system. An experimental test was done using mixtures of biodiesel fuel and kerosene. It demonstrated that the engine performance changed only slightly with the mixture ratio. In the experiment, a prototype biomass gas turbine was able to achieve 600 W of power generation. The mixed fuel with a bio-fuel mixture ratio of up to 60 % was confirmed to be applicable to the current gas turbine system.
Introduction
In south Asia, almost four billion people live in remote villages not connected to any power system. The utilization of decentralized systems based on renewable energy is expected to prevail in such villages. Focusing on the development of decentralized systems is in line with the recent world-wide effort on preventing global warming by suppressing the rising consumption of fossil fuel due to modernization. Recently, researchers have been developing biomass liquefiers, biomass gas producers and reciprocating-engine biomass power systems for rural villages. However, such reciprocating engines can only be used with purified bio-fuels. There is a growing demand for a new system that can burn a wide variety of fuels including crude bio-fuels.
The present authors collaborated to develop a biomass gas turbine system with enhanced applicability to various fuels. Gas turbine systems are innately omnivorous and can consume both liquid and gaseous fuels. The externally fired (EF) gas turbine systems have an enhanced applicability to crude fuels 1) 2). The present authors are developing the mixed type gas turbine system, which corresponds to a regenerated gas turbine with a secondary combustor for enhancing the fuel applicability.
This paper describes the analysis and the experiments on elementary tasks for developing a decentralized biomass gas turbine power system for rural villages. First, the analysis is made for the mixed type gas turbine system. Secondly, the experiment is made for the standard gas turbine using biodiesel fuel (BDF).
Field Research in India
The field research has been conducted in the Haryana and Tamil Nadu states of India since 2009. Based on the field work and the existing literature, the Indian decentralized electrified area can be conceptualized as in Fig. 3. Since it is remote from major cities, the energy system needs to be sustained by recovering natural energy from local facilities. The circulation of energy and material is expected to progress into the network for promoting environmental preservation and industrial advancement, as shown in Fig. 4.
Analysis of Mixed-type Gas Turbine
Fig. 5 shows the mixed type gas turbine cycle. This is a mixed system of the standard and the externally fired (EF) gas turbine. This cycle corresponds to a regenerated Brayton cycle equipped with a secondary combustor.
Here, the primary combustor is installed before the turbine, as in the standard Brayton cycle, while a secondary combustor is incorporated after the turbine to burn various fuels. Because the exhaust gas from the secondary combustor does not pass through the turbine, this combustor can accept crude fuels. The heat added per unit mass of working gas, q, and the corresponding net work, wnet, are written from the cycle state points, and the thermal efficiency of the cycle is their ratio, η = wnet/q. The heat and the net work can be rewritten in terms of non-dimensional parameters: the pressure ratio π = p2/p1, the temperature ratio τ, and the EF temperature ratio τEF = T6/T1. The EF temperature, T6, must be higher than the turbine exit temperature T4' and lower than an upper limit, i.e. T4' ≤ T6 ≤ T6max (equation (12)). The upper limit of the EF temperature ratio, τEF max, is decided so that the heat exchanger exit temperature, T7, reaches the turbine inlet temperature, T3. When τEF = τEF max, the cycle corresponds to the externally fired system without the assist combustor.
It is notable that equation (7) is a generalized form of the heat of the regeneration cycle. When the EF temperature, T6, is the same as the turbine exit temperature, T4', the mixed cycle corresponds to the regenerated cycle without the EF combustor; namely, equation (7) yields the heat of the regeneration cycle when τEF is replaced with τ. Fig. 8 shows the net work of the mixed gas turbine system for the conditions listed in Table 1. The net work of the mixed system is identical to that of the standard system without the regenerator or the EF combustor, and to that of the regeneration system without the EF combustor. The net work depends on the temperature ratio τ but is independent of the EF temperature ratio, τEF. In the figure, the net work is maximized around a pressure ratio of 4.0.
The thermal efficiency of the mixed system for the conditions of Table 1 is shown in Fig. 9. The figure also presents the standard gas turbine system without the regenerator or the EF burner, the regeneration system (τEF = T4'/T1) and the EF system (τEF = τEF max) for the conditions shown in Table 1. The thermal efficiency of the mixed, the regeneration and the EF systems is larger than that of the standard cycle when the pressure ratio is less than 5.6. This means that the exhaust heat can be recovered only when the turbine exit temperature at state 4 is higher than the compressor exit temperature at state 2, which occurs for relatively low pressure ratios, as suggested by the T-s diagram of Fig. 7 5).
In Fig. 9, the thermal efficiency of the mixed cycle and the EF cycle is lower than that of the regeneration cycle over the whole range of pressure ratio. However, the mixed and the EF cycles enhance the thermal efficiency to 0.26 or 0.27, from 0.2 for the standard cycle, at the optimized pressure ratio. The mixed and the EF cycles are obviously advantageous since they can burn a wider range of fuels than the standard or the regeneration cycles.
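The qualitative crossover described above can be reproduced with the textbook ideal-cycle formulas for the simple and fully regenerated Brayton cycles; the paper's mixed-cycle expressions are not reproduced here, and the specific heat ratio and temperature ratio in the sketch are assumed values, not the Table 1 conditions.

```python
# Ideal Brayton cycle efficiencies vs. pressure ratio (textbook relations):
#   simple cycle:        eta = 1 - pi**(-(gamma-1)/gamma)
#   ideal regeneration:  eta = 1 - pi**((gamma-1)/gamma) / tau,  tau = T3/T1
gamma, tau = 1.4, 2.7          # assumed values for illustration only
exp = (gamma - 1.0) / gamma

def eta_simple(pi):
    return 1.0 - pi ** (-exp)

def eta_regenerated(pi):
    return 1.0 - (pi ** exp) / tau

for pi in (2.0, 4.0, 5.6, 8.0):
    print(f"pi={pi}: simple={eta_simple(pi):.3f}, regen={eta_regenerated(pi):.3f}")
# With tau = 2.7 the two curves cross near pi ~ 5.7: regeneration (and hence
# exhaust-heat recovery) pays off only at low pressure ratios, matching the
# crossover near pi = 5.6 described in the text.
```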
Experimental Test of Standard Gas Turbine Power Generation
An effort has also been made by the present authors toward experimental consideration of the bio-oil fueled gas turbine. The electricity generation was made by the power-turbine-equipped standard gas turbine shown in Fig. 12.
The experimental tests were conducted using kerosene and its mixtures with the BDF.
Experimental apparatus
This study uses the German-made compact gas turbine engine, P60, for the test of the bio-oil-mixed fuel. The specification of the engine is the same as that of the engine for power generation. The power turbine of this engine was replaced by a jet nozzle as the load. The total length is 225 mm.
The diameter of the jet nozzle is 54.6 mm at the inlet and 40 mm at the exit. The schematic diagram of the gas turbine with nozzle is shown in Fig. 13, and the photograph in Fig. 14. The measurement system is shown in Fig. 15.
The gas turbine is fixed on a linear guide, and a load cell transducer is used to measure the thrust of the jet stream. The viscosity of the BDF-mixed fuel is reduced to about twice that of kerosene at a mixture ratio of 60 %; the esterified BDF is therefore less difficult to handle for burning.
Bio-oil-mixed fuel
Also measured in the present study is the density of the bio-oil-mixed kerosene. The result is shown in Fig. 18.
The BDF has a slightly lower density than Jatropha oil. The density is shown to change in proportion to the mixture ratio. The lower heating values (LHV) are calculated using the measured density and are shown in Fig. 19.
The gas turbine experiment is made using kerosene or its mixture with the BDF. In the case of kerosene without the BDF, the oil is blended with 4 % turbine oil for lubrication. This paper deals with BDF-mixed kerosene, with the mixture ratio varied from 10 % to 60 %. The lower heating value of the BDF is 80-90 % of that of kerosene due to the chemically bound oxygen, as suggested in Fig. 19.
Thrust and exhaust gas temperature
The gas turbine was able to operate using the mixed fuel for mixture ratios up to 60 %. White smoke was observed for the cases of higher mixture ratio; however, the operation and measurement were successful over the tested range of the mixture ratio.
The fuel consumption of the gas turbine is shown in Fig. 20. The horizontal axis is the engine revolution per minute. The gas turbine consumes fuel at 0.5 g/s to 3.0 g/s. The gas temperature at the nozzle exit is shown in Fig. 22. The temperature first decreases slightly and then increases with increasing revolution. These changes imply a change of the air-fuel ratio, discussed later. In the mixed-oil cases, a temperature rise is observed; since this is not consistent with the gas analysis in the later discussion, the temperature rise is thought to come from the after-burning of the unburnt bio-oil.
Gas analysis
The exhaust gases from the jet nozzle are analyzed by the testo gas analyzer. The concentration of oxygen is shown in Fig. 23. The oxygen concentration generally decreases with increasing engine revolution. This is consistent with the decrease of the air-fuel ratio hinted at by the temperature rise in Fig. 22. The air intake per unit fuel is thus suggested to decrease with increasing engine revolution.
As to the bio-oil effects, the oxygen concentration generally increases over the whole tested range of revolution. This increase cannot be explained by a decrease of the air-fuel ratio, because the temperature data contradict such a trend. Rather, the increase of oxygen concentration for the mixed-oil cases can arise from the chemically bound oxygen in the bio-fuel molecules.
Analysis of engine performance
The engine performance is examined by analyzing the experimental data. The thrust and the power of the exhaust gas can be calculated from the jet momentum and kinetic energy (thrust F = ṁ u, power P = ṁ u²/2, where ṁ is the gas mass flow rate and u the jet velocity). The mass flow rate of gas is calculated from the fuel consumption and the oxygen concentration in the exhaust gas. Since the degree of incomplete combustion is low, as suggested by the carbon monoxide level, the calculation assumes complete combustion, with the fuel approximated as tetradecane. An enhancement of the thermal efficiency is observed for the BDF-mixed fuel at engine revolutions higher than 120,000 rpm. This enhancement of the efficiency means that the input energy is maintained although the fuel consumption is increased for the BDF-mixed fuel. This essentially comes from the specialty of bio fuels, which include chemically bound oxygen within their molecules.
The stoichiometric air-fuel ratio is 14.9 for tetradecane and 12.9 for oleic acid. The complete combustion of bio-oil needs less air than that of fossil fuel. This can lead to specific changes at higher engine revolutions. However, the advantages of the bio-oil cannot be judged from the current experiment alone. Further investigation of bio-oil under wider conditions is needed.
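A minimal sketch of these performance relations, assuming the standard jet expressions F = ṁu and P = ṁu²/2; the air-fuel ratio and jet velocity in the example call are illustrative values, not figures from the paper (the authors infer the gas mass flow from the fuel consumption and the exhaust oxygen concentration).

```python
# Thrust and exhaust-gas power from fuel flow, air-fuel ratio and jet velocity.
def jet_performance(fuel_flow_g_s, air_fuel_ratio, jet_velocity_m_s):
    m_dot = fuel_flow_g_s * (1.0 + air_fuel_ratio) / 1000.0  # gas flow, kg/s
    thrust = m_dot * jet_velocity_m_s                        # F = m_dot * u, N
    power = 0.5 * m_dot * jet_velocity_m_s ** 2              # P = m_dot*u^2/2, W
    return thrust, power

# Example: 2 g/s of fuel burned very lean (overall AFR ~ 50) with a 250 m/s jet.
print(jet_performance(2.0, 50.0, 250.0))  # -> (~25.5 N, ~3.2 kW)
```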
Conclusion
The present paper describes the research work for developing a gas turbine system with enhanced fuel applicability. The conclusions are summarized as follows: (1) The field research was conducted in a remote Indian village to understand the energy usage in Asian rural areas. Bio fuels such as wood and farm wastes are continuously used in Indian villages, and there is an increasing number of bio-gas reciprocating-engine generators in rural villages. Therefore, there is ample opportunity to introduce the biomass gas turbine in Indian villages.
(2) The thermodynamic analysis was made for the mixed type gas turbine cycle. Non-dimensional expressions for the net work and the heat were deduced to draw their diagrams against the pressure ratio. The mixed type gas turbine shows a thermal efficiency intermediate between the external fired (EF) and the regenerated cycles.
(3) The mixed cycle is inferior to the regenerated cycle in thermal efficiency. However, the mixed cycle is superior to the EF cycle and to the standard cycle without a regeneration heat exchanger. (4) The experimental test on the gas turbine with the jet nozzle revealed that the gas turbine engine can work using the mixed fuel with a mixture ratio of up to 60 %.
The carbon monoxide level is increased by raising the mixture ratio; however, the engine thrust is maintained. | 2019-04-14T13:05:10.004Z | 2015-12-20T00:00:00.000 | {
"year": 2015,
"sha1": "41ad3960bd8591f0b0bd7287212303ea1fc4515d",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jie/94/12/94_1362/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "81cc81c125c02edb0cce4d52bdb47fd941cf19f0",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
259672647 | pes2o/s2orc | v3-fos-license | A study on prevalence of anemia in school going Yanadi tribal children of Nellore District, Andhra Pradesh, India
Owing to their living conditions, tribal communities are at higher risk of diet-associated health disorders. Among them are indigenous tribal populations like "The Yanadi", residing in the state of Andhra Pradesh in India. Being in the lower economic strata, they are deprived of proper food, and access to basic health facilities is also constrained. Yanadi tribal children exhibit characteristic features of IDA, stunted growth, wasting and lower cognitive skills, which significantly affect their education. The study investigated the prevalence of anemia in 384 male Yanadi tribal schoolchildren aged 6-14 years; multiple approaches were adopted, and survey-based data on social, economic and environmental variables of the study cohort were gathered along with anthropometric information. It was observed that a large majority of the tribal parents lacked primary school education, and the economic condition of such families is in a dilapidated state, leading to consumption of improper food. Strikingly, 56% of the children exhibited the commonest symptom of anemia, pale conjunctiva. In the overall analysis of the participating children, following the WHO grading, close to 28% were found to be anemic, and the hemoglobin content (g/dL) was observed to be similar in both age groups, 11 to 14 years (11.889±1.123) and 7 to 10 years (11.734±1.309). Largely, the Yanadi tribal children showed cognitive impairment in the form of poor memory function (33%), reduced higher-level cognitive functions (46%), and impaired attention functions (74.5%). It is somewhat relieving to see that anemia amongst Yanadi male children is not as severe as that reported in children of other populations. However, the study points to impaired cognitive and behavioral skills amongst the participants, emphasizing the need to extend the study to a larger cohort.
INTRODUCTION
Anemia affects both developing and developed countries; it is a global health concern with a serious impact on human health. Across the globe, the complications of anemia vary greatly. The prevalence of anemia is highest in developing countries and is also common in industrialized countries [1]. Several previous studies have summarized the prevalence of anemia and its global consequences. The WHO estimated that about 30% of the world population was anemic in 1985 and that 37% of women were anemic in 1992 [2]. Anemia affected 25% of the global population, including 42% of pregnant women, 30% of non-pregnant women and 47% of preschool children (as per WHO 2008). More recently, the global anemia prevalence was estimated to be 29% among pregnant women, 38% in non-pregnant women, and 43% in children, with each group decreasing since 1995 [3]. Anemia was projected to account for 2% of all YLD (years of healthy life lost owing to disability) and 1% of disability-adjusted life-years in the Global Burden of Disease (GBD) 2000 study; similar figures were found in the GBD 2004 update (WHO 2015). According to WHO regional estimates for preschool-aged children and pregnant and non-pregnant women, a total of 315 million people in these three populations were anemic. Similarly, the underdeveloped African nations have 48-68% of their population with anemia [4]. In India, the prevalence of anemia is estimated to be higher than in all other developing countries [5].
The symptoms associated with anemia are fatigue, dizziness, tiredness, lethargy, drowsiness, becoming breathless easily, headaches, irregular heartbeats (palpitations), tinnitus (ringing in the ears) and alteration in taste. During pregnancy, anemia affects both the mother and the neonate, with complications such as low birth weight, preterm delivery and post-natal depression. Anemia is categorized into aplastic anemia, iron-deficiency anemia (IDA), pernicious anemia and hemolytic anemia. Inadequate iron intake, respiratory infections, helminthic infestations, malaria, diarrhea, and vitamin A and C deficiencies are among its multifactorial causes. Iron deficiency can also be caused by blood loss, sloughing of cells (menstrual flow) and transfer of iron to the growing fetus [6].
Anemia is more prevalent among tribal communities, preschool children and those living in poverty. However, tribal communities live in remote places that are inaccessible for healthcare workers, so medical conditions are rarely identified by routine health screening. Hence the exact prevalence of any medical condition, including IDA, is not clearly known in this group. Their IQ, cognitive skills, and mental and physical development are also relatively poorer than those of their non-tribal counterparts. Hence, estimating the root cause of IDA prevalence could help design appropriate strategies to address these issues [7].
Approximately two-thirds of preschool children were ingesting less than 50% of the recommended daily iron intake, which is one of the primary causes of anemia in tribal children. The prevalence of anemia in indigenous children is likewise significantly higher than in the general population. According to the NFHS, approximately 77% of indigenous children were anemic. In poverty-stricken tribal populations world-wide and in India, IDA is mainly due to malnourishment, and it is reasonably easy to treat with dietary changes and health supplements.
The primary regulator of iron homeostasis is a liver peptide hormone, hepcidin. Mechanistically, it controls iron metabolism by binding to its receptor, ferroportin (a transmembrane iron-export protein), which is highly expressed on the reticuloendothelial macrophage membrane and on duodenal enterocytes.
Diet Survey and Nutritional Assessment
The daily food consumption details were noted from the hostel. Simultaneously, an additional food and nutrition survey was performed on the families of the study participants. The survey team visited the Yanadi children's residential area and collected data such as the daily food consumption style, the amount and number of meals served per day, and information about the vegetables, fruits or meat consumed and the frequency at which the participants consumed them. The survey team also noted information about the Yanadi's traditional food habits and the amount of money spent on the daily diet.
Diagnosis of Nutrient intake
The tribal children's nutrient intake data were entered into the validated software 'DietCal' version 3.0 (Profound Tech Solution; http://dietcal.in/), which is based on values from the Nutritive Value of Indian Foods [8]. The data were then compared to the 'Recommended Dietary Allowances' for Indian children. The calculation of nutrient intake was done by the method of NAR (Nutrient Adequacy Ratio) [9].
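For each nutrient, the NAR is simply the observed intake divided by the corresponding recommended dietary allowance. The sketch below illustrates one plausible reading of the computation and of the 0.66 cutoff used later in the paper; the RDA figures are placeholders, not the values used by the authors.

```python
# Nutrient Adequacy Ratio (NAR) = observed intake / RDA for that nutrient.
RDA = {"protein_g": 40.0, "calcium_mg": 800.0, "vitamin_c_mg": 40.0}  # placeholders

def adequacy(intake_by_nutrient):
    out = {}
    for nutrient, intake in intake_by_nutrient.items():
        nar = intake / RDA[nutrient]
        if nar >= 1.0:
            out[nutrient] = "adequate"
        elif nar >= 0.66:                 # cutoff quoted in the paper
            out[nutrient] = "fairly adequate"
        else:
            out[nutrient] = "inadequate"
    return out

print(adequacy({"protein_g": 27.2, "calcium_mg": 450.0, "vitamin_c_mg": 20.0}))
# -> {'protein_g': 'fairly adequate', 'calcium_mg': 'inadequate',
#     'vitamin_c_mg': 'inadequate'}
```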
Data Collection
School health check-up campaigns were organized, after consultation with and approval from the school head, and data were collected. The study participants were thoroughly interviewed and clinically examined by a senior pediatrician for demographic details, including age, and a detailed history of dietary habits, any medications and allergies, and any associated signs of weakness, breathlessness, and anxiety was noted. Each participant's age was confirmed from the students' school records. Signs of malnutrition and anemia were also looked for during the clinical examination. Students' performance and progress data were collected from the school teacher or parents. All these were documented in a pre-designed questionnaire sheet. All children were examined for signs of anemia: the palpebral conjunctiva, lips, tongue, skin and nail beds for pallor, as well as comorbid conditions, personal hygiene, physical status, nutrition, and physical activity. Anthropometric data (height and weight) of each child were obtained by an Auxiliary Nurse Midwife (ANM) and social workers.
Cognitive Performance Data Collection
Cognitive development and scholastic performance data were collected by administering tests to each participant. Each of the below-mentioned tests was selected depending on the deficits expected to arise from nutritional impact, and serves as a relatively "pure" measure of the specific cognitive ability of the Yanadi tribal children.
Social Communication
The communication capability of the children was tested based on their communication in different situations and evaluated through different tasks performed in the school. Academic assessments included thought process, reading/writing skills, and numeracy skills.
Attention performance
The ability to inhibit interruption and remain focused with full attention is a crucial factor for learning. Hence, the attention performance of the Yanadi children was assessed with the help of class teachers. This performance test reveals the status of various forms of attention in Yanadi children, including vigilance (sustained focus), the ability to inhibit distraction, and divided attention. To differentiate distractors from non-distractors, the sensory modalities (visual, auditory and motor) were also assessed.
Socio-demographic Data
Socio-demographic characteristics were recorded through a baseline household questionnaire during the study period. Before conducting the survey on the socio-demographic status of parents and children, the questionnaire was pre-tested in a model survey so as to obtain the best responses from the participants. The questionnaires were based on previously available data and followed survey models practiced by established organizations. The questionnaires were also adapted to the current scenario, with special consideration of the Yanadi's lifestyle. All survey questionnaires, the study model and the field survey were designed based on the long-standing experience of the project coordinators in the study area.
Anthropometry Data Collection
Anthropometry data of the Yanadi tribal children were collected by a six-member survey team, which included 3 interviewers, 1 ophthalmic surgeon, 2 data collectors (measurers) and 1 supervisor. All the study team members were well-versed in the study protocol. During the school visits, the team members measured the children's height and weight accurately. A measurer and an assistant worked as a pair to perform two independent height measurements.
The reference values were set by a gold-standard trainer, and each parameter was measured twice for accuracy. If the data collected by a measurer matched the trainer's data, they were considered accurate. After demonstrating accuracy in data collection, the measurers were allowed to collect data in the larger population. All data that met the accuracy and precision requirements were considered.
RESULTS
There are 13 districts in the state of Andhra Pradesh, and tribal communities inhabit all of them. However, in Chittoor, Nellore and the Rayalaseema region, the tribal population is higher than in the rest of the state. Among the different tribal communities within the state, the Yanadi is the most vulnerable community in terms of backwardness on several development indices. Tribal children are a particularly susceptible group owing to the lack of social development, high illiteracy, inadequate food and health security, and the high prevalence of malnutrition due to poverty. It is thus crucial from the public health viewpoint to collect data on health and malnutrition, including anemia prevalence and its associated risk factors, in these vulnerable communities. So, the present study was initiated on children belonging to the Yanadi tribal community and the occurrence of anemia, with a view to asserting better management strategies given the impoverishment of this section of society and its detrimental consequences for their physical health.
DISCUSSION
The study was carried out over one year, from July 2019 to June 2020, by the "Department of Pediatrics, Narayana Medical College and Hospital", which has an MOU with Lincoln University, Malaysia. A total of 384 Yanadi male tribal children living in tribal hostels were involved in the study, which was initiated after proper approvals from the administrations of the State Health Department and the participating schools and hostels. The protocol followed during the study was duly approved by the Institutional Ethical Committee (IEC) of Narayana Medical College and Hospital, and the study was completed by adhering to the guidelines laid out by the committee.
The Yanadi male tribal children studying in the schools of designated region and had been receiving mid-day meal were enrolled for the study. The meal served at the school was in accordance with the government's dietary recommendation for a balanced essential diet for the proper development and growth of children.
Aside from gender and age, this study investigated other probable risk factors linked with IDA among the student participants, such as low-income households, occasional or no breakfast consumption, low intake of red meat, fish, poultry, vegetables, and fruits, and ignorance of anemia and its causes. Inadequate intake of dietary iron, low bioavailability, concurrent inadequate intake of other dietary micronutrients, lack of knowledge of iron deficiency, and poor nutritional status are all possible reasons for the high prevalence of IDA in the studied population.
Adverse effects of anemia on brain development
The effect of nutrition on children's brain development and cognitive performance depends heavily on the precise timing of balanced nutrient intake; infancy and childhood are crucial periods for the development of the brain and of cognitive performance. There is evolving interest in the effect of nutrition on the cognitive performance and cognitive development of children. Though there has been much emphasis on identifying vital nutrients for cognition and the mechanisms by which they might affect the brain, comparatively little thought has been given to the selection of appropriate cognitive outcome measures. Many studies have, however, reported that iron is a vital element playing a crucial role in cognitive performance.
Since nutritional deficiency, especially iron deficiency, has a significant role in children's learning ability, it is essential to assess the impact of iron deficiency on children's cognitive performance. In this study, the impact of nutrition on cognitive and behavioral efficiency was also evaluated, through an interview-based questionnaire, an observational checklist and a rating scale completed with the teachers. The results revealed impaired cognitive assessment scores in 104 (27.15%) of the sampled children. Memory function scores were poor in 34 (33.34%) of these children. The higher-level cognitive function scores were also observed to be very low (46.07%). Behavioral problems were identified in 23 (22.54%) children, with 74.54% having impaired attention functions. Of all the anemic children, 23 (22.45%) had poor basic learning skills in reading, while 12 (11.76%) were assessed to have poor scores in basic learning skills for numeracy. The overall picture reflects that the participating Yanadi children afflicted with anemia lacked cognitive and behavioral skills, pointing toward the adoption of good management policies to address the micronutrient deficiency. The percentage of students who ate breakfast every day was considerably greater among students with exceptional outcomes compared to those who failed or just passed their examinations, indicating that good nutrition promotes mental growth.
Iron and anemia relationship
Iron is a trace element required for a number of cellular metabolic functions, and the body of an adult contains 3-4 g of iron. Because iron is harmful in excess, strict regulation is necessary to avoid iron deficiency or iron overload. Serum iron, serum ferritin and total iron-binding capacity are commonly used tools for the quantitative assessment of body iron stores.
Serum ferritin (SF), serum iron (SI) and total iron-binding capacity (TIBC) levels were analyzed among the study population, and their mean levels were found to be 53.57±31.46, 336±73.70 and 25.76±22.64, respectively, in the age-group of 7 to 10 years. Similarly, in the 11 to 14 years age-group, values of 48.78±27.81, 340±73.46 and 30.13±20.67 were observed for SF, SI and TIBC, respectively. There was no statistically significant difference between the age-groups of Yanadi children. The SF value was, however, observed to be at the higher end for both groups. Thus, in the current study, SF levels were elevated in children from the IDA community. A significantly high blood ferritin level has been linked to inflammatory diseases and may lead to catastrophic effects, including cancer [11].
The disparity in Hb concentration could be explained partly by poor medical conditions and the standard of living in low-income nations. In contrast to most previous investigations, we observed a significantly high SF value (an average of 72.6 ng/ml). Studies in other countries like South Africa and Colombia reported lower values than the current study (25.0 and 41.4 ng/ml, respectively). Increased living standards and health knowledge may also lead to a higher SF status, as seen in China and the United States (NHANES, 2006).
Diet and Anemia relationship
A breakfast that contains both heme and non-heme iron sources, like fat, meat, proteins, fiber, pulses, legumes, grains, fruits, vegetables, minerals and vitamins, especially vitamin C, is required for providing energy and enhancing iron absorption. Another study [12] on Bengali students concluded that anemic students had regular (41%) and irregular (59%) breakfast intake, compared to regular (68.7%) and irregular (31.3%) breakfast intake among non-anemic students. Interesting studies on Bengali students and also on Saudi women have shown that low intake of meat, fruits, or vegetables is linked with IDA.
The high incidence of IDA in our community might be attributed to poverty, which has resulted in inadequate nutrition and treatment [13].
The total energy-rich diet consumed by the study participants in different forms was analyzed (Table 4), and it was found that the total energy intake during the hostel days was considerably lower than the recommended amount. The total energy consumption was observed to be 1287±238.9 kcal per day, whereas the daily recommended level of energy consumption for healthy adolescents is ~1800 kcal/day [14]. Intakes of carbohydrate, protein, fat, other micronutrients, dairy products, and other nutrients in children 7 to 10 years were 178±16.2, 27.2±11.7, 20.5±5.2, 3.2±1.7, 13.5±5.8 and 1.2±0.5, respectively, and the pattern was nearly similar in the elder age-group. The amount of carbohydrate intake in the two groups was observed to be nearly equal. However, there was a striking shortfall in total energy intake in both groups relative to what it ought to be if they followed the recommended diet.
Micronutrient deficiencies have long been a significant health-care issue in India. Nonetheless, significant changes in the region's demographic, economic, political, and social settings have influenced diet, nutrition, and health issues during the previous three decades. Many tribal groups and rural inhabitants are in the midst of a nutrition transition in which malnutrition coexists with non-communicable illnesses linked with various kinds of malnutrition. Micronutrient deficits and/or inadequacies, according to the WHO, have worsened the rising public health-care issue posed by non-communicable illnesses, which account for 47 percent of morbidity and 52 percent of total deaths.
Children had an increased risk of death and a high prevalence of anemia, stunting, and wasting as compared to non-Adivasis for the duration of 2019-2020 (http://rchiips.org/nfhs/NFHS-5Report_AP.shtml). These students may have iron deficiency, which might be due to the low bioavailability of iron in the Indian diet. Proper nutrition during infancy and adolescence is critical for developing health. Several studies have linked IDA to dietary changes [15]. Table 5 presents the nutrient intake data of the study participants as per the nutrient adequacy ratio (NAR). The results revealed that the majority of study participants had an inadequate or only fairly adequate (≥ 0.66) NAR in terms of protein (57.03%), calcium (64.58%), thiamin (75.78%), riboflavin (72.13%), niacin (72.91%), vitamin C (69.01%), vitamin A (6.9%) and folic acid (71.87%). The proportion of study participants with an adequate level of nutrient intake was very low. This may be due to the unavailability of a balanced diet, or because the daily diet does not contain the recommended daily levels of macro- as well as micro-nutrients. Many studies suggest that tribal communities have a lower intake of micronutrients because of poverty and lower economic status. The present study shows that the dietary intake of the Yanadi children was very low; only a small percentage of children had an adequate diet in terms of energy. Energy and protein intakes were also very low among below-poverty-line girls in Rajasthan, and another report found calorie intake below the recommended daily allowance (RDA) in adolescents of lower socio-economic status [16].
The food intake was very low in the current study: the intakes of pulses, milk and milk products were low in the study group, which may be a cause of the energy and protein deficit. The intake of fruits and vegetables was also very low among the study participants, which could be the reason for the micronutrient deficiency.
On the whole, the Yanadi children were taking a diet that did not contain all the micronutrients and other essential supplements, and the nutrient gap was wide in the study population. This status was comparable with many studies done on tribal communities, reflecting the similarly low socio-economic status of many tribal communities in India.
In this study, further analysis of the food items consumed by the participating Yanadi tribal children was undertaken to know the sources of nutrition, as depicted in Table 6. The nutrient and food intake survey was conducted in the residential area of the study groups. The results showed that the average consumption of cereals and millets was comparatively low in both age-groups: the daily intake of cereals and millets was 245±46 g/d in the 7 to 10 years boys and 428±20.7 g/d in the 11 to 14 years boys, whereas the intake of pulses was 25±10.6 g/d and 30.7±2.5 g/d in the 7 to 10 and 11 to 14 years age-groups, respectively.
Intake of quality foods such as tubers, nuts and oils was lower among the 7 to 10 years age-group participants, at 34±16.7, 47±13 and 6±2 g/d respectively, and the average intake of green leafy vegetables (26±7 g/d) was also slightly low in the 7 to 10 years boys. Likewise, in the 11 to 14 years age-group, the mean intakes were tubers 66.03±5.8 g/d, nuts and oils 28.02±77.5 g/d, and green leafy vegetables 38.5±2.6 g/d.
Overall, 27.86% (n=107/384) of the children were found to be anemic. When analyzed following the WHO grading of anemia, 11 participants (2.86%) were found to have moderate anemia, while 89 (23.18%) had mild anemia and 7 (1.82%) had severe anemia in the overall cohort. The prevalence of anemia was found to be highest among children in the 7 to 10 years age-group, at about 30.39% (31/102), compared to 26.95% (76/282) among the older children in the 11 to 14 years age-group. The present study found that the anemia prevalence in Yanadi tribal children was 27.86%; however, similar studies from other regions of India have shown an overall higher prevalence in school-going tribal children.
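A sketch of the mild/moderate/severe classification referred to above, using the commonly cited WHO (2011) haemoglobin cutoffs for children (in g/dL); these thresholds are an assumption about which version of the WHO grading the authors applied and should be checked against the guideline they cite.

```python
# WHO-style anemia grading from haemoglobin (g/dL) and age, per the commonly
# cited 2011 cutoffs for children aged 5-11 and 12-14 years (assumed here).
def who_anemia_grade(hb_g_dl, age_years):
    not_anemic = 11.5 if age_years < 12 else 12.0
    if hb_g_dl >= not_anemic:
        return "not anemic"
    if hb_g_dl >= 11.0:
        return "mild"
    if hb_g_dl >= 8.0:
        return "moderate"
    return "severe"

print(who_anemia_grade(11.2, 9))   # mild
print(who_anemia_grade(9.5, 13))   # moderate
print(who_anemia_grade(7.4, 8))    # severe
```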
The study cohort was small and the study was performed at a single site, while the Yanadi tribal group is distributed across other districts of the state of Andhra Pradesh. The figures obtained might change appreciably if the study were expanded to other regions of Andhra Pradesh inhabited by the Yanadi tribal community.
CONCLUSION
The current study is unique in determining the prevalence of anemia among school-going Yanadi tribal children in one locality of Andhra Pradesh state. The findings are strictly limited to one sub-tribe of the tribal population and encompass only school-going tribal children. Additionally, the study included only male children and might have missed children who are school dropouts. Despite these caveats, the study provides a good initial picture of the anemic condition in the tribal children and its repercussions for their physical and mental health. Taking it as a baseline, similar studies need to be undertaken in the entire Yanadi tribal population across all age-groups of Yanadi tribal children. Expanding the study to a larger cohort, coupled with molecular-genetic analysis, would elucidate the gravity of the problem and provide better insights for formulating screening and preventive measures against iron-deficiency anemia in tribal children, including the Yanadi.
Acknowledgement
The authors acknowledge the staff of the concerned departments for assisting with the collection and analysis of the data. | 2023-07-12T07:57:50.594Z | 2023-03-31T00:00:00.000 | {
"year": 2023,
"sha1": "0057489db1c06db7c54566c400b61aa5cfefb1b5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.37897/rjp.2023.1.7",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eaa6daee8ca94a8a9d68a58b6c1f18fccc25401d",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
211532749 | pes2o/s2orc | v3-fos-license | Improving cross-lingual model transfer by chunking
We present a shallow-parser-guided cross-lingual model transfer approach in order to address the syntactic differences between source and target languages more effectively. In this work, we take the chunks or phrases in a sentence as transfer units, in order to separately address the syntactic differences between the source and target languages arising from the ordering of words within phrases and from the ordering of phrases in a sentence.
Introduction
Model transfer approaches for cross-lingual dependency parsing involve training a parser model using a treebank of a language (source language) and using it to parse sentences of another language (target language). This technique may be used to develop parsers for languages for which no treebank is available.
The performance of cross-lingual parser models often tends to suffer due to the syntactic differences between the source and the target languages (Zeman and Resnik, 2008; Søgaard, 2011; Naseem et al., 2012). Thus, a major challenge in transfer parsing is to bridge the gap between the source and target language. For example, adjectives appear before the corresponding nouns in English and Hindi, while in Spanish and Arabic adjectives appear after the nouns. Several approaches have been proposed to address such syntactic differences. These include training a parser model using a selected subset of source language parse trees that are syntactically close to the target language (Søgaard, 2011; Wang and Eisner, 2016), transformation of the source language treebank to match the syntax of the target language (Aufrant et al., 2016; Das and Sarkar, 2019b; Wang and Eisner, 2018), target-language independent perturbation (Das and Sarkar, 2019a), training a word-order-insensitive parser model (Ahmad et al., 2019), or imposing target-language syntax based constraints while running MST on the edge-score matrix of a graph-based parser to obtain the target language parse tree (Meng et al., 2019).
The syntax of a language may be classified into two categories: first, the syntax of the words within the chunks or phrases (intra-chunk syntax), and second, the orientation of the chunks in a sentence (inter-chunk syntax). Consider the following English sentence. EN: (The US) (lost) (yet another helicopter) (to hostile fire). The word groups enclosed by brackets indicate separate chunks or phrases, and US, helicopter and fire are the head words of the chunks the US, yet another helicopter and to hostile fire, respectively. In this example, the intra-chunk syntax corresponds to the relative ordering of the determiners, adpositions, adjectival modifiers, auxiliaries etc. with respect to the head words in a phrase, whereas the inter-chunk syntax corresponds to the relative ordering of the chunks in the sentence.
Given a source-target language pair, the syntactic differences may be in the ordering of the words within a phrase, in the orientation of the phrases in a sentence, or both. For example, adpositions appear before the corresponding nouns in English while they appear after the corresponding nouns in Hindi. These differences are local to the phrases. Similarly, languages also differ in the orientation of the phrases in sentences. For example, English, French, Spanish etc. follow SVO ordering; Japanese, Urdu and Hindi typically follow SOV ordering; while Arabic and Irish predominantly follow VSO ordering.
Consider the English sentence and its Hindi translation. EN: "(He) (teaches) (the children)" HI: "(va) (bachchOM kO) (paDhAtA hEi)" EN-gloss: "(He) (children to) (teaches is)". Here, the phrase "the children" maps to the Hindi phrase bachchOM kO (children to), and the English phrase "teaches" maps to the Hindi phrase "paDhAtA hEi" (teaches is). We observe that the phrases have the following differences. The definite article is absent in Hindi. In Hindi, the postposition kO is associated with the word bachchOM (children), while no adposition is associated with the word children in the corresponding English phrase. In the Hindi verb phrase, paDhAtA (teaches) is followed by the copula verb hEi (is). Furthermore, the English sentence follows SVO ordering of phrases while the Hindi sentence is verb-final.
Both the intra-phrase and inter-phrase differences affect the performance of transfer parsers. Thus, in order to simplify the transfer process, we address these two kinds of differences separately. We propose to carry out chunk-information-guided cross-lingual model transfer for dependency parsing, where we treat the chunks as transfer units instead of the words. To this end, we train a source language parser model using the chunks as units. Given a target language sentence, the source language parser model is used to parse the target language chunks, followed by the expansion of the target language chunks to obtain the complete trees.
We propose to use chunk information in transfer parsing because a chunker (shallow parser) may be trained using a smaller amount of data than a full syntactic parser. Annotating data for a chunker is also much simpler than for a parser. Chunkers may also be rule-based, in which case their development does not require any annotated data.
Related work
Chunking (shallow parsing) has been used successfully to develop good quality parsers for the Hindi language (Bharati et al., 2009; Chatterji et al., 2012). Bharati et al. (2009) proposed a two-stage constraint-based approach where they first tried to extract the intra-chunk dependencies and then resolve the inter-chunk dependencies in the second stage. They also showed the effect of hard and soft constraints in building an efficient Hindi parser that outperforms data-driven parsers. Ambati et al. (2010) used disjoint sets of dependency relations and performed the intra-chunk parsing and inter-chunk parsing separately. Chatterji et al. (2012) proposed a three-stage approach where rule-based inter-chunk parsing followed data-driven inter-chunk parsing.
A project for building multi-representational and multi-layered treebanks for Hindi and Urdu (Bhatt et al., 2009) 1 was carried out as a joint effort by IIIT Hyderabad, the University of Colorado and the University of Washington. Besides the syntactic version of the treebank developed by IIIT Hyderabad (Ambati et al., 2011), the University of Colorado has built the Hindi-Urdu proposition bank (Vaidya et al., 2014), and a phrase-structure form of the treebank (Bhatt and Xia, 2012) is being developed at the University of Washington. A part of the Hindi dependency treebank 2 has been released in which the inter-chunk dependency relations (dependency links between chunk heads) have been manually tagged and the chunks were expanded automatically using an arc-eager algorithm. Some of the major works on parsing in the Bengali language appeared in ICON 2009 (http://www.icon2009.in/).
Chunking
Chunking involves identification of the different phrases in a sentence and identification of a chunk head, or main word, in a given chunk. A chunker may be rule-based or data-driven. In a rule-based chunker, a set of pre-defined rules is used to identify the chunks and the corresponding heads. In a data-driven chunker, on the other hand, the task of chunking is usually posed as a sequence labelling task, and a machine-learning-based algorithm is trained for chunk identification.
Rule-based approaches, however, are usually used for chunk-head identification.
Chunk identification
In this work, we address the problem of chunk identification as a sequence labelling task, where we mark each chunk using BI labelling; e.g., for the sentence from Section 1, (The US) (lost) (yet another helicopter) (to hostile fire), the label sequence is B-NP I-NP B-VP B-NP I-NP I-NP B-NP I-NP I-NP. The chunk type is determined based on the PoS tag of the chunk-head word; e.g., if the chunk head word is a noun, pronoun or proper noun, then the chunk is assigned the NP chunk type. The beginning of a chunk is not necessarily the chunk head. In Table 1 we present the chunk annotation of an example sentence. The subscripts in the last column indicate the chunk number in the sentence.
Table 1: Chunk annotation of an example sentence (columns: Word, PoS, chunk label).
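To make the BI scheme concrete, the following minimal sketch (the helper name and the chunk input format are ours, not the paper's) converts chunk-segmented input into the B-/I- label sequence the chunker is trained to predict:

```python
# Hypothetical helper: converts chunk spans into BI labels, as in Table 1.
def to_bi_labels(chunks):
    """chunks: list of (chunk_type, [words]) tuples, e.g. ("NP", ["The", "white", "cat"])."""
    labels = []
    for chunk_type, words in chunks:
        labels.append(f"B-{chunk_type}")                      # first word begins the chunk
        labels.extend(f"I-{chunk_type}" for _ in words[1:])   # remaining words are inside it
    return labels

# "(The white cat) (ate) (a little mouse)" -> B-NP I-NP I-NP B-VP B-NP I-NP I-NP
print(to_bi_labels([("NP", ["The", "white", "cat"]),
                    ("VP", ["ate"]),
                    ("NP", ["a", "little", "mouse"])]))
```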
Chunker model
We used a BiLSTM-CRF neural model to train the chunker. A 2-layer bidirectional LSTM takes the embeddings of the PoS tags of the words in a sentence as input and encodes them in its internal states. We used the hidden states of the final layers of the forward and backward LSTMs as the distributed representations of the corresponding words. These word representations were fed to a CRF for chunk-label prediction.
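A minimal sketch of this backbone in PyTorch is shown below; the hyperparameters are illustrative and, for brevity, a plain linear layer stands in for the CRF that consumes the per-token emission scores:

```python
import torch
import torch.nn as nn

class BiLSTMChunkEncoder(nn.Module):
    """Sketch of the chunker backbone: PoS-tag embeddings -> 2-layer BiLSTM ->
    per-token emission scores. In the paper these emissions feed a CRF for
    chunk-label prediction; dimensions here are illustrative."""
    def __init__(self, n_pos_tags, n_chunk_labels, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_pos_tags, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden_dim, n_chunk_labels)

    def forward(self, pos_ids):                   # pos_ids: (batch, seq_len) tag indices
        h, _ = self.lstm(self.embed(pos_ids))     # (batch, seq_len, 2*hidden_dim)
        return self.emit(h)                       # emission scores per token

enc = BiLSTMChunkEncoder(n_pos_tags=18, n_chunk_labels=5)
scores = enc(torch.randint(0, 18, (1, 7)))        # one 7-token sentence
print(scores.shape)                               # torch.Size([1, 7, 5])
```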
Chunk head identification
We used a rule-based approach to predict the chunk head of a given chunk. Based on the category of the chunk, we designed a set of rules for predicting the most probable head. The rule set varies slightly across languages.
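The following sketch illustrates the flavor of such rules; the specific conditions shown (e.g., the rightmost noun heads an NP) are illustrative and not the paper's exact rule set:

```python
# Illustrative head rules (the paper's actual rules are language-specific and not given here).
def chunk_head(chunk_type, words, pos_tags):
    """Pick the most probable head word of a chunk from its PoS tags."""
    if chunk_type == "NP":
        # e.g. the rightmost noun/proper noun/pronoun heads a noun chunk
        for w, p in reversed(list(zip(words, pos_tags))):
            if p in {"NOUN", "PROPN", "PRON"}:
                return w
    if chunk_type == "VP":
        # e.g. the rightmost main verb heads a verb chunk
        for w, p in reversed(list(zip(words, pos_tags))):
            if p == "VERB":
                return w
    return words[-1]  # fall back to the last word of the chunk

print(chunk_head("NP", ["The", "white", "cat"], ["DET", "ADJ", "NOUN"]))  # cat
```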
Chunking-based cross-lingual model transfer
In this section, we present our approach to shallow-parser-guided cross-lingual transfer parsing, in which the transfer is carried out at the chunk level instead of the word level. Figure 1 shows a schematic diagram of the steps of our chunk-level model transfer. The method requires training a source language parser model that uses chunks as units, and a shallow parser in the target language.
Training a chunk-level source language parser model involves deriving the chunk-level parse trees and training the parser model on them. For the source language, the chunk-level parse trees are derived from the chunk annotation of the training data: the sub-tree corresponding to each chunk is collapsed and replaced by a chunk representation. The chunk-level parse trees so obtained are used to train the parser model.
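A minimal sketch of the collapsing step, under an assumed head-index representation of trees (not the paper's code):

```python
# heads[i] is the parent word index of word i (-1 for root); chunk_of[i] is word i's
# chunk id; chunk_head[c] is the index of the head word of chunk c.
def collapse_to_chunk_tree(heads, chunk_of, chunk_head):
    chunk_parent = {}
    for c, h in chunk_head.items():
        p = heads[h]                      # parent word of this chunk's head
        chunk_parent[c] = -1 if p == -1 else chunk_of[p]
    return chunk_parent                   # inter-chunk tree over chunk ids

# "The white cat ate a little mouse": cat<-ate and mouse<-ate, by word index
heads    = [2, 2, 3, -1, 6, 6, 3]
chunk_of = [0, 0, 0, 1, 2, 2, 2]
print(collapse_to_chunk_tree(heads, chunk_of, {0: 2, 1: 3, 2: 6}))
# {0: 1, 1: -1, 2: 1} -> both NP chunks attach to the VP chunk, which is the root
```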
Given a target language sentence, the shallow parser is used to identify its chunks. For the target language, the chunk representations are obtained by simply replacing the words in each chunk by a single representation. The resulting sequence of chunk representations is parsed using the chunk-level source language parser model. Finally, the target language chunks are expanded to obtain the full target language parse tree.
We elaborate on the steps for training a chunk-level transfer parser and for parsing target language sentences with the model in Sections 4.1 and 4.2 below.
Training a chunk-level parser model
The steps for training a chunk-level transfer parser are as follows.
Obtaining the chunk-level source language treebank
The chunk-level source language parse trees are derived from the parse trees in the source language treebank by collapsing the chunks and replacing them by their representations. Here we represent a chunk by its chunk head. In the example English sentence "(The white cat)_NP (ate)_VP (a little mouse)_NP", the chunks The white cat, ate and a little mouse are collapsed and represented by their chunk heads cat, ate and mouse, respectively. In the parse tree, the relations involving intra-chunk words are also removed. The final tree consists of the chunk representations, and the relations among them correspond to the relations among the chunk heads, as shown in the diagram.
Training the chunk-level parser model
The chunk-level parse trees derived from the source language treebank in the above step are then used to train the parser model.
Chunk-level parsing followed by chunk expansion
The steps for generating the parse tree for a given target language sentence are as follows.
Chunking a target language sentence
A target language chunker is used to identify the chunks in a target language sentence, and the head of each chunk is identified using the rule-based technique discussed in Section 3. Assuming French to be the target language, consider the example sentence FR: "Le chat blanc a mangé une petite souris". The chunker identifies the chunks as "(Le chat blanc)_NP (a mangé)_VP (une petite souris)_NP". The rule-based chunk-head identifier is then used to identify the chunk heads: the heads of the chunks Le chat blanc, a mangé and une petite souris are chat, mangé and souris, respectively.
Parsing a target language chunk sequence
The target language chunk-head sequence obtained above is parsed using the parser model trained on the source language chunk-head parse trees, yielding the chunk-head parse tree shown in the diagram.
Chunk expansion
The chunk-head parse tree so obtained is then expanded to obtain the parse tree of the target language sentence. To this end, we expand each chunk in the chunk-head parse tree by attaching each non-chunk-head word in the chunk to its chunk head with a modifier-head relation, without changing the relative order of words in the sentence. In this relation, the chunk head is the head and the non-chunk-head word is the modifier. In the example above, the chunk represented by chat is expanded by attaching the words Le and blanc to the chunk head chat to obtain the parse of the chunk.
As of now, we do not associate any dependency relation label with the intra-chunk relations.
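A minimal sketch of the expansion step, using the same assumed head-index representation as the collapsing sketch above; it is the exact inverse of the collapse on the running example:

```python
# Every non-head word attaches to its chunk head; inter-chunk arcs come from the
# parsed chunk-level tree (chunk_parent), as produced by the parser.
def expand_chunks(chunk_parent, chunk_of, chunk_head):
    n_words = len(chunk_of)
    heads = [0] * n_words
    for i in range(n_words):
        c = chunk_of[i]
        if i == chunk_head[c]:            # chunk head: attach to the head of the parent chunk
            p = chunk_parent[c]
            heads[i] = -1 if p == -1 else chunk_head[p]
        else:                             # non-head word: modifier of its own chunk head
            heads[i] = chunk_head[c]
    return heads

# Inverse of the collapse example above: recovers the full word-level tree
print(expand_chunks({0: 1, 1: -1, 2: 1}, [0, 0, 0, 1, 2, 2, 2], {0: 2, 1: 3, 2: 6}))
# [2, 2, 3, -1, 6, 6, 3]
```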
Data and parser model
Data
We carried out our experiments with English (en) and Hindi (hi) as source languages, and English, French (fr), German (de), Indonesian (id), Hebrew (he), Arabic (ar), Korean (ko) and Hindi (hi) as target languages. We used the UD v2.0 treebanks for our experiments.
Data for training chunkers
We trained our chunker using the gold annotations obtained from the UD 2.0 treebanks of the languages.
We classified the UD dependency relations into two groups: intra-chunk and inter-chunk. Our set of intra-chunk dependency relations comprises the aux, appos, nummod, det, case, fixed, flat, compound, amod, advmod and goeswith relations. Words attached by the other dependency relations, such as nsubj, obj, iobj, root, obl, comp, cc and conj, were considered chunk heads, and their relations with their parents were considered inter-chunk relations. For amod and advmod, we considered the dependents as intra-chunk only selectively: amod dependents whose parents are nouns, adjectives or adverbs, and advmod dependents whose parents are verbs, adverbs or adjectives, were considered intra-chunk. In a dependency parse tree, a chunk-head word together with all dependents attached to it by intra-chunk relations constitutes a chunk.
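The following sketch illustrates this relation split; it simplifies the amod/advmod conditions to a single parent-PoS check and is not the paper's exact implementation:

```python
# Sketch of the relation split used to derive chunks from UD trees.
INTRA = {"aux", "appos", "nummod", "det", "case", "fixed",
         "flat", "compound", "goeswith"}

def is_intra_chunk(deprel, parent_upos):
    if deprel in INTRA:
        return True
    if deprel == "amod" and parent_upos in {"NOUN", "ADJ", "ADV"}:
        return True
    if deprel == "advmod" and parent_upos in {"VERB", "ADV", "ADJ"}:
        return True
    return False                  # everything else is an inter-chunk relation

print(is_intra_chunk("det", "NOUN"))    # True  -> word joins its parent's chunk
print(is_intra_chunk("nsubj", "VERB"))  # False -> word is a chunk head
```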
Parser data
The chunk-level parse trees were obtained by removing the sub-trees corresponding to the nodes having intra-chunk relations with their parents. Removing these sub-trees leaves skeleton trees in which every word is a chunk head related to its parent via an inter-chunk relation. Thus, in each chunk-level tree, each chunk is represented by its chunk head. We trained the chunk-level parser model on the chunk-level trees derived from the training partition of the source language treebanks.
Parser model
For our experiments, we trained a transition-based encoder-decoder parser model that uses a bidirectional LSTM as the encoder and an attention-based decoder with stack-pointers (Ma et al., 2018).
Experiments and results
In this section, we discuss the experiments and results in detail.
Chunk labelling and chunk head identification
We experimented with different training set sizes for the chunkers. The second column of Table 2 reports the average performance of the chunkers over the 9 languages for the different training set sizes. We observe that accuracy increases with training set size and stabilizes beyond a training set of 500 sentences.
In Table 3 we present the chunk-head identification accuracy for the different languages. We observe that, although we used a very simple rule set for chunk-head identification, we achieved high accuracies.
Baseline
We compare the performance of the chunk-level transfer models against the corresponding word-level transfer parser models as the baseline. For both word-level and chunk-level transfer parsing, we adopted delexicalized transfer parser models.
Chunk-level transfer parser
We experimented with both predicted and gold annotations of the test data.
• For predicted chunk annotation, the chunker models trained on 500 sentences were used to automatically label the test data, and the chunk heads were identified using the rule-based method discussed above.
• For the gold annotation, we directly used the gold chunk annotation of the test data.
Evaluation metric: We report the results of our experiments in terms of unlabelled attachment score (UAS) and labelled attachment score (LAS).
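For reference, a minimal sketch of how the two scores are computed on a single tree, under an assumed (head, label) per-word representation:

```python
# UAS: fraction of words with the correct head; LAS: correct head AND label.
def uas_las(gold, pred):
    correct_head = sum(g[0] == p[0] for g, p in zip(gold, pred))
    correct_both = sum(g == p for g, p in zip(gold, pred))
    n = len(gold)
    return correct_head / n, correct_both / n  # (UAS, LAS)

gold = [(2, "det"), (2, "amod"), (3, "nsubj"), (-1, "root")]
pred = [(2, "det"), (3, "amod"), (3, "nsubj"), (-1, "root")]
print(uas_las(gold, pred))  # (0.75, 0.75)
```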
English as source language
Here we discuss the performance of the chunk-level transfer parser approach with English as source language.
The third column of Table 2 shows how the average UAS over the 9 target languages varies with the training set size of the chunkers used to predict the chunks. We observe that beyond a training set size of 50 sentences, the chunk-level transfer parser outperforms the word-level transfer on average. The performance stabilizes at a chunker training size of about 500 sentences. In the following discussion with English as source language, we therefore report the results for chunkers trained on 500 instances.
In Table 4 we compare the performance of our chunk-level cross-lingual transfer parser model with the baseline. Since we did not assign any relation type to the intra-chunk head-dependent relations, we report only UAS scores for the full trees. The table lists results for both predicted chunks and gold chunks; our primary comparison is between the baseline and transfer parsing with predicted chunks. Bold entries indicate the higher of the UAS values. Results with gold chunks are reported for reference, to show the improvement obtainable when gold chunk annotation is available; we underlined the entries corresponding to the gold-chunk results. In Table 5 we compare the performance on the inter-chunk relations only; here we report both UAS and LAS.
We observe that, averaged across the 9 target languages, our two-stage chunk-level transfer parser performs better than the baseline, and it outperforms the baseline on 5 of the 9 individual target languages. We also observe that the advantage of our approach grows with the syntactic distance between the source and target languages. With gold chunk information, the chunk-level transfer parser beats the baseline on 8 of 9 target languages in terms of UAS and on 7 languages in terms of LAS.
Hindi as source language
We repeated our experiments with Hindi as the source language and the same set of target languages as above. The fourth column of Table 2 shows how the average UAS over the 9 target languages varies with the chunker training set size. Performance starts improving beyond a chunker training set size of 100 sentences, and the highest accuracy is achieved at 500 trees. Hence, in the following discussion we report the results for chunkers trained on 500 instances.
In Table 6 we compare the performance of our chunk-level transfer parser with the baseline on full trees, and in Table 7 we report the results for the inter-chunk relations.
From Table 6 we observe that, on full trees, chunk-level transfer followed by chunk expansion with predicted chunk information outperforms direct transfer with Hindi as the source language for 7 of 9 target languages, as well as in average performance over all target languages.
From Table 7 we observe that, with Hindi as the source language, the average performance of chunk-level transfer with predicted chunk information is slightly worse than that of the baseline in terms of both average UAS and LAS. However, it outperforms the baseline on 5 of the 9 target languages.
Conclusion
In this chapter, we presented an approach to cross-lingual transfer parsing that reduces the error due to syntactic differences between the source and target languages by addressing the intra-phrase and inter-phrase syntactic differences separately, provided chunkers are available for the two languages.

Table 7: Comparison of the performance of the chunk-level transfer parser with the baseline transfer model on the inter-chunk relations only, with Hindi as source language. The chunks were predicted using a chunker trained on 500 sentences.
"year": 2020,
"sha1": "038b96e6f7d99401e2de70b041d44c2a47a52216",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "038b96e6f7d99401e2de70b041d44c2a47a52216",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Capmatinib is an effective treatment for MET-fusion driven pediatric high-grade glioma and synergizes with radiotherapy
Background: Pediatric-type diffuse high-grade glioma (pHGG) is the most frequent malignant brain tumor in children and can be subclassified into multiple entities. Fusion genes activating the MET receptor tyrosine kinase often occur in infant-type hemispheric glioma (IHG) but also in other pHGG and are associated with devastating morbidity and mortality.
Methods: To identify new treatment options, we established and characterized two novel orthotopic mouse models harboring distinct MET fusions. These included an immunocompetent, murine allograft model and patient-derived orthotopic xenografts (PDOX) from a MET-fusion IHG patient who failed conventional therapy and targeted therapy with cabozantinib. With these models, we analyzed the efficacy and pharmacokinetic properties of three MET inhibitors, capmatinib, crizotinib and cabozantinib, alone or combined with radiotherapy.
Results: Capmatinib showed superior brain pharmacokinetic properties and greater in vitro and in vivo efficacy than cabozantinib or crizotinib in both models. The PDOX models recapitulated the poor efficacy of cabozantinib experienced by the patient. In contrast, capmatinib extended survival and induced long-term progression-free survival when combined with radiotherapy in two complementary mouse models. Capmatinib treatment increased radiation-induced DNA double-strand breaks and delayed their repair.
Conclusions: We comprehensively investigated the combination of MET inhibition and radiotherapy as a novel treatment option for MET-driven pHGG. Our seminal preclinical data package includes pharmacokinetic characterization, recapitulation of clinical outcomes, coinciding results from multiple complementary in vivo studies, and insights into the molecular mechanism underlying the increased efficacy. Taken together, we demonstrate the groundbreaking efficacy of capmatinib and radiation as a highly promising concept for future clinical trials.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12943-024-02027-6.
Background
Brain tumors are the leading cause of cancer-related death in children, with pediatric-type diffuse high-grade gliomas (pHGG) being one of the most aggressive tumor families [1]. Patients suffering from pHGG are typically treated with tumor resection followed by chemotherapy and/or radiation (based on age at diagnosis). This therapy is rarely curative and results in a 5-year survival rate of only ~20% [2]. Oncogenic fusions with receptor tyrosine kinase (RTK) genes NTRK, ALK, ROS or MET drive a subgroup of pHGG in infants (IHG, infant-type hemispheric glioma) [3-6]. IHG has better survival than other pHGG [3,4], but poses a significant therapeutic challenge and is associated with devastating long-term sequelae [5]. In pHGG patients >3 years old, MET fusions occur in up to 12% of cases [6-8], and have also been identified in up to 15% of secondary glioblastomas in adults [9]. Recent advances yielded remarkable responses of NTRK or ALK fusion pHGG to selective inhibitors [10,11], especially in IHG, but there is currently no effective selective therapy demonstrated for MET fusion-positive glioma.
A plethora of studies have explored new treatment options for pHGG, with solely discouraging outcomes [12]. Although novel small-molecule inhibitors frequently show promising initial responses, a decade of experience has shown that monotherapy of pHGG inevitably results in therapy-resistant relapses [13]. The first FDA-approved inhibitor to target MET was crizotinib (Xalkori®). In the context of brain tumors, crizotinib displayed initial efficacy in a patient with pHGG [7], unfortunately followed by rapid progression. Capmatinib, another highly specific MET inhibitor, has shown promising intracranial activity [14,15]. However, capmatinib has not been investigated as a treatment option against pHGG so far.
Given the limitations of monotherapies, multiple studies have investigated radiosensitization of tumor cells through RTK inhibition [16]. These included MET inhibitors, whose radiosensitizing effects were reportedly mediated by downregulation of DNA repair genes including ATM and/or by anti-apoptotic factors [17-19]. However, the effect seems to be model-, tumor- and inhibitor-dependent [20]. So far, MET inhibition-mediated radiosensitization has not been explored in the context of pediatric brain tumors.
Methods
All methods and materials are described in the Supplementary Methods (Additional File 1).
Clinical presentation
We analyzed MRI scans from MET fusion IHG patients enrolled on the SJYC07 clinical trial (NCT00602667) [5] or standard institutional protocols, which illustrated typical challenges for IHG surgery. The tumors are often very large, vascular and hemorrhagic, and associated with intraoperative bleeding, difficulties achieving gross total resections, and high morbidity (Fig. 1a-d). Fusion events between CLIP2 and MET have been observed in IHG and pHGG before [3,7], whereas, to our knowledge, we are the first to identify NPM1 and HIP1 as alternative fusion partners of MET. Our institutional experience thus confirmed the significant clinical challenges for MET fusion pHGG patients and the need for novel therapeutic concepts.
A novel, immunocompetent mouse model for MET-driven pHGG
To initially develop a genetically defined model of the disease, we performed in utero electroporation to stably induce expression of the HA-tagged, human TFG-MET fusion gene as well as a CRISPR-mediated knockout of Trp53 in the forebrain of E14.5 mice (Fig. 2a). We chose TFG-MET because it is the smallest identified fusion in both IHG and pHGG [3,7] (Supplementary Fig. 1a; Additional File 2), fostering efficient somatic gene delivery. All electroporated mice developed tumors that stained positive for the HA-tag, pMET and pErk, validating the delivered fusion gene as an oncogenic driver (Fig. 2b,c). Murine tumors (Fig. 2b) showed similar histopathology to human, MET-fusion driven HGG (Fig. 2d), including characteristically round and relatively monotonous morphology as well as cytoplasmic clearing (Fig. 2b,d, higher magnification boxes, Supplementary Fig. 1b; Additional File 2). Our electroporation model was robust and highly aggressive, with 9/9 mice developing neurologic symptoms by day 33 after birth (Fig. 2e). We showed that the Trp53 knockout was efficient, inducing a 95 base pair deletion in all analyzed clones (Fig. 2f, n=6), thereby recapitulating the loss of TP53 function that is frequently observed in patients with MET-activated pHGG [7]. The results highlight a novel mouse model with short latency and full penetrance that reflects the histopathology of the human counterpart.
Capmatinib demonstrates a favorable PK profile in mice compared to crizotinib
To evaluate the brain exposure of crizotinib and capmatinib, we analyzed their PK profiles in CD1 nude mice. Capmatinib was rapidly absorbed and cleared from both brain and plasma, with concentrations below the detection limit at 16 hours post-dose (Fig. 2g,h and Supplementary Table 1; Additional File 3), while crizotinib equilibrated slowly, reaching Cmax at 4 hours post-dose. Both drugs, however, reached physiologically relevant concentrations of >1 µM in brain tissue. Previous studies showed that unbound drug concentrations predict target inhibition more robustly than total amounts [21]. Therefore, we performed in vitro protein binding assays with standard mouse plasma and naive brain tissue homogenates, finding that capmatinib has an appreciably higher fraction unbound in brain homogenate (Fu,b) than crizotinib (Table 1). We then used the respective Fu,b values to estimate unbound concentrations (Fig. 2h) and found that capmatinib reaches a 9.6-times higher maximal concentration of unbound drug in the brain than crizotinib (~103 nM vs ~11 nM).
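The unbound-concentration estimate is a simple product of the total concentration and the fraction unbound; a minimal sketch with illustrative numbers in the range reported here (the exact Fu,b values are in Table 1):

```python
# C_unbound = Fu,b * C_total; the Fu,b and total-concentration values below are
# illustrative assumptions, not the measured values from Table 1.
def unbound(total_nM, fu):
    return total_nM * fu

# A total brain Cmax > 1 uM can still yield very different free-drug exposure
# when one drug is much more highly tissue-bound than the other:
for name, total_nM, fu in [("capmatinib", 2000, 0.05), ("crizotinib", 2200, 0.005)]:
    print(f"{name}: ~{unbound(total_nM, fu):.0f} nM unbound in brain")
```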
Capmatinib efficiently inhibits TFG-MET in vitro and in vivo
We cultured tumor cells from our electroporation model in vitro and analyzed the impact of crizotinib or capmatinib treatment on phosphorylation of MET and downstream effectors (Fig. 2a). To investigate the intracellular response at relevant in vivo concentrations, we challenged the cells with Cmax equivalents based on the identified free drug concentrations in the murine brain (0.02 µM of crizotinib and 0.15 µM of capmatinib; Fig. 2h). For both compounds, 1 µM was used as a positive control. Capmatinib readily inhibited the phosphorylation of TFG-MET and the downstream targets Erk and Akt at both tested concentrations, while the Cmax-equivalent dose of crizotinib displayed minimal effect (Fig. 3a and Supplementary Fig. 1c; Additional File 2). A dose-response assay similarly revealed an in vitro potency of capmatinib >10 times higher than that of crizotinib (Fig. 3b and Supplementary Table 2; Additional File 4). Of note, capmatinib readily inhibited MET, Erk and Akt phosphorylation at the observed in vitro IC50 concentration of only 9 nM (Supplementary Fig. 1d,e; Additional File 2), further emphasizing its potency. Next, we combined both compounds with RT and observed increased anti-tumoral efficacy compared to the single treatments (Supplementary Fig. 1f,g; Additional File 2). Both combinations have an additive effect with a trend towards synergy according to the ZIP synergy model [22], with some clearly synergistic dose ranges (Supplementary Fig. 1h; Additional File 2). The capmatinib synergy score peaked at ~100 nM, an achievable free drug concentration in the murine brain (Fig. 2h).
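For orientation, a simplified sketch of a ZIP-style delta score: the zero-interaction expectation for fractional inhibitions y1 and y2 is y1 + y2 - y1*y2, and the score averages observed minus expected over the dose matrix. The full ZIP model [22] first fits Hill curves to each monotherapy; this sketch uses raw monotherapy responses and illustrative numbers:

```python
import numpy as np

def zip_delta(inhib_matrix, drug1_mono, drug2_mono):
    """Simplified ZIP-style synergy score (positive values suggest synergy)."""
    y1 = np.asarray(drug1_mono)[:, None]   # inhibition of drug 1 alone, per row dose
    y2 = np.asarray(drug2_mono)[None, :]   # inhibition of drug 2 alone, per column dose
    expected = y1 + y2 - y1 * y2           # zero-interaction surface
    return float(np.mean(np.asarray(inhib_matrix) - expected))

obs = [[0.35, 0.55], [0.60, 0.85]]         # observed inhibition, drug1 x drug2 doses
print(zip_delta(obs, drug1_mono=[0.2, 0.4], drug2_mono=[0.1, 0.3]))  # ~0.15 > 0
```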
To analyze the efficacy of capmatinib and crizotinib in vivo, we allografted tumor cells from our electroporation model into CD1 mice, providing a standardized mouse model with an immunocompetent background. In order to utilize this model in a combinatorial RT trial, we first determined a radiation dose at which allografted mice developed a partial but not a full response. As an initial study using 20 Gy demonstrated complete tumor remission in two out of five treated mice (Supplementary Fig. 2a,b; Additional File 5), we lowered the total dose to 12 Gy (in typical clinical fractions of 2 Gy per day [23]) for the subsequent combinatorial trial, in which animals received either (A) vehicle, (B) vehicle + RT, (C) crizotinib, (D) crizotinib + RT, (E) capmatinib, or (F) capmatinib + RT (Fig. 3c). All regimens were well tolerated (Supplementary Fig. 2c; Additional File 5). The time point of radiation was chosen to coincide with the Cmax of the respective drug in the brain (Fig. 2h, dashed squares). To analyze pharmacodynamic properties, mice were sacrificed after receiving their second treatment. These animals formed the "PD cohort", whereas the remaining mice represented the "Survival cohort". Treatment with capmatinib led to greatly reduced levels of phosphorylated MET, Erk and Akt in initial neoplasms of PD animals, whereas crizotinib treatment induced a less complete reduction compared to vehicle-treated mice (Fig. 3d-f and Supplementary Fig. 3; Additional File 6). These results indicate that capmatinib readily inhibits MET in intracranial tumors at clinically relevant doses.
Combined capmatinib and RT increases survival rate and survival time of murine allografts
We monitored the mice of the Survival cohort for up to 140 days after transplantation. All treatments increased the average survival time compared to the vehicle-treated group, most prominently for animals treated with capmatinib + RT (Fig. 4a; p-value vehicle vs. capmatinib + RT = 0.0388). Besides the prolonged survival, this combination also increased the survival rate 3-fold (Fig. 4a). Additionally, biweekly bioluminescence imaging allowed us to quantify the combinatorial effect of capmatinib + RT (Fig. 4b, Supplementary Table 3; Additional File 7). While all other treatments mostly slowed down tumor growth, 8/10 capmatinib + RT treated animals displayed a reduction of tumor burden by week 3 (Fig. 4b, Supplementary Fig. 2d and Supplementary Table 3; Additional Files 5 and 7). Capmatinib and RT combined were able to eradicate even large initial neoplasms, whereas the survivors in the other groups were mice with low initial tumor burden (Fig. 4c).
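A minimal sketch of the underlying survival comparison with the lifelines package, using illustrative durations and censoring indicators rather than the study's data:

```python
from lifelines.statistics import logrank_test

# Days to endpoint per mouse (illustrative numbers only)
days_vehicle = [28, 31, 35, 36, 40, 42, 45, 50]
days_cap_rt  = [60, 75, 90, 95, 110, 120, 140, 140, 140, 140]

# 1 = tumor endpoint reached, 0 = censored survivor at trial end (day 140)
event_vehicle = [1] * 8
event_cap_rt  = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

res = logrank_test(days_vehicle, days_cap_rt,
                   event_observed_A=event_vehicle,
                   event_observed_B=event_cap_rt)
print(f"log-rank p = {res.p_value:.4f}")
```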
Brains of mice that had to be sacrificed under treatment were histologically analyzed (Fig. 4d). As expected, mice treated with vehicle or radiation alone showed a strong upregulation of pMET, pErk and pAkt. Interestingly, the amounts of pMET and pErk were reduced to background levels in only 4/6 capmatinib-treated animals. This reflected the time span between the last capmatinib administration and tumor isolation, as the 2 mice with stronger pMET/pErk signal were sacrificed after a 2-day treatment pause, underscoring the observed rapid clearance of capmatinib in the brain (Fig. 2h).
Capmatinib effectively inhibits TRIM24-MET in human pHGG
During the time of this study, a seven-month-old infant presented with a large cerebral mass and leptomeningeal metastasis extending from the brainstem through the cervical spine (C1-7; Fig. 5a). After surgical resection of the cerebral tumor, the patient received six months of chemotherapy, as the patient was deemed too young for radiation therapy post-surgery. The patient had no evidence of disease at the end of therapy (Fig. 5a) but relapsed within seven months thereafter. Molecular analysis revealed a TRIM24-MET fusion in both the initial and the recurrent tumors (Fig. 5b); however, subsequent treatment with the MET inhibitor cabozantinib was ineffective. Samples of the pre-treatment (TRIM24-MET-i) and recurrent tumor (after chemotherapy but before cabozantinib; TRIM24-MET-r) were obtained for further characterization and disease modelling (Supplementary Fig. 4a-e; Additional File 8). The two samples were used to establish two stably growing cell cultures, and expression of the fusion (predicted molecular weight of 116.8 kDa) was validated by immunoprecipitation (Supplementary Fig. 4f-h; Additional File 8). We performed DNA methylation profiling of both primary biopsies and the corresponding established cultures and found that all samples cluster closely with RTK fusion-driven IHG. We also profiled six biological replicates of our murine TFG-MET tumors using MM285k arrays, performed a cross-species comparison, and found that the murine tumors clustered closely with human IHG as well (Fig. 5c).
We challenged human tumor cells with brain-specific unbound Cmax equivalents of capmatinib and crizotinib. Both drugs inhibited phosphorylation of TRIM24-MET and ERK within 30 minutes (Supplementary Fig. 5a; Additional File 9). In comparison to murine tumor cells, the Cmax-equivalent dose of crizotinib also resulted in an observable inhibition, albeit to a lesser extent than 1 µM crizotinib or any analyzed capmatinib concentration (Supplementary Fig. 5b; Additional File 9). In dose-response assays, we found that capmatinib was more potent than crizotinib and cabozantinib (Fig. 5d), similar to our observations in murine tumor cells. To validate capmatinib's potency in additional MET-fusion-driven pHGG models, we also performed dose-response assays with SJ-GBM2 cells [24], harboring a CLIP2-MET fusion, and with cells isolated from murine tumors induced by overexpressing TFG-MET alone (without Trp53 knockout; Supplementary Fig. 5c,d; Additional File 9). Capmatinib potently inhibited both models and displayed an IC50 of only ~1.17 nM against SJ-GBM2 cells, further underscoring its effectiveness against pHGG driven by MET fusions.
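EC50/IC50 values of this kind are typically obtained by fitting a four-parameter logistic (Hill) curve to the viability data; a minimal sketch with illustrative data, where the non-zero bottom plateau mirrors the ~20% residual viability discussed later:

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic on log10(dose); parameters: top, bottom, log10(EC50), slope.
def hill(log_dose, top, bottom, log_ec50, slope):
    return bottom + (top - bottom) / (1 + 10 ** (slope * (log_dose - log_ec50)))

doses_nM  = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])   # illustrative data
viability = np.array([1.00, 0.97, 0.85, 0.55, 0.30, 0.22, 0.20, 0.19])

popt, _ = curve_fit(hill, np.log10(doses_nM), viability,
                    p0=[1.0, 0.2, np.log10(3), 1.0])
print(f"EC50 ~ {10 ** popt[2]:.2f} nM, residual plateau ~ {popt[1]:.2f}")
```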
Capmatinib treatment leads to long-term progression-free survival of human xenografts
To investigate capmatinib's anti-tumor efficacy on human cells in vivo, we established a novel PDOX model using TRIM24-MET-i cells. Given the observed rapid clearance of capmatinib in mouse tissues and tumors (Fig. 2h and 3d), we chose to administer capmatinib twice per day (bis in die, BID) to PDOX mice, matching the clinical dosing schedule [25]. Subsequent western blot and IHC analyses revealed that capmatinib efficiently blocked phosphorylation of TRIM24-MET, ERK and AKT on this schedule (Supplementary Fig. 6a,b; Additional File 10).
To determine how closely our PDOX model would recapitulate the clinical failure of cabozantinib, we directly compared capmatinib vs. cabozantinib treatment (Fig. 6a). Cabozantinib treatment resulted in a 15.5-day increase in median survival (p-value cabozantinib vs. cabozantinib vehicle = 0.002). Despite the statistical significance, this slight reduction of tumor growth would likely not have been appreciable clinically and is therefore consistent with the lack of efficacy in the patient. In striking contrast, capmatinib induced long-term stable disease, with all mice surviving the 19-week treatment period (Fig. 6b; p-value capmatinib vs. capmatinib vehicle < 0.0005). Regular luciferase imaging underscored the long-term tumor control and even indicated initial regression in two out of eight capmatinib-treated mice (Supplementary Fig. 6c; Additional File 10). Ultimately, seven of these animals relapsed after treatment was ceased (Fig. 6b), indicating that capmatinib monotherapy is not sufficient to consistently induce complete remission.
Consequently, we combined capmatinib with RT in human cells and found that radiation increased the response to capmatinib treatment in vitro (Supplementary Fig. 6d; Additional File 10). We then conducted a 4-arm preclinical trial treating the TRIM24-MET-i PDOX model with: 1) vehicle, 2) vehicle + RT, 3) capmatinib, and 4) capmatinib + RT (Fig. 6c). As MET-fusion-driven tumors are often diagnosed in infants [3], and because we aimed to extend the time frame of potential synergy during which cells were exposed to both capmatinib and RT, we chose a very low-dose fractionation [26] of 0.5 Gy per day, with a total dose of 10 Gy over 20 days, to recapitulate a clinical scenario balancing risk and benefit in pediatric patients. All treatments were well tolerated (Supplementary Fig. 6e; Additional File 10). RT alone resulted in a slight survival benefit compared to vehicle-treated mice (Fig. 6d,e; 7.7 weeks vs 5.7 weeks, log-rank test p=0.0086). Besides their cranial tumor outgrowth, all mice in these two groups quickly developed spinal metastases (Supplementary Fig. 6f; Additional File 10). Capmatinib monotherapy again induced stable disease in all treated animals, but persistent tumor cells in both brains and spines readily grew out once treatment was withdrawn (Fig. 6d,e). Thereby, all mice treated with capmatinib or RT alone eventually reached tumor-induced endpoints. In striking contrast, combined capmatinib + RT profoundly and stably decreased tumor burden (Fig. 6d-f). Importantly, although radiation was focally administered to the head, only one capmatinib + RT treated mouse experienced a spinal metastasis after therapy was withdrawn, while the remaining mice did not show any detectable signs of residual tumor before reaching natural, cancer-independent endpoints (Fig. 6e,f). Taken together, these results show that, also in the context of a human-derived MET-driven pHGG model, only the combination of capmatinib and RT reduces tumor burden and leads to long-term, progression- and metastasis-free survival.
Capmatinib induces dysregulation of DNA repair genes as a possible mechanism of radiosensitization
To investigate the molecular basis for the combined effect between capmatinib and RT, we performed RNA-sequencing analysis on murine tumors and validated fusion gene expression as well as p53 inactivation through frameshift in the analyzed allografts (Supplementary Fig. 7a-d; Additional File 11). When analyzing the expression of Mapk signature genes [27], we found a significant downregulation in capmatinib-treated tumors, whereas crizotinib-treated samples displayed more heterogeneous expression (Supplementary Fig. 7e; Additional File 11). To elucidate capmatinib's molecular effect on the cells, we focused on the genes differentially expressed between 4 capmatinib-treated tumors showing a particularly strong Mapk downregulation (Fig. 7a) and the 6 vehicle-treated PD samples. As expected, we observed a downregulation of gene sets pertaining to proliferation pathways in capmatinib-treated mice (Supplementary Fig. 7f; Additional File 11). Capmatinib-treated tumors of the Survival cohort displayed more heterogeneous gene expression patterns than the PD cohort, potentially owing to more variable responses to long-term drug exposure (Supplementary Fig. 8a; Additional File 12). Importantly, genes involved in the DNA repair machinery were downregulated in capmatinib-treated tumors (Fig. 7b, Supplementary Fig. 8b and Supplementary Table 4; Additional Files 12 and 13), providing a plausible explanation for the radiosensitizing effect of this drug. In tumors of the Survival cohort, genes involved in cell cycle progression were found to be upregulated after radiation (Supplementary Fig. 8c; Additional File 12), potentially as a late consequence of radiation-induced DNA damage and tumor cell selection. Consistent with this finding, we observed a strong correlation between upregulation of genes associated with increased proliferation and upregulation of genes associated with DNA repair across the entire cohort (Fig. 7c). Although the connection between proliferation and expression of DNA repair genes is well known, we found a striking correlation also in further analyzed datasets, including human brain tumors and cells of normal brain development (Supplementary Fig. 8d; Additional File 12), potentially indicating DNA repair gene dysregulation by cell cycle inhibition as a general radiosensitization option for certain tumor entities. Furthermore, we found genes involved in Trp53 regulation to be specifically downregulated in capmatinib-treated samples (Supplementary Fig. 8e,f; Additional File 12), which may contribute to the reduced expression of DNA repair genes despite the absence of Trp53 itself in the tumors (Supplementary Fig. 7b; Additional File 11).

Fig. 7 Capmatinib dysregulates expression of DNA repair genes and enhances radiation-induced DNA damage. a Expression of the indicated Mapk pathway signature (MPAS) genes in tumors of the PD cohort treated with vehicle (+/- RT) or capmatinib (+/- RT; focusing on the 4 tumors strongly affected by capmatinib treatment, the 2 outliers were excluded from this analysis). With the exception of Epha4, expression of all analyzed Mapk pathway signature genes was inhibited in capmatinib-treated mice. Significantly downregulated (adj. p < 0.05) genes are in bold. b Heatmap of genes in the "DNA REPAIR_7" geneset (baderlab pathways 2019) demonstrating that capmatinib treatment leads to reduced expression of DNA repair genes. c Correlation between total expression scores of the genesets "DNA REPAIR_7" and "CELL CYCLE_7" (baderlab pathways 2019) amongst all murine tumors (treated and untreated) analyzed by RNAseq in this study. Each dot represents one tumor. d Heatmaps showing expression of MAPK Pathway Activity Score (MPAS) genes for in vitro capmatinib treatments in cell lines derived from TRIM24-MET fusion tumors as compared to a DMSO vehicle control. Significantly downregulated (adj. p < 0.05) genes are in bold. e Western blots of RAD51 and β-ACTIN after the indicated treatments of TRIM24-MET or TFG-MET cells for 24 hours. Capmatinib and crizotinib both induce downregulation of RAD51. f Western blots of MET and p-MET after the indicated treatments of TRIM24-MET or TFG-MET cells for 24 hours, which serve as controls for the western blots in e. g γH2AX immunofluorescence staining of TRIM24-MET L97 human glioma cell lines at different recovery time points following 4 Gy irradiation. Capmatinib (Cap)-treated cells display significantly higher levels of γH2AX compared to DMSO-treated cells. h Quantification of γH2AX foci in g. The percentage of cells with ≥20 γH2AX foci is significantly higher in capmatinib-treated cells (black bar) compared to DMSO-treated cells (white bar) at 1, 2, 3 and 4 hours following irradiation. Error bars display standard error of the mean; statistical significance was determined using t-test analysis (****p<0.0001, ***p<0.001, **p<0.01, *p<0.05). Scale bar is 10 µm. i Western blot of phosphorylated and total Kap1 from TFG-MET allograft tumors treated with vehicle (Veh) or capmatinib (Cap), alone or in combination with irradiation (RT). Samples 7-9 and 11-12 were collected 1 hr after RT; lane 10 was collected 3 hrs after RT and shows the time-dependent decrease of the DNA double-strand break signal. j Quantification of the luminescence signal of the western blots in panel i, normalized to the vehicle control. Each dot represents an individual replicate. Error bars display standard error of the mean. Statistical significance was determined using a one-way ANOVA followed by Tukey's multiple comparisons test (*p<0.05, **p<0.01, ***p<0.001, ****p<0.0001). The lysate from lane 10 was excluded due to the different time point after RT.
To validate this finding in human cells, we also performed RNA-sequencing of TRIM24-MET-i and TRIM24-MET-r cells after 4 h of in vitro treatment with capmatinib, crizotinib, or cabozantinib. At their respective EC90 concentrations (Supplementary Table 5; Additional File 14), all three treatments caused similar transcriptional responses when compared to DMSO vehicle controls (Supplementary Fig. 9a and Supplementary Table 6; Additional Files 15 and 16). Downregulation of MAPK pathway signature genes confirmed successful MET inhibition (Fig. 7d). Next, we performed pre-ranked gene set enrichment analyses (GSEA) and found that the cellular responses to capmatinib treatment were highly similar between human tumor cultures and allografted mouse tumors (Supplementary Tables 7, 8 and 9; Additional Files 17-19). Commonly downregulated genesets included MYC target genes, mTORC1 signaling, and the unfolded protein response (Supplementary Fig. 9b,c; Additional File 15). In contrast to our murine tumors, we found that TP53 is expressed in the human tumor cells (Supplementary Table 6; Additional File 16). However, we also found a dysregulation of genes involved in DNA repair (Supplementary Fig. 10a; Additional File 20), similar to our observation in the allograft models (Fig. 7b). A major DNA repair gene is RAD51, which is involved in DNA double-strand break repair and frequently upregulated in various human cancers [28]. Despite no significant change in RNA levels, RAD51 protein was downregulated in both murine and human tumor cells after short-term in vitro drug treatment (Fig. 7e,f).
To further confirm the impact of capmatinib treatment on DNA repair, we treated cells with capmatinib or DMSO for 24 hours, performed irradiation and quantified γ-H2AX foci after an additional 1 to 24 hours. When treated with radiation alone, the number of γ-H2AX foci steadily declined over time in murine cells, indicating continuous DNA repair. The addition of capmatinib significantly delayed this process and prolonged recovery (Supplementary Fig. 10b,c; Additional File 20). In human tumor cells the effect was even more pronounced, as capmatinib treatment induced a greatly increased number of DNA double-strand breaks (Fig. 7g,h). The ataxia telangiectasia mutated (ATM) kinase initiates a signaling cascade, including phosphorylation of Kap1 (KRAB-associated protein 1) at serine 824, in response to DNA double-strand breaks [29]. To further investigate the effects of capmatinib treatment on the DNA damage response in vivo, we evaluated Kap1 pS824 in murine TFG-MET tumors. Phosphorylated Kap1 (p-Kap1) was dramatically increased in irradiated tumors treated with vehicle compared to unirradiated controls, and showed a significantly greater increase in tumors treated with capmatinib and RT (Fig. 7i,j). Taken together, these findings demonstrate that capmatinib treatment induces a dysregulation of DNA repair genes and a marked potentiation of radiation-induced DNA damage in vitro and in vivo, providing a rational mechanism for the outstanding combinatorial efficacy in our animal models.
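A minimal sketch of the foci readout used in such assays, scoring the percentage of cells with ≥20 foci and comparing conditions (the per-cell counts below are illustrative, not the measured data):

```python
from scipy.stats import ttest_ind

def pct_high(foci_counts, threshold=20):
    """Percentage of cells with at least `threshold` gamma-H2AX foci."""
    return 100 * sum(c >= threshold for c in foci_counts) / len(foci_counts)

dmso = [5, 8, 12, 22, 9, 14, 7, 25, 11, 6]        # illustrative per-cell focus counts
cap  = [24, 31, 18, 40, 27, 22, 35, 19, 29, 26]

print(f"DMSO: {pct_high(dmso):.0f}% of cells >= 20 foci")
print(f"Capmatinib: {pct_high(cap):.0f}% of cells >= 20 foci")
print(f"t-test p = {ttest_ind(cap, dmso).pvalue:.4f}")
```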
Discussion
Activating alterations in receptor tyrosine kinases are appealing therapeutic targets that are increasingly identified by clinical genomic approaches and often play important roles in tumor maintenance and survival. Despite a growing armamentarium of available selective RTK inhibitors, choosing the ideal drug and predicting successful tumor response is complicated by diverse factors [30]. While RTK inhibition displayed promising responses in multiple pHGG studies [31,32], responsiveness of adult HGG to RTK inhibition proved to be less striking and is currently under investigation [33]. This discrepancy could partially result from the fact that pHGG, especially IHG, typically lacks large-scale structural, copy number, or single nucleotide variants [34,35], rendering the tumor exclusively dependent on the oncogenic RTK such as MET. Targeting a MET fusion gene with crizotinib in one pHGG patient resulted in a partial response with rapid tumor relapse [7], yet no lasting response after MET inhibition has been demonstrated for MET-driven pHGGs so far. Neurosurgery for large vascular IHG is associated with high morbidity, such as intraoperative bleeding, hypovolemic shock, mechanical ventilation and permanent neurologic deficits. Attaining a gross total resection (GTR) is difficult, often requiring multiple craniotomies. Therefore, long-term survivors often suffer from permanent neurocognitive impairment, hemiparesis, seizure disorders, dysarthria, and visual deficits. In a recently published NEJM report, a patient was left moribund after two unsuccessful craniotomies to resect a large hemispheric tumor. As molecular analysis revealed an ALK fusion, the child was treated with an ALK inhibitor on a palliative basis. Remarkably, the tumor shrank rapidly and could then be safely resected surgically with good clinical recovery [11]. Similar cases have also been reported for NTRK fusion pHGG [10]. However, there is currently no effective selective inhibitor therapy for MET fusion-driven pHGG.
In this study, we established complementary in vitro and in vivo models of MET-driven pHGG, including an immunocompetent allograft with TFG-MET fusion and Trp53 deletion. In contrast to a previous RCAS TFG-MET-driven pHGG model [7], the allograft described here is studied in an immunocompetent, wild-type p53 host background and allows robust preclinical evaluation by standardized tumor cell transplantation. Additionally, we generated two patient-derived cell lines and matched xenografts with the TRIM24-MET fusion. All of our models closely recapitulated patient primary tumors, as demonstrated by histopathology and methylation profiling. They thereby allowed us to faithfully explore the efficacy of three MET inhibitors in combination with RT against MET-driven pHGG.
Detailed pharmacokinetic analyses are critical to identify the optimal MET inhibitor for brain tumor therapy.
Here, we describe capmatinib's CNS penetration in mice for the first time and provide an assessment of crizotinib and capmatinib pharmacokinetic properties. For the in vivo studies, we used a crizotinib dose previously reported to be tolerated and efficacious in mice [36], which provided a high total plasma AUC of 64,700 hr·ng/mL. Notably, the maximum tolerated dose (280 mg/m² BID) for pediatric solid tumors provided a mean steady-state total plasma AUC of 6,990 hr·ng/mL [37]. Thus, the doses used in mice far exceeded clinically achievable doses, even when adjusting for the approximately 2.5-fold higher plasma protein binding of crizotinib in mice versus humans [38]. In contrast, capmatinib is administered orally at 400 mg BID in humans [15,39], achieving a mean steady-state total plasma AUC of 17,300 hr·ng/mL [40], similar to our estimated murine total plasma AUC of 16,400 hr·ng/mL. In this case, comparisons using total AUCs are appropriate, as the plasma protein binding of capmatinib is similar between mice and humans [41]. Therefore, our 25 mg/kg BID regimen of capmatinib was clinically relevant and provided plasma exposures in mice similar to those in patients at the approved dose.
We also compared the fractions of unbound capmatinib and crizotinib in mouse brain homogenates and found that the unbound fraction of capmatinib was 8.2-times higher than that of crizotinib, which likely contributes to the higher in vivo efficacy of capmatinib. Because of this higher unbound fraction, capmatinib reached a higher effective exposure in the murine brain for up to 8 hours after administration, even though crizotinib achieved much higher plasma AUCs.
The PDOX models allowed us to compare our preclinical results to the presented patient's clinical outcome. After an initial relapse, the patient was treated with cabozantinib, based on previous clinical studies that showed activity against intracranial metastases [42,43]. Importantly, our in vivo PDOX response to cabozantinib, while statistically significant, provided a brief extension of survival that would be biologically inadequate when considered as a patient outcome. Thus, our PDOX model recapitulated the clinical failure of cabozantinib, while capmatinib monotherapy induced stable disease in the PDOX model. It is possible that previously reported brain metastases were more responsive to cabozantinib because of a higher sensitivity to low-level MET inhibition. Alternatively, differences in the blood-brain barrier in brain metastases compared to pHGG may have allowed greater drug availability in the tumor. These examples highlight the utility of evaluating relevant models for specific diseases, even if a drug has proved efficacious in a different tumor type with a common RTK target.
We investigated the efficacy of capmatinib, crizotinib and cabozantinib in vitro and in vivo to assess disrupted signaling of downstream effectors. Although our RNAseq data demonstrated that all three drugs induced a shared cellular response at their respective EC90 concentrations, capmatinib displayed a much greater potency than crizotinib and cabozantinib in all examined instances. This is in agreement with previous reports that demonstrated 10- to 100-fold lower IC50 values for in vitro MET inhibition by capmatinib compared to crizotinib or cabozantinib [44-46], although different assays were utilized in these studies. While it has been shown that crizotinib and cabozantinib inhibit a broader spectrum of tyrosine kinases [47,48], capmatinib has been demonstrated to selectively target MET, with KD values 1000-fold below its second most high-affinity target [49]. The plateau at ~20% cell viability/abundance at higher capmatinib concentrations, which we observed in our dose-response assays, has been reported before [49] and is likely a result of growth arrest induced by capmatinib's specificity, in contrast to crizotinib and cabozantinib, which also target additional tyrosine kinases at high doses and thereby induce cell death in a non-specific manner.
Dose and safety data for capmatinib treatment in children are not yet available. The FDA approved capmatinib for adults with metastatic non-small cell lung cancer (mNSCLC) with MET exon 14 skipping mutations based on a clinical trial in which capmatinib was permanently discontinued in 16% of mNSCLC patients due to an adverse reaction, most commonly pneumonitis (1.8%), peripheral edema (1.8%) and fatigue (1.5%) [50], providing initial insights into potential toxicities in the pediatric population.
Capmatinib was even more effective when administered concomitantly with radiation, which we initially demonstrated in vitro for all aforementioned models. In the subsequent preclinical allograft trial, capmatinib and RT increased the survival rate and survival time compared to the single treatments. In the PDOX study, the combination induced full responses in all but one treated animal, whereas none of the single-treated mice displayed significant tumor regression. This outstanding efficacy of the combination in the PDOX study compared to the allograft trial may be explained in part by the different underlying treatment schedules, which were adjusted in the PDOX study based on capmatinib's PK profile and to match a patient-equivalent dose based on a published clinical trial. Our results are the first to extensively highlight the striking advantage of combining capmatinib and RT against pHGG, and are in accordance with previous reports that demonstrated radiosensitization by MET inhibition [51-53]. Additional prior studies indicated that this effect is p53-dependent [54]. However, here we demonstrated combinatorial efficacy between capmatinib and radiation in both human TP53-expressing cells and in murine Trp53-deficient tumors, although we observed a differential expression of p53-regulating kinases after capmatinib treatment.
When analyzing the underlying mechanisms of radiosensitization, we found a significant downregulation of specific DNA repair genes in capmatinib-treated tumor cells. This is in agreement with previous reports of radiosensitization by downregulation of DNA repair genes after inhibition of MET [55-57] but also after inhibition of other RTKs [16,58]. Many reports identified an involvement of ATM and ATR [17,18,59], which we also noted. However, the broader range of downregulated DNA repair genes, together with the tight correlation between cell cycle progression and DNA repair gene expression observed in this study, might suggest a more general paradigm of radiosensitization by RTK inhibition. The sudden downregulation of certain DNA repair genes within previously rapidly proliferating tumor cells might render these cells generally more susceptible to RT. In agreement with this notion, we observed that capmatinib indeed potentiates radiation-induced DNA damage in tumor cells in vitro. We also showed that tumors treated in vivo with combined capmatinib and RT contained increased levels of phosphorylated Kap1 (pS824) compared to tumors treated with vehicle and RT. This ATM-dependent phosphorylation event [29] further demonstrates the elevated DNA double-strand break signaling when combining capmatinib with RT in vivo and shows that the effects of the combination are more than additive. This has important implications for the relative timing of drug and radiation delivery. Additional in vivo studies would be needed to comprehensively elucidate all aspects of the underlying signaling cascade and the mechanisms driving the cooperative effects of capmatinib with irradiation. The increased efficacy of this combined therapy merits further investigation to comprehensively identify susceptible tumor entities. For example, secondary adult glioblastomas, in which MET fusions have been identified in up to 15% of cases [9], potentially represent another promising and eligible entity for concomitant capmatinib-radiation treatment in addition to pHGG.
Our preclinical testing in a MET-fusion IHG PDOX showed that low-dose radiation combined with capmatinib reduced tumor burden, leading to long-term progression- and metastasis-free survival. To minimize radiotherapy-associated late effects, chemotherapy-based treatment approaches following surgical resection, when feasible, have historically been used to defer or delay RT until the age of 3-5 years or until relapse [60-63]. For children in this most vulnerable age group, capmatinib alone may provide a useful approach to reduce morbidity by delaying surgery or as a bridging therapy until an age at which combination with radiation becomes more feasible. The low-dose radiation regimen employed in our human xenograft trials, and its significant potentiation by MET inhibition, highlights a potentially promising approach for older pediatric and young adult populations with MET-fusion driven tumors who would otherwise be treated with involved-field radiation alone as the standard of care. Clinical evaluation of this regimen should be reserved for patients old enough for consideration of radiation therapy or for those who have progressed beyond the reach of successful systemic therapy options. The optimal incorporation of capmatinib into frontline treatment for pHGG with MET fusions, as a neoadjuvant, adjuvant, or radiation-delaying strategy, must be tested in controlled and well-monitored clinical trials.
Conclusions
In conclusion, we generated novel MET-fusion-driven pHGG mouse models to identify the optimal selective inhibitor for this devastating disease. Capmatinib showed greater potency and superior pharmacokinetic properties, including a greater proportion of unbound drug in the brain, when compared with crizotinib and cabozantinib. The combination of capmatinib with low-dose radiation potentiated RT-induced DNA damage and induced robust tumor regression in vivo, while treatment with cabozantinib recapitulated the lack of efficacy seen in the patient. Our consistent preclinical data from two independent and complementary mouse models provide a strong rationale for combining capmatinib and RT as a novel treatment against MET-activated pHGG.
Fig. 1 MET fusion IHG are large vascular tumors posing significant surgical challenges. a MRI images of IHG with CLIP2-MET fusion. Left: T2-weighted image shows a large solid-cystic tumor encompassing the entire right cerebral hemisphere. Middle: susceptibility-weighted imaging (SWI) sequences; the yellow arrows indicate intratumoral hemorrhagic regions. Right: T2-weighted image shows the large tumor resection cavity after surgery. b MRI images of IHG with NPM1-MET fusion. Left: T2-weighted MRI image shows a large solid-cystic tumor encompassing the entire temporal lobe of the left hemisphere. Right: image after the first attempted neurosurgical resection; due to massive bleeding and hemorrhage during surgery, only a fraction of the tumor could be resected. The yellow arrows show the large cysts within the tumor. c Images of IHG with HIP1-MET fusion. Left: an emergent CT scan performed in the ER on a 4-week-old baby who presented with irritability and a bulging anterior fontanelle shows a massive right hemispheric hemorrhagic tumor; the yellow arrow points toward the large hemorrhagic focus. Right: diffusion-weighted images (DWI) from MRI; the restricted water diffusion (dark/black area noted by the yellow arrow) represents high cellular density and proliferating tumor. d Histologic sections of a human MET-fusion tumor (TRIM24::MET) show large and abnormal thin-walled vessels invaded by the tumor cells (both upper panels), with mural thrombi (two left panels) and acute hemorrhages (second from left). Large areas of hemosiderin deposition, evidence of prior hemorrhages and hematoma, are noted in the tumor (second from right). Ample amounts of Gelfoam were needed to achieve hemostasis during surgery (far right). Scale bar is 150 µm.
Fig. 2 TFG-MET-driven mouse model and pharmacokinetic profiles of MET inhibitors. a Schematic illustrating the method and the vectors used to induce CRISPR/Cas9-mediated Trp53 deletion and TFG-MET overexpression following in utero electroporation. b H&E staining and immunohistochemical analysis of a tumor generated by in utero electroporation, visualized by the HA-tag of TFG-MET. In contrast to normal tissue (bottom right corners), tumors display elevated levels of pMET and pErk. Scale bars are 100 µm in the large panel and 25 µm in the high-magnification inset. c H&E staining showing a large and invasive HGG in the mouse brain. The red rectangle indicates the region shown in b. d H&E staining of a human MET-driven pHGG demonstrating similar features to the murine neoplasms. Scale bars are 100 µm in the large panel and 25 µm in the high-magnification inset. e Survival curve indicating penetrance and latency of tumors induced by in utero electroporation. f Sanger sequencing of PCR products of the targeted Trp53 locus in a tumor revealed a 95 bp deletion in all analyzed sequences (n=6). g, h Plasma (g) and brain (h) concentrations of capmatinib and crizotinib at the indicated time points after administration to CD-1 nude mice. Three mice were analyzed per compound and time point. Error bars indicate the standard deviation. Dashed rectangles indicate the time windows of radiation in the subsequent preclinical allograft study.
Fig. 3
Fig. 3 Capmatinib is effective against TFG-MET-driven tumor cells. a Western blot of phosphorylated and total MET and the downstream effector Erk in cultured murine tumor cells at different time points after addition of crizotinib (cri) or capmatinib (cap) at the indicated concentrations. b Dose-response curves of murine tumor cells after treatment with capmatinib or crizotinib. Each dot represents one replicate of triplicates. Viable cells were analyzed 72 hours after compound addition using the CellTiter-Glo assay. The vertical dotted lines indicate EC50 values. c Overview schematic depicting the various treatments and the two different cohorts of our preclinical allograft study. d Immunohistochemical staining of phosphoproteins in tumors of the PD cohort, which were treated with the indicated therapies. Levels of pMET, pErk and pAkt were significantly reduced after capmatinib treatment. Scale bar is 50 µm. e Western blot of phosphorylated and total MET, Akt and Erk from allograft tumors treated with vehicle (veh), crizotinib or capmatinib alone or in combination with irradiation. f Quantification of the luminescence signal of the western blots in panel e, normalized to the respective vehicle control. Each dot represents an individual replicate. Error bars display the standard error of the mean. Statistical significance was determined using a one-way ANOVA followed by Tukey's multiple comparisons test (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001)
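The EC50 values indicated in panel b are typically obtained by fitting a four-parameter logistic (Hill) model to viability data of the kind produced by a CellTiter-Glo readout. The Python sketch below is a hedged illustration of that fitting step only; the dose and viability numbers are invented placeholders, not data from this study, and the authors' actual fitting software is not stated here.

```python
# Hedged sketch: extracting an EC50 by fitting a four-parameter logistic
# (Hill) curve, the standard model behind dose-response plots like Fig. 3b.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic: viability as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** hill)

# Placeholder data, not measurements from the study.
doses = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])      # mol/L
viability = np.array([0.98, 0.90, 0.55, 0.15, 0.05])  # fraction of control

params, _ = curve_fit(four_pl, doses, viability,
                      p0=[0.0, 1.0, 1e-7, 1.0], maxfev=10000)
print(f"EC50 ~ {params[2]:.2e} M")
```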
Fig. 4
Fig. 4 Combining capmatinib and RT increases survival rate and time in vivo. a Kaplan-Meier curve of mice enrolled in the "Survival cohort". All treatments started 1 week after transplantation. Radiotherapy was administered for 6 days, delivering 12 Gy total. Compound treatment was continued for 84 days. After an additional 49 days of monitoring (140 days after transplantation) the trial ended, and none of the remaining mice showed any hints of residual tumor. Three mice that were treated with capmatinib + RT reached this time point, whereas each of the other groups contained only 1 "survivor". n = 8 (vehicle arms) or n = 10 (compound-treated arms), respectively. P-values for groups that displayed statistically significant survival differences are indicated. b Bioluminescence-imaging pictures from four representative mice of the vehicle arm (middle ranks according to initial luciferase intensity) and from all mice of the capmatinib + RT arm. The first row is depicted on a different intensity scale to visualize tumors in all mice. The depicted scale bar indicates the range from 5×10^5 to 1×10^7 photons/sec/cm^2/sr. The combinatorial treatment induced tumor regression in 8/10 animals around day 21 on treatment. c Tumor burdens according to BLI of all enrolled mice before treatment are depicted as areas of circles (left panel). The right panel shows the initial tumor sizes of mice that survived for 140 days without residual tumor. While the surviving animals of the vehicle groups displayed the smallest initial tumors, neoplasms of all sizes could be cured with combinatorial therapy of capmatinib and radiation. d Immunohistochemical analyses of phosphoproteins in tumors of the Survival cohort, which were treated with the indicated therapies until onset of neurological symptoms. Phospho-MET, pErk and pAkt levels were significantly reduced in capmatinib-treated mice collected on days of treatment (Mon.-Fri.); however, elevated levels reappeared in tissue collected during treatment pauses on weekends. Scale bar is 100 µm
Fig. 5
Fig. 5 Sensitivity of human tumor samples to MET inhibition. a MRI images from an IHG patient with TRIM24-MET fusion. Left panel: Image at diagnosis showing a large solid cystic tumor filling the entire temporal lobe of the left hemisphere. Middle panel: Image at the end of resection and chemotherapy. Right panel: MRI image at recurrence. b The fusion encompassed TRIM24 exons 1-12 and exon 15 of c-MET, encoding a chimeric protein that contains the N-terminal moiety of TRIM24 and the c-MET kinase domain. c tSNE projection of a combined methylation dataset comprised of a reference set of glioma subtypes (n = 1128; circles from Capper et al., Nature 2018, triangles from Clarke et al., Cancer Discov 2020). The TRIM24-MET and TFG-MET tumor samples and cell lines from this study (squares; TRIM24-MET-i primary n = 4, cell culture n = 1; TRIM24-MET-r primary n = 3, cell culture n = 1; TFG-MET models n = 6) group together with infant HGG with RTK fusion genes (IHG). d Dose-response curves of TRIM24-MET-i and TRIM24-MET-r cells after treatment with capmatinib, crizotinib or cabozantinib for 72 h. Data from three independent experiments. The vertical dotted lines indicate EC50 values
Fig. 6
Fig. 6 Combination of capmatinib and RT eradicates human tumor cells in vivo. a Overview schematic depicting the four treatment arms of the preclinical study comparing in vivo response to capmatinib and cabozantinib. b Kaplan-Meier curve of mice enrolled in the study depicted in a. All treatments started 13 days after transplantation. Compound treatment was continued for 133 days. Within the subsequent 6 months of monitoring, 7 of 8 mice in the capmatinib-treated group experienced tumor relapse. c Overview schematic depicting the four treatment arms of the preclinical study comparing the combination treatment of capmatinib and RT vs. either treatment alone. d Kaplan-Meier curve of mice enrolled in the study depicted in c. All treatments started 18 days after transplantation. Radiotherapy was administered at 0.5 Gy per day, delivering 10 Gy total. Compound treatment was continued for 301 days. After an additional 147 days of monitoring, the trial ended with all mice having reached their tumor-induced or natural endpoint. e Trend of total flux (photons/sec/cm^2/sr) at the cranial and spinal cord regions of capmatinib-treated mice enrolled in the 4-arm preclinical trial depicted in c. f Bioluminescence-imaging pictures from mice of the vehicle + RT arm at the time closest to the humane endpoint and from capmatinib + RT-treated mice at that time. Color scale range: 1.19×10^6 to 2.08×10^7 photons/sec/cm^2/sr
Table 1
In vitro ADME (absorption, distribution, metabolism, and excretion) profiling of crizotinib, capmatinib and cabozantinib. AVG = average, STD = standard deviation. Values indicate the unbound drug fractions in the depicted environment | 2024-06-09T05:10:59.342Z | 2024-06-07T00:00:00.000 | {
"year": 2024,
"sha1": "ac10020a64f08fd1a5b9786f2fd90245b87b34d7",
"oa_license": "CCBY",
"oa_url": "https://molecular-cancer.biomedcentral.com/counter/pdf/10.1186/s12943-024-02027-6",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1f5e7dcc4d15a0ac8433cc344aac6dc4b302148e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246897702 | pes2o/s2orc | v3-fos-license | Gadolinium and Polythiophene Functionalized Polyurea Polymer Dots as Fluoro-Magnetic Nanoprobes
A rapid and one-pot synthesis of poly 3-thiopheneacetic acid (PTAA) functionalized polyurea polymer dots (Pdots) using polyethyleneimine and isophorone diisocyanate is reported. The one-pot mini-emulsion polymerization technique yielded Pdots with an average diameter of ~20 nm. The size, shape, and concentration of the surface functional groups could be controlled by altering synthesis parameters such as the ultrasonication time, the concentrations of the surfactant and crosslinking agent, and the type of isocyanate used for the synthesis. Colloidal properties of the Pdots were characterized using dynamic light scattering and zeta potential measurements. The spherical geometry of the Pdots was confirmed by scanning electron microscopy. The Pdots were post-functionalized with 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid for chelating gadolinium ions (Gd3+), which provide magnetic properties to the Pdots. Thus, the synthesized Pdots possess fluorescent and magnetic properties, imparted by PTAA and Gd3+, respectively. Fluorescence spectroscopy and microscopy revealed that the synthesized dual-functional Gd3+-Pdots exhibited detectable fluorescent signals even at lower concentrations. Magnetic levitation experiments indicated that the Gd3+-Pdots could be easily manipulated via an external magnetic field. These findings illustrate that the dual-functional Gd3+-Pdots could potentially be utilized as fluorescent reporters that can be magnetically manipulated for bioimaging applications.
Introduction
A wide range of probes for magnetic resonance imaging [1], X-ray computed tomography [2][3][4], positron emission tomography [5,6], and fluorescence imaging [7,8] have been explored to facilitate therapy and diagnosis. In recent years, emphasis has been placed on the development of probes with more than one functionality. Polymer dots, a class of fluorescence imaging probes with high fluorescence intensity, photostability, and biocompatibility, are ideal candidates for the development of multifunctional probes [9,10]. Polymer dot based nanoprobes have revolutionized common practice in bioimaging [11][12][13][14][15][16], diagnostics, and therapeutic applications [17,18]. The major advantages of polymer dot nanoprobes are their superior photophysical properties and post-functionalization capabilities. Polymer dots therefore enable imaging of a wide range of samples, from single cells to more complex tissues and organs.
Current reports on polymer dots demonstrate their potential for in vitro and in vivo imaging and diagnosis [19][20][21]. Single-chain polymer dots of reduced diameter (<10 nm) have shown improved quantum yield, photostability, and colloidal stability [21][22][23][24][25][26][27][28] compared to larger-diameter polymer dots. Recently, Ozenler et al. demonstrated the use of single-chain polymer dots to differentiate cancer cells from healthy cells in co-culture medium [29]. Although single-chain polymer dots are expected to show unambiguous advantages in imaging, their post-functionalization possibilities remain limited due to the small surface area. Considering the current requirements of clinical practice, bioimaging probes capable of rapid functionalization, bioconjugation, and multimodal imaging are a high priority [30]. Furthermore, multi-functional probes should be biocompatible and facilitate high-resolution bioimaging. Recently, dual-functional fluoro-magnetic reporters have gained significant research attention as they enable visualization of several biological processes [31].
In this study, a facile one-pot approach [32] for the synthesis of dual-functional fluoro-magnetic polymer dot reporters is described. Poly 3-thiopheneacetic acid (PTAA) [33] fluorescent reporters are added to a mixture of polyethyleneimine (PEI) and isophorone diisocyanate (IPDI) to yield PTAA-functionalized polyurea polymer dots (Pdots) via a one-pot mini-emulsion technique. The adopted mini-emulsion technique enables precise control of the size, shape, and concentration of the surface functional groups by varying parameters such as the ultrasonication time, the concentrations of the surfactant and crosslinking agent, and the type of isocyanate used for the synthesis. The Pdots were post-functionalized with 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) for chelating gadolinium ions (Gd3+) that impart magnetic properties to the Pdots. Hence, the PTAA and Gd3+ serve as dual-functional fluorescent and magnetic reporters, respectively, for multimodal bioimaging. The optical and magnetic properties of the Gd3+-Pdots, characterized using fluorescence spectroscopy and a magnetic levitation system, reveal detectable fluorescent signals even at lower concentrations and facile manipulation by an external magnetic field. Thus, the proposed one-pot mini-emulsion synthesis approach could be utilized to prepare dual-functional Gd3+-Pdots for high-resolution multimodal imaging of various biological samples.
Materials and Methods
Dual-functional Gd3+-Pdots were synthesized as shown in Scheme 1. Pdots were prepared via a one-pot mini-emulsion technique. IPDI (0.21 mL, 1 mmol), PEI 25,000 (1 g, 0.04 mmol), PTAA (1.25 mg in 250 µL water, pH 9), hexadecane (114 µL, 0.387 mmol), and sodium dodecyl sulfate (SDS) (84 mg, 0.294 mmol) in 10 mL DI water were added to a 50 mL round-bottom flask and stirred for 1 h at room temperature. The pre-emulsion solution was then ultrasonicated for 2 min. After ultrasonication, the mixture was transferred to a 50 mL round-bottom flask, refluxed at 60 °C for 4 h, and then cooled to room temperature to yield the Pdots solution.
DOTA-NHS (10 mg, 20 µmol) in 100 µL phosphate-buffered saline (PBS) was added to 100 µL of Pdots solution and stirred with a magnetic stirrer for 1 day. Then, gadolinium(III) chloride hexahydrate (GdCl3·6H2O) (75 mg, 0.2 mmol) was added to 2 mL of citric acid monohydrate solution (230 mg, 0.6 mmol) and stirred for 3 days, followed by purification using a 12-14 kDa dialysis tube to remove excess Gd3+ from solution. Five cycles of dialysis (each cycle for 24 h) were performed against a 0.05 M citrate solution (200 mL, pH 7.4) to yield the dual-functional Gd3+-Pdots reporters. All pH adjustments during the synthesis of the Gd3+-Pdots were performed using NaOH and HCl. Scheme 1. Schematic representation of the preparation of the Gd3+-Pdots dual-functional reporters (overall yield is 26.6% for the Gd3+-Pdots dual-functional reporters).
Results and Discussion
Anionic PTAA was synthesized and characterized using NMR, UV-visible spectroscopy, and fluorescence spectroscopy before being used to prepare the Pdots fluorescence reporters. First, poly 3-thiophene methyl acetate (PTMA) was characterized using 1H NMR (Figure S1, 400 MHz, CDCl3, δ, ppm), which showed peaks at 7.30-7.00 (thiophene ring proton, m, 1H), 3.70 (s, thiophene ring, 2H), and 3.60 (s, methyl, 3H). FTIR-ATR spectroscopy was then performed to compare PTMA and PTAA, as shown in Figure S2. The aromatic ester (C-O) functional group is observed in the PTMA spectrum at 1310 cm−1 and 1270 cm−1 but not in the PTAA spectrum. This difference plays a key role in the characterization of PTAA. The most significant feature of the spectrum is the broad O-H peak observed in the 3400-2400 cm−1 range. Moreover, the carbonyl peak at 1700 cm−1 reveals the presence of the carboxyl group. The absorption in the 3180-2980 cm−1 range corresponds to the C-H bond on the thiophene ring, and the aliphatic C-H bond is observed in the 2980-2780 cm−1 range.
UV-visible and fluorescence spectroscopy was carried out for PTAA at various pH values to assess its pH sensitivity. The pH was adjusted between 3 and 11 at room temperature, and all PTAA solutions were prepared with 1 M NaCl to provide a constant ionic strength. As observed from Figure 1a, the solubility of PTAA increases with pH, as revealed by the increase in absorbance intensity at the maximum wavelength. A significant increase in the UV maximum wavelength (λmax) is observed in the pH range 5-6, as shown in Figure 1b, which demonstrates the conformational changes of PTAA. In addition, the fluorescence intensity decreases with pH (Figure 1c), as examined using 1.0 M NaCl solutions. As shown in Figure 1d, the maximum wavelength of the emission spectrum shows a significant increase in the pH range 5-6. These observations further indicate that PTAA exhibits good emission intensities (Figure 1c) around physiological pH, making it an ideal candidate for the development of fluorescent reporters for bioimaging.
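The spectral transition in the pH 5-6 window is consistent with deprotonation of the PTAA carboxylic acid side chains near their effective pKa. As a hedged illustration (the effective pKa of about 5.5 assumed below is not reported in this text), the ionized fraction α follows the Henderson-Hasselbalch relation:

```latex
% Ionized fraction of the PTAA carboxyl groups (assumed effective pKa ~ 5.5)
\alpha = \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}
% With pKa = 5.5: at pH 5, alpha ~ 0.24; at pH 6, alpha ~ 0.76. Most of the
% chain's charge is gained across the observed pH 5-6 window, consistent
% with the conformational change inferred from the spectra.
```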
PTAA was then utilized as the fluorescence reporter to synthesize the Pdots using the one-pot mini-emulsion technique. SEM imaging of 1000-fold diluted Pdots reveals their spherical morphology (Figure 2a). Figure 2b shows fluorescence microscopy (FM) imaging of the Pdots, confirming their spherical geometry in solution. The particle size of the Pdots measured using DLS analysis indicates an average hydrodynamic radius of ~20 nm, as shown in Figure 2c, which agrees with the SEM imaging. The Pdots analyzed via FTIR-ATR spectroscopy (Figure 2d) yielded a broad band spanning from 3600 to 3000 cm−1, attributed to the carboxylic acid -OH groups in the PTAA structure. The peak associated with the carboxylic acid (C=O stretching mode of the carboxyl group) is observed at 1715 cm−1 (the deconvoluted spectrum is shown in dotted lines). The NH2 deformation mode peak is observed at 1636 cm−1, indicating that the surface of the Pdots contains both COOH and NH2 groups. These characterization results illustrate the successful synthesis of fluorescent Pdots. Following characterization of the Pdots, a post-functionalization process was carried out to conjugate Gd3+ and impart magnetic properties to the Pdots. The Pdots before and after Gd3+ functionalization were analyzed by fluorescence spectroscopy, shown in Figure 3 (excited at 475 nm), which yielded a broad emission spectrum with a peak maximum at 570 nm. The fluorescence intensities of the Pdots and Gd3+-Pdots were found to be identical, which ascertains negligible interference of the Gd3+ functionalization with the optical properties of PTAA. The slight shift in the emission maximum could be attributed to the change in the chemical environment of the Pdots by the Gd3+ cations. The colloidal properties of the Pdots and Gd3+-Pdots were then characterized by zeta potential measurements, which further confirmed the Gd3+ post-functionalization process. Prior to the functionalization with DOTA and Gd3+ chelation, higher concentrations of COO− groups were available on the Pdots, whereas these COO− groups were consumed after functionalization. Zeta potentials of 11.8 and 6.10 mV were measured for the Pdots before and after Gd3+ functionalization, respectively. The change in zeta potential was attributed to the reaction between the primary amine groups and DOTA; thus, the decrease indicates the binding of DOTA to the Pdots. The mobility of the dual-functional reporters decreases upon functionalization of Gd3+ on the Pdots because the dual-functional reporters are larger than the bare Pdots. Additionally, no significant difference in conductivity was observed before and after Gd3+ functionalization, as shown in Table 1, indicating that the Gd3+-Pdots solution does not contain free Gd3+ ions. The Gd3+ chelation provides magnetic properties to the Pdots, enabling magnetic manipulation as well as magnetic levitation of the Pdots. As shown in Figure 4a, the customized magnetic levitation system is fabricated using a glass capillary filled with Gd3+-Pdots solution and sandwiched between two magnets, with like poles positioned against each other. Polystyrene beads are utilized to illustrate the magnetic properties of the Gd3+-Pdots. As schematically illustrated in Figure 4a, in the absence of the magnets, the polystyrene beads rest at the bottom of the glass capillary since they have a higher density than the Gd3+-Pdots solution filling the capillary, whereas they levitate at a certain height from the bottom of the glass capillary in the presence of the magnets.
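As a brief aside on the DLS sizing above: DLS measures a diffusion coefficient and converts it to a hydrodynamic radius via the Stokes-Einstein relation. The minimal Python sketch below assumes an illustrative diffusion coefficient (not a value reported here), chosen so that the result lands near the ~20 nm figure quoted for the Pdots.

```python
# Stokes-Einstein conversion from a DLS diffusion coefficient D to a
# hydrodynamic radius: R_h = k_B * T / (6 * pi * eta * D)
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(d_m2_per_s, t_kelvin=298.15, eta_pa_s=8.9e-4):
    """Hydrodynamic radius (m) for a particle diffusing in water;
    the default viscosity eta is that of water at 25 C."""
    return KB * t_kelvin / (6.0 * math.pi * eta_pa_s * d_m2_per_s)

# Illustrative only: D ~ 1.2e-11 m^2/s is an assumed value that yields
# R_h ~ 20 nm, in line with the DLS size reported for the Pdots.
if __name__ == "__main__":
    r_h = hydrodynamic_radius(1.2e-11)
    print(f"R_h ~ {r_h * 1e9:.1f} nm")
```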
Figure 4b,c shows fluorescence and optical microscope images of the levitating polystyrene beads (density = 1.08 g/mL) in the presence of the magnets. The levitation of the polystyrene beads is due to the paramagnetic behavior of the Gd3+-Pdots, with levitation occurring where F_buoyancy + F_magnetic > F_gravitation.
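To give a feel for this force balance, the standard analysis of a two-magnet MagLev geometry with an approximately linear axial field yields a closed-form levitation height. The Python sketch below follows that textbook expression; all numerical inputs (susceptibilities, field strength, magnet spacing, medium density) are assumed example values, not measurements from this study.

```python
# Equilibrium levitation height in a MagLev device with two like-pole
# magnets a distance d apart (linear axial-field approximation):
#   h = (rho_s - rho_m) * g * mu0 * d**2 / (4 * (chi_s - chi_m) * B0**2) + d/2
# h is measured from the bottom magnet; rho/chi are densities and volume
# magnetic susceptibilities of the sample (s) and medium (m).
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A
G = 9.81                # gravitational acceleration, m/s^2

def levitation_height(rho_s, rho_m, chi_s, chi_m, b0, d):
    """Height (m) where the magnetic force balances gravity minus buoyancy."""
    return ((rho_s - rho_m) * G * MU0 * d ** 2
            / (4.0 * (chi_s - chi_m) * b0 ** 2)) + d / 2.0

# Assumed example inputs: a polystyrene bead (1080 kg/m^3, chi ~ -8.2e-6)
# in a paramagnetic medium with the density of water and a susceptibility
# loosely representative of a concentrated Gd3+ solution (chi ~ 3.2e-4);
# B0 = 0.4 T at the magnet faces, d = 45 mm between the magnets.
if __name__ == "__main__":
    h = levitation_height(rho_s=1080.0, rho_m=1000.0,
                          chi_s=-8.2e-6, chi_m=3.2e-4,
                          b0=0.4, d=0.045)
    print(f"levitation height ~ {h * 1e3:.1f} mm above the bottom magnet")
```

With these assumed numbers the bead settles roughly 13 mm above the bottom magnet, i.e. below the midplane, as expected for a bead denser than its paramagnetic medium.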
In addition, the manipulation of the Gd3+-Pdots reporters was tested using a neodymium bar magnet. As observed in Figure 5b, the Gd3+-Pdots reporters are initially scattered throughout the solution, whereas they assemble and cover the surface of the magnet placed under the petri dish after a period of 24 h (Figure 5c; also schematically represented in Figure 5a). This observation shows that the dual-functional Gd3+-Pdots reporters can be manipulated by an external magnetic field and that magnetic patterning and assembly on samples are feasible owing to their intrinsic magnetic properties. These results ascertain that the synthesized Gd3+-Pdots could serve as dual-functional fluoro-magnetic nanoprobes for bioimaging.
Conclusions
In conclusion, dual-functional Gd3+-Pdot reporters were successfully synthesized via a one-pot mini-emulsion technique for bioimaging applications. SEM, fluorescence microscopy, UV-visible and fluorescence spectroscopy, and magnetic levitation characterization results illustrated that the Gd3+-Pdot reporters possess fluorescent and magnetic properties, imparted by PTAA and Gd3+, respectively. Thus, the synthesized reporters could be utilized for multimodal bioimaging applications. Furthermore, with the post-functionalization of Gd3+, the Pdots can be magnetically manipulated via an external magnetic field. Beyond bioimaging, we foresee that the dual-functional Gd3+-Pdot reporters could be utilized as magnetic tweezers for manipulating micron-sized objects.
Informed Consent Statement: Not applicable.
Data Availability Statement: The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study. | 2022-02-17T16:28:09.303Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "3dbf5d646a04d0b8838adb79dc568b93d3846748",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/12/4/642/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d85fcea5c4e8ec51e4c566f0df311375255337da",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |