id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
213616018 | pes2o/s2orc | v3-fos-license | The Effectiveness of SAVI Approach with Macromedia Flash Toward Students’ Critical Thinking Ability
One of the learning approaches that a teacher can choose as a variation in biology learning is the SAVI (Somatic, Auditory, Visualization, Intellectual) approach. The SAVI approach can be applied with Macromedia Flash in order to keep up with IT development. This research aims to reveal the effectiveness of the SAVI approach with Macromedia Flash towards the critical thinking of 2nd grade students of SMA Negeri 1 Depok on the excretion system subject. This research is a quasi-experiment with a pretest-posttest non-equivalent control group design. The SAVI approach is used for the experiment group, while the scientific approach is used for the control group. The population of this research is all 2nd grade students of SMA Negeri 1 Depok Sleman, 109 students in total, divided into four classes. Through cluster random sampling, the students of the 2nd grade science 1 class are chosen as the experiment class and the students of the 2nd grade science 2 class as the control class, with a total sample of 56 students. The data on the students' critical thinking are collected with an essay test. The hypothesis test uses an independent-samples t-test in SPSS 16.0 for Windows. The result of the research reveals that the SAVI approach with Macromedia Flash is effective in increasing the critical thinking ability of the 2nd grade students of SMA Negeri 1 Depok Sleman on the excretion system subject.
INTRODUCTION
Law (Undang-Undang) of the Republic of Indonesia Number 20 of 2003 concerning the National Education System directs education policy in Indonesia at achieving eight goals. Based on these eight objectives, in the Rencana Pembangunan Pendidikan Nasional Jangka Panjang (Long-Term National Education Development Plan) for 2005-2025, Indonesia has a vision to produce Smart and Competitive Indonesians (Insan Kamil / Insan Paripurna) (Pendidikan, Kebudayaan, & Indonesia, 2014). In supporting the achievement of education development goals, an important component that can be used as an indicator is students' mastery of science and technology, which is at the same time one of the keys to the superiority of a nation in the era of globalization. Meanwhile, increasing the academic qualifications of all human resources, namely teachers and education personnel, is another determinant and at the same time an asset of the nation in improving the quality of education. Some facts show that the quality of Indonesia's human resources is far behind that of developed countries.
Indonesia has to work hard to improve the quality of its human resources because it still lags behind other Asian countries. In learning activities, teachers are professionals who have significant influence. Developing the potential of the students is one of the roles and duties of the teacher. Another role of the teacher is to determine the effectiveness and efficiency of learning activities. In addition, the teacher has roles as teacher, motivator, mediator, class manager, participant, and evaluator. Teachers play an important role in planning and implementing learning (Ayuni, 2012). Therefore, it is very important to increase the creativity, quality, and professionalism of teachers.
Peraturan Pemerintah (Government Regulation) Number 19 of 2005 concerning Standar Pendidikan Nasional, in Bab IV on Standar Proses, article 19 paragraph 1, states that the learning process in educational units should be interactive, inspiring, fun, challenging, and motivating for students to participate actively, and should provide sufficient space for initiative, creativity, and independence in accordance with the talents, interests, and physical and psychological development of students. To make this happen, teachers can use various means to create a learning environment that is interactive, inspiring, fun, challenging, and motivating for students.
Fun learning can be realized by the teacher by implementing appropriate approaches, strategies, models, and learning media. One learning approach that can create a fun learning atmosphere is SAVI (Somatic, Auditory, Visualization, Intellectual), designed by Dave Meier. The SAVI approach involves all the senses in the body to support learning: students actively move physically, use their senses as much as possible, and involve the whole body and mind in the learning process (Hikmah, 2018).
In the SAVI approach, students are required to participate actively in learning, such as observing, conducting experiments, presenting material, and then solving problems based on the knowledge that has been obtained during the learning. Such engagement will attract students' interest in learning, so students do not just sit quietly and listen to the teacher at the front of the class. Learning which employs the SAVI approach becomes more interesting when it is equipped with learning media, such as teaching aids, LKS (Student Worksheets), or interactive videos. With these, students can understand what they learn and are better trained in understanding the concepts being learned. Concepts that are understood more easily by students have an impact on improving learning outcomes, especially in the cognitive domain.
The cognitive domain contains various aspects, one of which is the ability to think critically. Critical thinking ability is the ability of students to explore and deal with things by not only accepting them but also questioning their truth. Students who have critical thinking skills are expected to be able to provide simple explanations, provide further explanations, and apply strategies and tactics. Critical thinking ability is a competency that must be mastered by students. As revealed by Sudiarta (Ristiasari, Priyono, & Sukaesih, 2012), critical thinking has been proven capable of preparing students to think in various disciplines, because critical thinking is a cognitive activity carried out by students in real activities and focused on making decisions about what is believed or done.
Responding to the demand for mastery of science and technology by students in the era of globalization, it is appropriate for teachers to involve technology in the learning process. Various technology-based learning media can be selected by the teacher to create a more modern and enjoyable learning atmosphere. One of the media in use is Macromedia Flash (macroflash). With the help of macroflash, the teacher can easily explain concepts previously considered difficult through video, images, and animation. The choice of media must be adapted to the learning approach chosen. The SAVI learning approach, which involves the ability to listen and see, is well suited to being assisted by macroflash.
The SAVI learning approach assisted with macroflash is expected to be in accordance with the demands of education in the era of globalization and is therefore expected to improve students' critical thinking skills. Based on this explanation, the researchers were interested in conducting research on the effectiveness of the SAVI learning approach assisted by macroflash in improving students' critical thinking skills.
METHOD
This research is a quasi-experiment (quasi-experimental research). This type of research is selected because not all variables and experimental conditions can be fully controlled by the researcher. The research design used was the pretest-posttest non-equivalent control group design, illustrated in the table below. The study population was all of the eleventh grade students of SMA Negeri 1 Depok Sleman in the 2018/2019 academic year, totaling 109 people and divided into four classes. The sample used for this study was the XI MIA 1 class as the experimental class and XI MIA 2 as the control class; each class contained 28 students. Data were collected with a test consisting of 6 essay questions on critical thinking skills. The research data are then analyzed with inferential statistics to test the hypothesis. The analysis used to test the hypothesis is the independent-samples t-test. Beforehand, prerequisite tests were carried out in the form of a normality test and a variance homogeneity test. The normality test employed Kolmogorov-Smirnov and the variance homogeneity test employed Levene's Test. All tests employed the SPSS 16.0 for Windows computer program.
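For reference, the statistic computed by SPSS for the independent-samples t-test (with pooled variance, as is standard when Levene's test indicates equal variances) is the usual one; this formula is not quoted in the paper and is given here only as background:

    t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}},
    \qquad
    s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},

where \bar{x}_i, s_i^2, and n_i are the mean, variance, and size of each class's post-test scores.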
RESULTS AND DISCUSSION
Based on the research done, in general, the students' critical thinking ability increased both in the control class and in the experimental class. The results for the students' critical thinking skills (Table 2) show that the average post-test value of the experimental class is higher than that of the control class. The comparison chart of the average critical thinking scores (Figure 1) shows that the control class and the experimental class have almost the same average pre-test values, which indicates that the students of both classes have initial abilities that are not much different. The average post-test score of the experimental class increased far more than that of the control class: the percentage increase in the experimental class was 41.17%, while in the control class it was 32.54%. The increase in critical thinking skills in the experimental class is the impact of the implementation of the SAVI approach with Macromedia Flash. The SAVI approach assisted by Macromedia Flash can make students more active and able to reflect on learning activities, which causes students' minds to fully follow the ongoing learning process. In addition, the SAVI approach provides opportunities for students to learn according to what they want. The experience explored by students through the learning process can be used as initial information for further learning (Astawan, 2013). To determine the effectiveness of the SAVI learning approach assisted by macroflash, an independent t-test was performed using SPSS version 16.0. The prerequisite tests were applied to the students' post-test scores in the form of normality and homogeneity tests. The normality test (Kolmogorov-Smirnov) gave sig. (2-tailed) = 0.208, larger than α = 0.05, which means the data are normally distributed. The homogeneity test (Levene's Test) gave sig. (2-tailed) = 0.095, also larger than α = 0.05, so the data are homogeneous, i.e., they originate from populations with equal variance. The subsequent hypothesis test is therefore parametric, because the data are normal and homogeneous. The test employed is the independent t-test because the two classes are independent (unpaired) samples. The independent t-test results for critical thinking skills (Table 3) show that the sig. (2-tailed) value is 0.000, smaller than α = 0.05, which means H1 is accepted. From the results of the independent t-test, it can be concluded that there is a significant difference between the critical thinking skills of the control class and the experimental class.
Comparison of Average Critical Thinking Ability Scores between the Control Class and the Experimental Class
The N-gain of the control class is in the medium category with a value of 0.59, while the experimental class is in the high category with a value of 0.73. Thus, the experimental class that uses the SAVI learning approach assisted by macroflash shows a very significant improvement in students' critical thinking skills. This is in line with the research conducted by Rifda and Rendy (2015), whose results state that there is a significant effect of the Brain Gym-based SAVI approach on cognitive learning outcomes in science learning at SMPN 13 Bandar Lampung; the average score increased from 69.06 (pretest) to 77.03 (posttest) in the experimental class, while in the control class the average pretest value was 63.75 and the average posttest value was 70.63.

Table 4. Categorization of N-gain Score
  N-gain               Description
  0.00 ≤ g ≤ 0.3       Low
  0.3  ≤ g ≤ 0.7       Medium
  0.7  ≤ g ≤ 1.00      High

To find out how effective the SAVI learning approach is for the excretion system material in the second grade of senior high school, an effect size calculation is used (Sugiyono, 2014). According to the calculation result, the effect size is 1.04, from which it can be concluded that the application of the SAVI learning approach with Macromedia Flash is highly effective in improving students' critical thinking ability. This is supported by research by Dadang (2018, 49), which states that the SAVI learning approach can improve students' critical thinking ability from an average score of 44.3 to 91.3. Meanwhile, according to research done by Maruli (2017, 89), Macromedia Flash is proven to improve students' critical thinking ability: in the first cycle, only 40% of students were able to achieve the KKM (minimum mastery criterion); this increased to 60% in the second cycle and 80% in the last cycle.
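The paper's own formulas for the N-gain and the effect size are not reproduced in this text; the commonly used forms (Hake's normalized gain, and an effect size based on the difference of class means over a pooled standard deviation) are, as an assumption only, the following:

    g = \frac{\bar{S}_{\text{post}} - \bar{S}_{\text{pre}}}{S_{\text{max}} - \bar{S}_{\text{pre}}},
    \qquad
    d = \frac{\bar{x}_{\text{experiment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}.

With values of this kind, an N-gain of 0.59 falls in the medium category and 0.73 in the high category of Table 4, and an effect size above 0.8 is conventionally interpreted as large.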
The principle of the SAVI learning approach is that students are active, physically and intellectually, in the learning process. The SAVI learning approach draws on modern cognitive science, which argues that the best learning involves emotion and the whole body, and recognizes that every individual learns in a different way (Khusna, 2018). The SAVI approach consists of somatic, auditory, visual, and intellectual phases. In the somatic phase, learning takes the form of simple proof activities as the first step in the learning activities. Somatic activities are designed to sharpen students' critical thinking skills through simple evidence. The simple proof activity aims for students to draw their own conclusions from each event. Carla Hannaford (in Meier, 2002) states that brain function can increase due to body movements. Body movement can stimulate various chemicals that are important for building the brain's nerve tissue, so that the learning process is supported.
The auditory phase, facilitated by Macromedia Flash, helps students listen to a recorded explanation. In the auditory phase, students also listen to explanations from their group mates during discussion, and the teacher then gives clarification at the end of the learning activities. Meier (2002) states that if you want to learn more, then talk about it nonstop. When students learn by talking and listening, the brain membranes and other parts of the brain are stimulated to condense and integrate the learning.
In the visual phase, moving animations, videos, and various images are displayed. These are able to attract the attention of students during the learning process. The colors and images presented make students interested and help them remember the subject matter (Eshter, 2013). In intellectual activities, students are faced with various problems that can build critical thinking skills. Together with their group, students discuss how to solve the problems presented.
SAVI learning follows modern cognitive science, which states that the best learning involves emotions, the whole body, all the senses, and all the depth and breadth of the person, and respects the learning styles of other individuals by realizing that people learn differently (Eka, 2013). The SAVI approach is capable of arousing the integrated intelligence of students as a whole through the combination of physical motion and intellectual activity, in which students are required to play an active role and all the senses support the learning process so that students' thinking skills improve (Wahyu, 2015). From this explanation, it can be concluded that students with any learning style can be facilitated by the SAVI approach with Macromedia Flash, so that students' critical thinking skills can be improved.
CONCLUSION
Based on the results of the research and discussion, it can be concluded that the SAVI approach with Macromedia Flash is effective in improving the critical thinking skills of class XI students on the excretion system material in SMA Negeri 1 Depok, Sleman. This result is shown by the significance value of 0.000, which is smaller than the alpha value (< 0.05), obtained from the independent-samples t-test using the SPSS 16.0 program. | 2020-02-13T09:15:00.676Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "961ecf0d89ca36433dc441208602c51ac585ba4b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/assehr.k.200129.127",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "35ac5b80e71766e95a2452db4b3ba3f448a83615",
"s2fieldsofstudy": [
"Education",
"Biology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
5326660 | pes2o/s2orc | v3-fos-license | Declarative Diagnosis of Floundering
Many logic programming languages have delay primitives which allow coroutining. This introduces a class of bug symptoms -- computations can flounder when they are intended to succeed or finitely fail. For concurrent logic programs this is normally called deadlock. Similarly, constraint logic programs can fail to invoke certain constraint solvers because variables are insufficiently instantiated or constrained. Diagnosing such faults has received relatively little attention to date. Since delay primitives affect the procedural but not the declarative view of programs, it may be expected that debugging would have to consider the often complex details of interleaved execution. However, recent work on semantics has suggested an alternative approach. In this paper we show how the declarative debugging paradigm can be used to diagnose unexpected floundering, insulating the user from the complexities of the execution. Keywords: logic programming, coroutining, delay, debugging, floundering, deadlock, constraints
Introduction
The first Prolog systems used a strict left to right evaluation strategy, or computation rule. However, since the first few years of logic programming there have been systems which support coroutining between different sub-goals [?]. Although the default order is normally left to right, individual calls can delay if certain arguments are insufficiently instantiated, and later resume, after other parts of the computation have further instantiated them. Such facilities are now widely supported in Prolog systems. They also gave rise to the class of concurrent logic programming languages, such as Parlog [?], where the default evaluation strategy is parallel execution and similar delay mechanisms are used for synchronisation and prevention of unwanted nondeterminism. Delay mechanisms have also been influential for the development of constraint logic programming [?]. Delays are often used when constraints are "too hard" to be handled by efficient constraint solvers, for example, non-linear constraints over real numbers.
Of course, more features means more classes of bugs. In theory delays don't affect soundness of Prolog (see [?] 1 ) -they can be seen as affecting the "control" of the program without affecting the logic [?]. However, they do introduce a new class of bug symptoms. A call can delay and never be resumed (because it is never sufficiently instantiated); the computation is said to flounder. Most Prolog systems with delays still print variable bindings for floundered derivations in the same way as successful derivations (in this paper we refer to these as "floundered answers"), and may also print some indication that the computation floundered. Floundered answers are not necessarily valid, or even satisfiable, according to the declarative reading of the program. They provide little useful information and generally indicate the presence of a bug. In concurrent logic programs the equivalent of floundering is normally called deadlock -the computation terminates with no "process" (call) sufficiently instantiated to proceed. In constraint logic programming systems, the analogue is a computation which terminates with some insufficiently instantiated constraints not solved (or even checked for satisfiability). Alternatively, if some constraints are insufficiently instantiated they may end up being solved by less efficient means than expected, such as exhaustive search over all possible instances.
There is a clear need for tools and techniques to help diagnose floundering in Prolog (and analogous bug symptoms in other logic programming languages), yet there has been very little research in this area to date. There has been some work on showing floundering is impossible using syntactic restrictions on goals and programs (particularly logic databases), or static analysis methods (for example, [?][?]). However, this is a far cry from general purpose methods for diagnosing floundering. In this paper we present such a method. Furthermore, it is a surprisingly attractive method, being based on the declarative debugging paradigm [?] which is able to hide many of the procedural details of a computation. The paper is structured as follows. We first give some examples of how various classes of bugs can lead to floundering. We then present our method of diagnosing floundering, give examples, and discuss how our simple prototype could be improved. Next we briefly consider some more theoretical aspects, then conclude. Basic familiarity with Prolog with delays and with declarative debugging is assumed.

Figure 1 gives a permutation program which has simple logic but is made reversible by use of delay primitives and careful ordering of sub-goals in perm/2 (see [?] for further discussion). The delay primitive used is the "when meta-call": a call when(Cond,A) delays until condition Cond is satisfied, then calls A. For example, the recursive call to perm/2 will delay until at least one of its arguments is a non-variable. Generally there are other features supported, such as delaying until a variable is ground; we don't discuss them here, though our method and prototype support them. A great number of delay primitives have been proposed. Some, like the when meta-call, are based on calls. Others are based on procedures (affecting all calls to the procedure), which is often more convenient and tends to clutter the source code less. Our general approach to diagnosis is not affected by the style of delay primitive. The when meta-call is by far the most portable of the more flexible delay primitives, which is our main reason for choosing it. We have developed the code in this paper using SWI-Prolog.

Fig. 1. A reversible permutation program
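Only fragments of the code in Figure 1 survive in this text (the header comments, the Bug 1 annotation (nonvar(As0) ; nonvar(A)), and the correct annotation on the recursive call to inserted/3). The sketch below is consistent with those fragments and with the later discussion of the bugs, but the exact clause heads are an assumption, and the commented-out lines for Bugs 2 and 3 are not reconstructed:

    % perm(As0, As): As = permutation of list As0
    % As0 or As should be input
    perm([], []).
    perm([A|As0], As) :-
        when((nonvar(As0) ; nonvar(As1)), perm(As0, As1)),
        when((nonvar(As1) ; nonvar(As)), inserted(A, As1, As)).

    % inserted(A, As0, As): As = list As0 with element A inserted
    % As0 or As should be input
    inserted(A, As, [A|As]).
    inserted(A, [A1|As0], [A1|As]) :-
    %   when((nonvar(As0) ; nonvar(A)),    %... Bug 1
        when((nonvar(As0) ; nonvar(As)),   %.......
             inserted(A, As0, As)).

With this reading, the Bug 1 annotation tests the wrong variable (A instead of As), so a call such as inserted(A, B, []) delays indefinitely instead of failing, which matches the diagnosis shown later.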
We consider three separate possible bugs which could have been introduced, shown as commented-out lines preceding the correct versions. They exemplify three classes of errors which can lead to floundering: incorrect delay annotations, confusion over the modes of predicates, and logical errors. With the first bug, an incorrect delay annotation on the recursive call to inserted/3, several bug symptoms are exhibited. The call perm([X,Y,Z],A) behaves correctly but perm([1,2,3],A) succeeds with the answers A=[1,2,3] and A=[1,3,2], then loops indefinitely. We don't consider diagnosis of loops in this paper, though they are an important symptom of incorrect control. The call perm(A, [1,2,3]) succeeds with the answer A=[1,2,3] then has three further floundered answers, A=[1,2,_,_|_], A=[1,_,_|_] and A=[_,_|_], before terminating with failure.
The second bug is a more subtle control error. When inserted/3 was coded, we assume the intention was that the second argument should always be input, and the delay annotation is correct with respect to this intention. However, some modes of perm/2 require inserted/3 to work with just the third argument input. When coding perm/2 the programmer was either unaware of this or was confused about what modes inserted/3 supported. Although this version of the program behaves identically to Bug 1 for the goal perm(A,[1,2,3]), the bug diagnosis will be different because the programmer intentions are different. The mistake was made in the coding of perm/2, and this is reflected in the diagnosis. The simplest way to fix the bug is to change the intentions and code for inserted/3, but we only deal with diagnosis in this paper.
Because delays are the basic cause of floundering and they are inherently procedural, it is natural to assume that diagnosing unexpected floundering requires a procedural view of the execution. Even with such a simple program and goals, diagnosis using just traces of floundered executions can be extremely difficult. Subcomputations may delay and be resumed multiple times as variables incrementally become further instantiated. Reconstructing how a single subcomputation proceeds can be very difficult, especially if there is also backtracking involved. Although some tools have been developed, such as printing the history of instantiation states for a variable, diagnosis of floundering has remained very challenging.
Declarative diagnosis of floundering
To diagnose unexpected floundering in pure Prolog programs with delays we use an instance of the three-valued declarative debugging scheme described in [?]. We describe the instance precisely in the following sections, but first introduce the general scheme. A computation is represented as a tree, with each node associated with a section of source code (a clause in this instance) and subtrees representing subcomputations. The trees we use here are a generalisation of proof trees. Each node has a truth value which expresses how the subcomputation compares with the intentions of the programmer. Normally the truth values of only some nodes are required and are found by asking the user questions. Three truth values are used: correct, erroneous, and inadmissible. Informally, the third truth value means the subcomputation should never have occurred. It means a pre-condition of the code has been violated, whereas erroneous means a postcondition has been violated. Inadmissibility was initially used to express the fact that a call was ill-typed [?] but can also be used for other purposes [?]. Here calls which flounder because they never become sufficiently instantiated are considered inadmissible.
Given a tree with truth values for each node, a node is buggy if it is erroneous but has no erroneous children. Diagnosis consists of searching the tree for a buggy node. Many search strategies are possible and [?] provides very simple code for a top-down search. The code first checks that the root is erroneous.
It then recursively searches for bugs in children and returns them if they exist. Otherwise the root is returned as a buggy node, along with an inadmissible child if any are found. In the next sections we first define the trees we use, discuss how programmer intentions are formalised, give some simple diagnosis sessions then make some remarks about search strategy.
Partial proof trees
Standard wrong answer declarative diagnosis uses Prolog proof trees which correspond to successful derivations (see [?]). Each node contains an atomic goal which was proved in the derivation (in its final state of evaluation) and the children of a node are the subgoals of the clause used to prove the goal. Leaves are atomic goals which were matched with unit clauses. We use partial proof trees which correspond to successful or floundered derivations. The only difference is they have an additional class of leaves: atomic goals which were never matched with any clause because they were delayed and never resumed.
Definition 1 ((Callable) annotated atom). An annotated atom is an atomic formula or a term of the form when(C, A), where A is an atomic formula and C is a condition of a when meta-call. It is callable if it is an atom or C is true according to the normal Prolog meaning (for ",", ";" and nonvar/1). atom(X) is the atom of annotated atom X.
Definition 2 ((Successful or floundered) partial proof tree). A partial proof tree for annotated atom A and program P is either 1. a node containing A, where atom(A) is an instance of a unit clause in P or A is not callable, or 2. a node containing A together with partial proof (sub)trees S_i for annotated atoms B_i and P, i = 1...n, where atom(A) :- B_1, ..., B_n is an instance of a clause in P.
A partial proof tree is floundered if it contains any annotated atoms which are not callable, otherwise it is successful.
Declarative debuggers use various methods for representing trees and building such representations. The declarative debugger for Mercury [?] is a relatively mature implementation. A much simpler method (which is impractical for large scale applications) is a meta-interpreter which constructs an explicit representation of the tree. Figure 2 is one such (poor) implementation which we include for completeness. Floundering is detected using the "short circuit" technique: an accumulator pair is associated with each subgoal and the two arguments are unified if and when the subgoal succeeds. Tree nodes contain an annotated atom, this accumulator pair and a list of subtrees. A subcomputation is floundered if the accumulator arguments in the root of the subtree are not identical.
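The code of Figure 2 is not reproduced in this text. A minimal sketch of such a meta-interpreter, written for SWI-Prolog (when/2, and clause/2 on the program under diagnosis, which is assumed to be loaded so its clauses can be inspected), and not the paper's actual Figure 2, might look like this; the node/3 representation and the short-circuit accumulator follow the description above:

    % solve(Body, S0, S, Trees): prove a clause body, threading the
    % short-circuit accumulator S0..S and building one tree per subgoal.
    solve(true, S, S, []) :- !.
    solve((A, B), S0, S, [TA|TBs]) :- !,
        solve_atom(A, S0, S1, TA),
        solve(B, S1, S, TBs).
    solve(A, S0, S, [TA]) :-
        solve_atom(A, S0, S, TA).

    % solve_atom(AnnAtom, S0, S, Tree): prove one (possibly when-annotated)
    % atom.  If the atom delays forever, reduce/4 never runs, so S0 and S
    % are never unified and the node is recognisably floundered.
    solve_atom(when(C, A), S0, S, node(when(C, A), S0-S, Sub)) :- !,
        when(C, reduce(A, S0, S, Sub)).
    solve_atom(A, S0, S, node(A, S0-S, Sub)) :-
        reduce(A, S0, S, Sub).

    % reduce(Atom, S0, S, Sub): resolve an atom against a built-in or a
    % program clause; the pair S0-S closes only if the whole body does.
    reduce(A, S0, S, []) :-
        predicate_property(A, built_in), !,
        call(A),
        S0 = S.
    reduce(A, S0, S, Sub) :-
        clause(A, Body),
        solve(Body, S0, S, Sub).

    % floundered(Tree): the accumulator pair of the root was never closed.
    floundered(node(_, S0-S, _)) :- S0 \== S.

A whole (possibly floundered) answer can then be obtained with, for example, solve_atom(perm(A,[1,2,3]), 0, Done, Tree): if Done is still an unbound variable when the answer is returned, the partial proof tree in Tree is floundered.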
The programmer's intentions
The way truth values are assigned to nodes encodes the user's intended behaviour of the program. For traditional declarative debugging of wrong answers the intended behaviour can be specified by partitioning the set of ground atoms into true atoms and false atoms. There can still be non-ground atoms in proof tree nodes, which are considered true if the atom is valid (all instances are true). A difficulty with this two-valued scheme is that most programmers make implicit assumptions about the way their code will be called, such as the "type" of arguments. For example, it is assumed that inserted/3 will be called in a context where (at least one of) the last two arguments must be lists. Although inserted(1,a,[1|a]) can succeed, it is counter-intuitive to consider it to be true (since it is "ill-typed"), and if it is considered false then the definition of inserted/3 must be regarded as having a logical error. The solution to this problem is to be more explicit about how predicates should be called, allowing pre-conditions [?] or saying that certain things are inadmissible [?] or having a three-way partitioning of the set of ground atoms [?].
In the case of floundering the intended behaviour of non-ground atoms must be considered explicitly. As well as assumptions about types of arguments, we inevitably make assumptions about how instantiated arguments are. For example, perm/2 is not designed to generate all solutions to calls where neither argument is a (nil-terminated) list and even if it was, such usage would most likely cause an infinite loop if used as part of a larger computation. It is reasonable to say that such a call to perm/2 should not occur, and hence should be considered inadmissible, even though more instantiated calls are acceptable. An important heuristic for generating control information is that calls which have an infinite number of solutions should be avoided [?]. Instead, such a call is better delayed, in the hope that other parts of the computation will further instantiate it and make the number of solutions finite. If the number of solutions remains infinite the result is floundering, but this is still preferable to an infinite loop.
We specify the intended behaviour of a program as follows:

Definition 3 (Interpretation). An interpretation is a three-way partitioning of the set of all atoms into those which are inadmissible, valid and erroneous. The set of admissible (valid or erroneous) atoms is closed under instantiation (if an atom is admissible then any instance of it is admissible), as is the set of valid atoms.

Note we do not have different admissibility criteria for different sub-goals in the program - the intended semantics is predicate-based. Delay primitives based on predicates thus have the advantage of being natural from this perspective. Note also that atoms in partial proof tree nodes are in their final state of instantiation in the computation. It may be that in the first call to inserted/3 from perm/2, no argument is instantiated to a list (it may delay initially), but as long as it is eventually sufficiently instantiated (due to the execution of the recursive perm/2 call, for example) it is considered admissible. However, since admissibility is closed under instantiation, an atom which is inadmissible in a partial proof tree could not have been admissible at any stage of the computation. The debugger only deals with whether a call flounders - the lower level procedural details of when it is called, delayed, resumed et cetera are hidden.
Truth values of partial proof tree nodes are defined in terms of the user's intentions: Definition 4 (Truth of nodes). Given an interpretation I, a partial proof tree node is 1. correct, if the atom in the node is valid in I and the subtree is successful, 2. inadmissible, if the atom in the node is inadmissible in I, and 3. erroneous, otherwise.
Note that floundered subcomputations are never correct. If the atom is insufficiently instantiated (or "ill-typed") they are inadmissible, otherwise they are erroneous.
Diagnosis examples
In our examples we use a top-down search for a buggy node, which gives a relatively clear picture of the partial proof tree. They are copied from actual runs of our prototype, except that repeated identical questions are removed. In Section 3.4 we discuss strategies which can reduce the number of questions; the way diagnoses are printed could also be improved. Figure 3 shows how Bug 1 is diagnosed. We use a top-level predicate wrong/1 which takes an atomic goal, builds a partial proof tree for an instance of the goal then searches the tree. The truth value of nodes is determined from the user. The debugger prints whether the node succeeded or floundered (this can be helpful to the user, and the reader, though it is not necessary), then the atom in the node is printed and the user is expected to say if it is valid (v), inadmissible (i) or erroneous (e). The first question relates to the first answer returned by the goal. It is valid, so the diagnosis code fails and the computation backtracks, building a new partial proof tree for the next answer, which is floundered. The root of this tree is determined to be erroneous and after a few more questions a buggy node is found. It is a floundered leaf node, so the appropriate diagnosis is an incorrect delay annotation, which causes inserted(A,B,[]) to delay indefinitely (rather than fail). Ideally we should also display the instance of the clause which contained the call (the debugger code in [?] could be modified to return the buggy node and its parent), and the source code location. Figure 4 shows how Bug 2 is diagnosed. It proceeds in a similar way to the previous example, but due to the different programmer intentions (the mode for inserted/3) the floundering call inserted(A,[3|B],[3]) is considered inadmissible rather than erroneous, eventually leading to a different diagnosis. Both calls in the buggy clause instance are inadmissible. The debugger of [?] returns both these inadmissible calls as separate diagnoses. For diagnosing floundering it is preferable to return a single diagnosis, since the floundering of one can result in the floundering of another and it is not clear which is the actual culprit.
?- wrong(perm(A,[1,2,3])).   (succeeded) perm([1,2,3],[1,2,3]) ...   (only the beginning of this session transcript is recoverable)

Figures 5 and 6 show how Bug 3 is diagnosed. In the first case the diagnosis is a logical error in the inserted/3 clause. In the second case the top-level goal is perm([1,2,3],A). We assume the user decides to diagnose a floundered answer, skipping over the previous answers. The diagnosis is a control error, similar to that for Bug 2. Both are legitimate diagnoses, just as logical bugs can lead to both missing and wrong answers, which typically result in different diagnoses in declarative debuggers. We have used a very simple search strategy in our examples. Suggestions for search strategies for diagnosing some forms of abnormal termination are given in [?] and these can be adapted to floundering. From our definition of truth values for nodes, we know no floundered node is correct. We also know that floundering is caused by (at least one) floundered leaf node. Thus we have (at least one) path of nodes which are not correct between the root node and a leaf. It makes sense to initially restrict our search to such a path. A top-down search of the path can be achieved simply by careful ordering of the children (examining floundered children first) in a top-down debugger. This is what we have used for our examples (see Figure 7 for the code; a sketch of such a search is given below). There is an erroneous node on the path with no erroneous children on the path. Both bottom-up and binary search strategies are likely to find this node significantly more quickly than a top-down search. Once this node is found, its other children must also be checked. If there are no erroneous children the node is buggy. Otherwise, an erroneous child can be diagnosed recursively, if it is floundered, or by established wrong answer diagnosis algorithms.
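The code of Figure 7 is not included in this text. A sketch of the kind of top-down search it describes, reusing the node/3 trees and floundered/1 from the meta-interpreter sketched earlier and asking the user for truth values, could look like the following (the interaction is simplified, ask_truth/2 re-asks rather than caching answers, and the reporting of inadmissible children is omitted):

    % search_buggy(Tree, Buggy): given a tree whose root is erroneous, find
    % an erroneous node with no erroneous children, visiting floundered
    % children first as suggested above.
    search_buggy(node(A, Acc, Subs), Buggy) :-
        tree_children(Subs, Children0),
        floundered_first(Children0, Children),
        (   member(Child, Children),
            ask_truth(Child, erroneous),
            search_buggy(Child, Buggy)
        ->  true
        ;   Buggy = node(A, Acc, Subs)
        ).

    tree_children(Subs, []) :- var(Subs), !.   % floundered leaf: no subtrees
    tree_children(Subs, Subs).

    floundered_first(Children, Ordered) :-
        partition(floundered, Children, F, NotF),
        append(F, NotF, Ordered).

    % ask_truth(Node, TV): ask the user to classify the atom in Node as
    % valid (v), inadmissible (i) or erroneous (e).
    ask_truth(node(A, _, _), TV) :-
        format("~p? (v/i/e) ", [A]),
        read(Answer),
        member(Answer-TV, [v-valid, i-inadmissible, e-erroneous]).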
Theoretical considerations
We first make some remarks about the soundness and completeness of this method of diagnosis, then discuss related theoretical work. An admissible atomic formula which flounders has a finite partial proof tree with an erroneous root and clearly this must have a buggy node. Since the search space is finite, completeness can easily be achieved. Soundness criteria come from the definition of buggy nodes (erroneous nodes with no erroneous children). The three classes of bugs mentioned in Section 2 give a complete categorisation of bugs which cause floundering. Logical errors cause successful buggy nodes. Incorrect delay annotations cause floundered leaf nodes which are admissible but delay. Confusion over the modes causes floundered internal nodes which are admissible but have one or more floundered inadmissible children. If there are also successful inadmissible ("ill-typed") children it may be more natural to say it is caused by a logical ("type") error.
Declarative diagnosis of wrong answers can hide the complex procedural details of execution because success is independent of the computation rule. Our current work on diagnosis arose out of more theoretical work on floundering [?]. Nearly all delay primitives have the property that if a certain call can proceed (rather than delay), any more instantiated version of the call can also proceed. An important result which follows from this property is similar to the result concerning success: whether a computation flounders, and the final instantiation of variables, depend on the delay annotations but not on the order in which sufficiently instantiated calls are selected. Non-floundering is also closed under instantiation, so it is natural for admissibility to inherit this restriction, and partial proof trees provide a basis for intuitive diagnoses. Our diagnosis method can be effectively applied to other delay primitives for which this property holds simply by changing the definition of callable annotated atoms.
The use of the term "declarative" in this paper may have caused unease in some readers. However, there is an interpretation of when meta-calls which allows a model-theoretic view of our diagnosis method (see [?] for further details). We partition the set of function symbols into program function symbols and extraneous function symbols. The program, goals and set of admissible atoms only contain program function symbols. We interpret nonvar(X) as meaning the principal function symbol of X is a program function symbol. Instead of a when meta-call when(C,G) being interpreted as G, we interpret it as a disjunction (G ; ¬C), where ¬C is the negation of C. For example, the meaning of when(nonvar(X),p(X)) is p(X) or the principal function symbol of X is extraneous. Extraneous function symbols are essentially used to encode variables.
A goal has a floundered derivation which uses the normal procedural interpretation of when meta-calls if and only if it has a successful derivation using an added disjunct (¬C) in the alternative interpretation. The sets of admissible and valid atoms can also be encoded in the same way: if an atom containing variables is admissible (or valid), the atom with the variables instantiated to extraneous function symbols should be admissible (or valid, respectively). Encoding our previous example, the corresponding call to perm/2, with its variables encoded as terms built from $ and $$, would be inadmissible, assuming $ and $$ are extraneous function symbols. We then have a partitioning of ground atoms into those which are true (valid), false, and inadmissible - a three-valued interpretation of the kind discussed in [?]. If this interpretation is not a three-valued model, bug symptoms can be diagnosed using declarative wrong answer diagnosis. All the diagnosis examples in this paper can be reproduced in this way, though floundering of valid atoms (which is rare in practice) cannot be diagnosed. In this paper the way truth values are assigned to tree nodes overcomes this limitation.
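The paper does not give code for this encoding; purely as an illustration, it could be realised as a source-to-source transformation along the following lines, with extraneous/1 recognising the constants that stand for variables (all predicate names here are invented for the example):

    % encode_goal(G0, G): replace each when(C, A) by (A ; NegC), following
    % the alternative interpretation described above.
    encode_goal(when(C, A), (A ; NegC)) :- !,
        neg_cond(C, NegC).
    encode_goal((A, B), (EA, EB)) :- !,
        encode_goal(A, EA),
        encode_goal(B, EB).
    encode_goal(G, G).

    % neg_cond(C, NegC): negation of a when/2 condition under the encoding,
    % where nonvar(X) means "the principal function symbol of X is a
    % program function symbol".
    neg_cond(nonvar(X), extraneous_term(X)).
    neg_cond((C1 , C2), (N1 ; N2)) :- neg_cond(C1, N1), neg_cond(C2, N2).
    neg_cond((C1 ; C2), (N1 , N2)) :- neg_cond(C1, N1), neg_cond(C2, N2).

    extraneous_term(X) :- functor(X, F, _), extraneous(F).
    extraneous('$').    % example extraneous constants standing for variables
    extraneous('$$').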
Conclusion
There has long been a need for tools and techniques to diagnose unexpected floundering in Prolog with delay primitives, and related classes of bug symptoms in other logic programming languages. The philosophy behind delay primitives in logic programming languages is largely based on Kowalski's equation: Algorithm = Logic + Control [?]. By using more complex control, the logic can be simpler. This allows simpler reasoning about correctness of answers from successful derivations -we can use a purely declarative view, ignoring the control because it only affects the procedural semantics. When there are bugs related to control it is not clear the trade-off is such a good one. The control and logic can no longer be separated. Since the normal declarative view cannot be used, the only obvious option is to use the procedural view. Unfortunately, even simple programs can exhibit very complex procedural behaviour, making it very difficult to diagnose and correct bugs using this view of the program.
In the case of floundering, a much simpler high level approach turns out to be possible. The combination of the logic and control can be viewed as just slightly different logic, allowing declarative diagnosis techniques to be used. The procedural details of calls delaying and the interleaving of subcomputations can be ignored. The user can simply put each atomic formula into one of three categories. The first is inadmissible: atoms which should not be called because they are insufficiently instantiated and expected to flounder (or are "ill-typed" or violate some pre-condition of the procedure). The second is valid: atoms for which all instances are true and are expected to succeed. The third is erroneous: atoms which are legitimate to call but which should not succeed without being further instantiated (they are not valid, though an instance may be). A floundered derivation can be viewed as a tree and this three-valued intended semantics used to locate a bug in an instance of a single clause or a call with a delay annotation. | 2007-10-31T18:40:50.000Z | 2007-10-31T00:00:00.000 | {
"year": 2007,
"sha1": "b4cc0774a8f3567e461a75c3913fba409d9bce01",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ed5b3f9bda71e0d8bdb29ebfe538a22e35ff3139",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
12758137 | pes2o/s2orc | v3-fos-license | Search for tt resonances in the lepton plus jets final state in pp collisions at √s = 1.96 TeV
We present a search for a narrow-width heavy resonance decaying into top quark pairs (X → tt) in pp collisions at √s = 1.96 TeV using approximately 0.9 fb−1 of data collected with the DØ detector at the Fermilab Tevatron Collider. This analysis considers tt candidate events in the lepton plus jets channel with at least one identified b jet and uses the tt invariant mass distribution to search for evidence of resonant production. We find no evidence for a narrow resonance X decaying to tt. Therefore, we set upper limits on σ_X · B(X → tt) for different hypothesized resonance masses using a Bayesian approach. For a Topcolor-assisted technicolor model, the existence of a leptophobic Z′ boson with mass M_Z′ < 700 GeV and width Γ_Z′ = 0.012 M_Z′ can be excluded at the 95% C.L.
PACS: 14.65.Ha, 14.70.Pw
Introduction
The top quark has by far the largest mass of all the known fermions. Unknown heavy resonances may play a role in the production of top quark pairs (tt) and add a resonant part to the Standard Model (SM) production mechanism mediated by the strong interaction. Such resonant production is possible for massive Z′-like bosons in extended gauge theories [1], Kaluza-Klein states of the gluon or Z boson [2,3], axigluons [4], Topcolor [5], and other theories beyond the SM. Independent of the exact model, resonant production of top quark pairs could be visible in the reconstructed tt invariant mass distribution.
In this Letter, we present a search for a narrow-width heavy resonance X decaying into tt. We consider the lepton + jets (ℓ + jets, where ℓ = e or μ) final state. The event signature is one isolated electron or muon with high momentum transverse to the beam axis (p T), large transverse energy imbalance (/ E T) due to the undetected neutrino, and at least four jets, two of which result from the hadronization of b quarks. The analyzed dataset corresponds to an integrated luminosity of 913 ± 56 pb−1 in the e + jets channel and 871 ± 53 pb−1 in the μ + jets channel, collected with the DØ detector between August 2002 and December 2005. The analysis uses events with at least three reconstructed jets. Backgrounds from light quarks are further reduced by identifying b jets. After b tagging, the dominant physics background for a resonance signal is non-resonant SM tt production. Smaller contributions arise from the direct production of W bosons in association with jets (W + jets), as well as instrumental background originating from multijet processes with jets faking isolated leptons. The search for resonant production in the tt invariant mass distribution is performed using Bayesian statistics to compare SM and resonant production to the observed mass distribution. Previous searches performed by the CDF and DØ Collaborations in Run I found no evidence for a tt resonance [6,7]. In these studies, a Topcolor model was used as a reference to quote mass limits. According to this model [5], a large top quark mass can be generated through the formation of a dynamical tt condensate, Z′, due to a new strong gauge force with large coupling to the third generation of fermions. In one particular model, Topcolor-assisted technicolor [8], the Z′ boson has large couplings only to the first and third generation of quarks and has no significant couplings to leptons. Limits obtained on σ_X · B(X → tt) are used to set a lower bound on the mass of such a leptophobic Z′ boson. In Run I CDF found M_Z′ > 480 GeV with 106 pb−1 of data [6], and DØ obtained M_Z′ > 560 GeV using 130 pb−1 [7], both at the 95% C.L. and for a resonance with width Γ_Z′ = 0.012 M_Z′.
DØ detector
The DØ detector [9] has a central-tracking system consisting of a silicon microstrip tracker and a central fiber tracker, both located within a 2 T superconducting solenoidal magnet, with designs optimized for tracking and vertexing at pseudorapidities |η| < 3 and |η| < 2.5, respectively. The pseudorapidity, η, is defined with respect to the beam axis. Central and forward preshower detectors are positioned just outside of the superconducting coil. A liquid-argon and uranium calorimeter has a central section (CC) covering pseudorapidities |η| ≲ 1.1, and two end calorimeters (EC) that extend coverage to |η| ≈ 4.2, with all three housed in separate cryostats [10]. An outer muon system covering |η| < 2 consists of a layer of tracking detectors and scintillation trigger counters in front of 1.8 T iron toroids, followed by two similar layers after the toroids [11]. Luminosity is measured using plastic scintillator arrays placed in front of the EC cryostats. The three-level trigger and data acquisition systems are designed to accommodate the high luminosities of Run II and record events of interest at up to about 100 Hz.
Event selection
To select top quark pair candidates in the e + jets and μ + jets decay channels, triggers that required a jet and an electron or muon are used. The event selection requires either an isolated electron with p T > 20 GeV and |η| < 1.1, or an isolated muon with p T > 20 GeV and |η| < 2.0. No additional isolated leptons with p T > 15 GeV are allowed in the event. Details of the lepton identification and isolation criteria are described in [12,13]. We require / E T to exceed 20 GeV (25 GeV) for the e + jets (μ + jets) channel. Jets are defined using a cone algorithm [14], with the cone radius defined in (y, φ) space as ΔR = √((Δy)² + (Δφ)²), where φ is the azimuthal angle and y the rapidity. The selected events must contain three or more jets with p T > 20 GeV and |y| < 2.5. At least one of the jets is required to have p T > 40 GeV. Events with mismeasured lepton momentum are rejected by requiring the / E T to be acollinear with the lepton direction in the transverse plane [15].
To improve the signal-to-background ratio, at least one jet is required to be identified as a b jet. The tagging algorithm uses the impact parameters of tracks matched to a given jet and information on vertex mass, the decay length significance, and the number of participating tracks for any reconstructed secondary vertex within the cone of the given jet. The information is combined in a neural network to obtain the output variable, NN B , which tends towards one for b jets and towards zero for light quark jets [16]. In this analysis we consider jets to be b-tagged if NN B > 0.65 which corresponds to a tagging efficiency for b jets of about 55% with a tagging rate for light quark jets of less than 1%.
We independently analyze events with three and four or more jets and separate singly tagged and doubly tagged events, since the channels have different signal-to-background ratios and systematic uncertainties.
Signal and background modeling
Simulated events are used to determine selection efficiencies for the resonant tt production signal and for background sources except those in which instrumental effects give fake leptons and / E T in multijet production events. Samples of resonant tt production are generated with pythia [17] for ten different choices of the resonance mass M X between 350 GeV and 1 TeV. In all cases, the width of the resonance is set to Γ X = 0.012M X . This qualifies the X boson as a narrow resonance since its width is smaller than the estimated mass resolution of the DØ detector of 5-10%. The generated resonance is forced to decay into tt.
Standard Model tt and diboson backgrounds (W W , W Z, and Z Z ) are generated with pythia [17]. Single top quark production is generated using the comphep generator [18]. A top quark mass of 175 GeV is used for both resonant and SM top production processes. W + jets and Z + jets events are generated using alpgen [19] to model the hard interaction and pythia for parton showering, hadronization and hadron decays. To avoid double counting between the hard matrix element and the parton shower, the MLM jet-matching algorithm is used [20]. The CTEQ6L1 parton distribution functions [21,22] are used for all samples. The generated events are processed through the full geant3-based [23] simulation of the DØ detector and the same reconstruction program as used for data.
The SM tt, single top quark, diboson, and Z + jets backgrounds are estimated completely from Monte Carlo (MC) simulation, to obtain the total acceptance as well as the shape of the reconstructed tt invariant mass distribution. Trigger inefficiencies and differences between data and MC lepton and jet identification efficiencies are accounted for by weighting the simulated events [15]. Jet b-tagging probabilities are measured in data and parametrized as functions of p T and η. They are used to weight each simulated event according to its event b-tagging probability. Finally, the expected yields are normalized to the SM theoretical prediction. A tt production cross section of σ tt = 6.77 ± 0.60 pb for m t = 175 GeV [24] is used. Z + jets, single top quark and diboson samples are normalized to their next-to-leading-order cross sections [25][26][27].
The W + jets background is estimated from a combination of data and MC information. The expected number of W + jets events in the b-tagged sample is computed as the product of the estimated number of W + jets events before b tagging and the expected event b-tagging probability. The former is obtained from the observed number of events with real leptons in data, computed using the matrix method [12], after subtracting the expected contribution from other SM production processes. The b-tagging probability is obtained by combining the W + jets flavor fractions estimated from MC with the event b-tagging probability, estimated from b-tag rate functions. The shape of the reconstructed invariant mass distribution is obtained from the MC simulation. The multijet background is completely determined from data. The total number of expected events is estimated by applying the matrix method to each of the b-tagged subsamples. The shape is derived from events with leptons failing the isolation requirements. A summary of the prediction for the different background contributions in the combined ℓ + jets channels, along with the observed number of events in data, is given in Table 1. Systematic uncertainties are discussed below.
Reconstruction of the tt invariant mass distribution
The tt invariant mass is reconstructed from the four-momenta of up to the four highest p T jets, the lepton momentum, and the neutrino momentum. The latter is obtained from the missing transverse energy and a W-mass constraint. The neutrino transverse momentum is identified with the missing transverse momentum, given by / E T and its direction. The neutrino momentum along the beam direction, p ν z, is estimated by solving the equation M_W² = (p_ℓ + p_ν)², where p_ℓ (p_ν) is the lepton (neutrino) four-momentum. If there are two solutions, the one with the smaller |p ν z| is taken; if no solution exists, p ν z is set to zero. This method gives better sensitivity for high mass resonances than a previously applied constrained kinematic fit technique [7], since for M tt ≳ 700 GeV the jets from the hadronically decaying W boson are more likely to be reconstructed as a single jet instead of two jets, and in such cases the assumptions made in the kinematic fit are invalid. The sensitivity for lower resonance masses is slightly reduced from that for the constrained fit. The direct reconstruction also allows the inclusion of data with fewer than four jets in the case that some jets are merged, further increasing the sensitivity. The expected tt invariant mass distributions for three different resonance masses are compared to the SM expectation in Fig. 1.
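The W-mass constraint leads to a quadratic equation in p_z^ν. Its solution is not spelled out in the text; in the standard form (assuming a massless lepton and neutrino, with μ collecting the transverse terms) it reads:

    M_W^2 = 2\left(E_\ell E_\nu - \vec{p}_\ell \cdot \vec{p}_\nu\right)
    \;\Rightarrow\;
    p_z^{\nu} = \frac{\mu\, p_z^{\ell} \pm \sqrt{\mu^2 (p_z^{\ell})^2 - \left(E_\ell^2 - (p_z^{\ell})^2\right)\left(E_\ell^2 (p_T^{\nu})^2 - \mu^2\right)}}{E_\ell^2 - (p_z^{\ell})^2},
    \qquad
    \mu = \frac{M_W^2}{2} + \vec{p}_T^{\,\ell} \cdot \vec{p}_T^{\,\nu},

where the two-fold ambiguity is resolved by taking the solution with the smaller |p_z^ν|, as stated above, and a negative discriminant corresponds to the "no solution" case.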
Systematic uncertainties
The systematic uncertainties can be classified as those affecting only normalization and those affecting the shape of any of the signal or background invariant mass distributions. The systematic uncertainties affecting only the normalization include the theoretical uncertainty on the SM prediction for σ tt (9%), the uncertainty on the integrated luminosity (6.1%) [28], and the uncertainty on the lepton identification efficiencies.
The systematic uncertainties affecting the shape of the invariant mass distribution as well as the normalization are studied in signal and background samples. These include uncertainties on the jet energy calibration, jet reconstruction efficiency, and b-tagging parameterizations for b, c and light jets. The effect due to the top quark mass uncertainty is computed by changing m t in the simulation of tt to 165 GeV and 185 GeV, normalized to their corresponding theoretical cross sections. The effect is scaled to correspond to a top quark mass uncertainty of ±5 GeV. The difference in the tt acceptance due to the top quark mass variation is also included in the systematic uncertainty. The fraction of heavy flavor in the W + jets background is measured in control samples, and a corresponding uncertainty on the W + jets flavor composition is used. The uncertainties on b-fragmentation and on the efficiencies used in the matrix method are also taken into account. Table 2 gives a summary of the relative systematic uncertainties on the total SM background normalization for the combined ℓ + jets channels. The sample dependence of the background composition and the use of data- and MC-based methods to estimate the backgrounds induce a sample-dependent overall luminosity uncertainty. The effect of the different systematic uncertainties on the shape of the tt invariant mass distribution cannot be inferred from this table, but is included in the analysis.
Result
After all selection cuts, 319 events remain in the e + jets channel and 288 events in the μ + jets channel. The sums of all SM and multijet instrumental backgrounds are 303 ± 22 and 251 ± 19 events, respectively. The data are compared with the SM expectation in Fig. 2 for the 3 jets and 4 jets samples.
Table 2
The relative systematic uncertainties on the overall normalization of the SM background and for a resonance mass of M X = 650 GeV, with at least one b-tagged jet. The uncertainties shown are symmetrized. The actual asymmetric uncertainties and the effect of shape-changing systematic errors are used in the limit setting.
Finding no significant deviation from the SM expectation, we apply a Bayesian approach to calculate 95% C.L. upper limits on σ X · B( X → tt) for hypothesized values of M X between 350 and 1000 GeV. A Poisson distribution is assumed for the number of observed events in each bin, and a flat prior probability is taken for the signal cross section times branching fraction. The prior for the combined signal acceptance and background yields is a multivariate Gaussian with uncertainties and correlations described by a covariance matrix [29].
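As an illustration of the statistical procedure only (omitting the systematic uncertainties and correlations that enter through the multivariate Gaussian prior), a minimal per-bin Poisson limit calculation with a flat signal prior might look as follows; the function name, grid and inputs are placeholders, not the actual analysis code.

```python
import numpy as np

def bayesian_upper_limit(observed, background, signal_per_xs, cl=0.95,
                         xs_grid=np.linspace(0.0, 20.0, 2001)):
    """Toy 95% C.L. Bayesian upper limit on a signal cross section.

    observed      : observed event counts per mass bin
    background    : expected background per bin (assumed > 0)
    signal_per_xs : expected signal events per bin for unit cross section
    The posterior is the product of per-bin Poisson likelihoods times a flat prior.
    """
    observed = np.asarray(observed, dtype=float)
    background = np.asarray(background, dtype=float)
    signal_per_xs = np.asarray(signal_per_xs, dtype=float)

    log_post = np.array([
        np.sum(observed * np.log(background + xs * signal_per_xs)
               - (background + xs * signal_per_xs))
        for xs in xs_grid
    ])
    post = np.exp(log_post - log_post.max())   # flat prior: posterior proportional to likelihood
    cdf = np.cumsum(post) / post.sum()
    return xs_grid[np.searchsorted(cdf, cl)]
```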
The expected and observed 95% C.L. upper limits on σ X · B( X → tt) as a function of M X , after combining the 1 and 2 b-tag samples and the 3 and 4 jets samples, are summarized in Table 3 and displayed in Fig. 3. This figure also includes the predicted σ X · B( X → tt) for a leptophobic Z′ boson with Γ Z′ = 0.012 M Z′ computed using CTEQ6L1 parton distribution functions. The comparison of the observed cross section limits with the Z′ boson prediction excludes M Z′ < 700 GeV at the 95% C.L. Due to a small excess of data over expectation (of no more than 1.5σ significance) for invariant masses in the range between 600 and 700 GeV, the observed limits do not reach the expected limit of 780 GeV for the Z′ boson.
Conclusion
A search for a narrow-width heavy resonance decaying to tt in the ℓ + jets final states has been performed using data corresponding to an integrated luminosity of about 0.9 fb −1 , collected with the DØ detector at the Tevatron collider. By analyzing the reconstructed tt invariant mass distribution and using a Bayesian method, model-independent upper limits on σ X · B( X → tt) have been obtained for different hypothesized masses of a narrow-width heavy resonance decaying into tt. Within a Topcolor-assisted technicolor model, the existence of a leptophobic Z′ boson with M Z′ < 700 GeV and width Γ Z′ = 0.012 M Z′ is excluded at the 95% C.L. | 2019-10-08T15:09:38.251Z | 2018-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "dc3c53decc0be684a2afdd699a99c1ab4f792d63",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2008.08.027",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4401b29ec8f4989b633cc5ec9b597b113489eb99",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
59490783 | pes2o/s2orc | v3-fos-license | Implementation of a Mobile Robot Platform Navigating in Dynamic Environment
Currently, autonomous navigation of wheeled mobile robots in unknown environments is a great challenge. Obstacle avoidance and path planning are the backbone of autonomous control, as they enable the robot to reach its destination without collision. Dodging obstacles in a dynamic and uncertain environment is the most complex part of the obstacle avoidance and path planning tasks. This work deals with the implementation of a simple approach to static and dynamic obstacle avoidance. The robot starts by executing a collision-free optimal path loaded into its controller; it then uses its sensors to avoid the unexpected obstacles which may occur on that path during navigation.
Introduction
Robotics helps humans in difficult, repetitive or tedious tasks; indeed, it embodies the long-standing goal of substituting machines for humans in such tasks. The perception and reasoning capabilities of robots are progressing every day, and in the future they will play an increasingly important role in our lives.
Navigation of mobile robots in dynamic environments still represents a challenge for real-world applications. The robot should be able to reach its goal position while navigating safely among moving people or vehicles, facing the implicit uncertainty of the surrounding world and the limits of its perception system.
The problem of autonomous navigation has been deeply studied in the literature and several techniques have been developed. These techniques use probabilistic and complex algorithms which are hard to implement and require specific sets of hardware and software [1], [2].
In this paper, a technique that is easy to implement using low-cost materials is introduced. First, the robot is supposed to follow a predefined optimal free path, generated using the method given by [3], to navigate safely in a static environment. Then, a strategy enabling the robot to navigate in a dynamic environment considering uncertain obstacles is proposed; sonar sensors are used for this purpose.
Path generation and obstacle avoidance in static environment
First, and prior to treating the case of a dynamic environment, the robot needs to generate a navigation path in a static environment. This trajectory is then saved in the internal memory of the robot and executed repeatedly, while unexpected (including dynamic) obstacles are avoided at the same time.
Path planning approach
A path planning and obstacle avoidance approach using parametric curves was used for the generation of the predetermined path that the robot should follow [4]. It is assumed that the current position coordinates of the robot and of the target are known using exteroceptive sensors. The robot can measure the distance to nearby obstacles that are in the range of its sensors and avoid them to reach the goal by following an optimal smoothed path generated by running the algorithm proposed in [4], [5].
Figure 1. Path planning using parametric curves and control points insertion
In [5], a linear parametric curve is used for path planning, and this curve is then smoothed around the control points. The linear path connecting the initial position (S) to the target (T) is considered first. If the trajectory hits an obstacle, a control point is introduced between the initial position and the target, and an intermediate connection point is created as shown in Fig. 1. Then, the collision between the line (QT), connecting the control point Q to the target T, and the obstacle is tested. In case of collision, another control point must be added, and so on (see Fig. 1).
Non-uniform rational B-spline (NURBS) curves were used to smooth the path, as they preserve the C2 continuity of the trajectory, which is very important for the robot dynamics. Furthermore, the addition of control points to avoid obstacles does not affect the whole trajectory. The smoothed curve is shown in green in Fig. 1.
Static trajectory data extraction
There are several ways to generate a path for a robot to follow. Arc trajectories [4] are easy to generate, but combining obstacle avoidance with other factors changing in a dynamic environment is difficult because of the circular arcs. Trajectory generation using Bézier curves was therefore introduced [6], which allows the orientation and direction of the robot to be defined. However, the major drawback of this method is the mathematical complexity of Bézier curves, which makes it difficult to create paths the robot can follow at high speed; the same problem arises with straight-line guidance, where the robot must stop moving in order to turn.
To reduce these problems, in our work the robot follows the trajectory generated using NURBS. First, the following parameters must be extracted from the NURBS curve generated using Visual Basic instructions [5]:
• The tangent at the initial position, which defines the initial orientation of the robot.
• The distance (d i) between each two control points, which defines the distance that the robot has to move and can be calculated as d i = √((x i+1 − x i)² + (y i+1 − y i)²).
• The tangent at each control point Q i, which defines the orientation α i of the robot (see Fig. 2(a)).
• The speed of the robot, defined by the derivative of the NURBS curve at each control point; between the control points, the robot moves with a constant speed.
Initially, the NURBS curve is presented as a database containing the coordinates (x i, y i) of each control point Q i. Next, the distances d i and the orientations α i are computed as explained above and as shown in Fig. 2(b); a short sketch of this conversion is given below. After extracting these data, they are recorded as an Excel database (*.xls) and sent to the robot platform via a USB (universal serial bus) port, which allows serial data transfer from the PC to the robot microcontroller. This free-path data will be executed by the robot platform. If the robot encounters an unexpected obstacle, it sends an error message to the microcontroller to notify the existence of an obstacle not present in the map of the environment.
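A minimal sketch of this conversion step is given below. It is an illustration, not the authors' Visual Basic code: the function name and data layout are assumptions, and the heading of each straight segment is used as a simple stand-in for the NURBS tangent at the corresponding control point.

```python
import math

def path_to_commands(control_points):
    """Convert control-point coordinates (x_i, y_i) into (distance, heading) pairs.

    For each pair of consecutive control points, the Euclidean distance d_i and
    the heading angle alpha_i (in degrees) of the connecting segment are computed,
    which approximates the information the robot platform executes between points.
    """
    commands = []
    for (x0, y0), (x1, y1) in zip(control_points, control_points[1:]):
        d = math.hypot(x1 - x0, y1 - y0)                      # distance d_i
        alpha = math.degrees(math.atan2(y1 - y0, x1 - x0))    # heading alpha_i
        commands.append((d, alpha))
    return commands

# illustrative control points extracted from the NURBS curve
print(path_to_commands([(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]))
```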
Obstacles avoidance in dynamic environment
A function called "dynamic" is created to support the dynamic obstacle avoidance.This function helps the robot to navigate in an uncertain environment with four (04) ultrasonic sensors that will collect the environmental information, locate the obstacles and avoid them.The principle of this function is as follows: • Read the data of the front sensor; • If there is an obstacle, the robot moves backward for a distance that allows it to avoid the obstacle without collision by a test of the rear sensor; • Read the data of the right sensor; • If there is no obstacle the robot turns right and reads the right sensor data until it exceeds the obstacle then it recovers the initial trajectory by the inverse performance of data travelled; • If there is an obstacle the robot tests the left sensor, otherwise it avoids the obstacle from the left.• If both sensors indicate that the path is not free an error message will be sent to the controller to assign that "the way is occupied"; • If a moving obstacle moves toward the robot, the robot will move backward with sending a message that contains the value # DANGER#.The flowchart in Fig. 3 summarizes the proposed approach.
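The following Python-style pseudocode sketches one iteration of this loop. The sensor and motion interfaces (read_distance, move, turn, send_message) and the clearance threshold are hypothetical placeholders, not the actual firmware running on the platform.

```python
SAFE_CM = 20  # assumed clearance threshold in centimetres

def dynamic_step(robot):
    """One iteration of the assumed obstacle-avoidance loop described above."""
    if robot.read_distance("front") > SAFE_CM:
        return "path clear"                        # keep following the stored path
    # obstacle ahead: back off while the rear sensor reports free space
    while robot.read_distance("rear") > SAFE_CM and robot.read_distance("front") <= SAFE_CM:
        robot.move(-1)
    if robot.read_distance("right") > SAFE_CM:
        robot.turn("right")                        # bypass the obstacle on the right
    elif robot.read_distance("left") > SAFE_CM:
        robot.turn("left")                         # otherwise try the left side
    else:
        robot.send_message("the way is occupied")  # both sides blocked
    return "avoiding"
```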
Experimental results
To validate our approach, it has been tested in several scenarios.
In the scenario shown in Fig. 6, the robot follows the path planned previously and loaded into its controller; once an unexpected obstacle appears, the robot stops, goes back (for a small distance which permits it to avoid the obstacle), then turns right and avoids the obstacle. If no other obstacle occurs, the robot turns left and comes back to its initial path. Hence, the robot avoids the obstacle on the side free of obstacles, and it does not come back to its initial path until the sensor facing the obstacle is free.
One important case to notice is when the robot avoids the obstacles on a specific side and detects another obstacle in that direction. The robot automatically goes back and tries the other side. If the two sides are blocked, the robot backs off in a straight line and avoids the obstacle again.
When the robot is driven out of its path by many obstacles, it always tries to rejoin its path.
Results and discussion
This robot is equipped with ultrasonic sensors, which make it able to detect transparent objects and navigate in an indoor environment. It can therefore be used in houses, offices, supermarkets, etc.; for example, the robot can detect glass sliding doors.
In addition, the robot can navigate in outdoor environments where the surface can be rough, because of its chassis with suspension (see Fig. 5) and the approach that allows it to navigate in uncertain environments.
However, there are some critical cases that this robot can encounter which are not taken into consideration in our work. The following section discusses some of these cases.
Some critical cases
Our robot encounters great difficulties in detecting circular objects and obstacles with a width lower than 5 cm, due to the characteristics of the ultrasonic sensors: the emitted sound waves hit the obstacle, change direction and do not return to the sensor, so obstacle avoidance is not very accurate in this case.
In case 1 of Fig. 7, the distance measurement is accurate because the obstacle is perpendicular to and in front of the sensor; in case 2, however, an inaccurate measurement is generated because of the inclined position relative to the sensor. Another critical situation arises for positions which generate contradictory measurements, as shown in case 3 of Fig. 7.
Figure 7. Some critical cases (cases 1, 2 and 3) which cannot be avoided accurately by the robot.
Apparently, the robot will not detect objects that have a clearance of more than 10 cm in height, particularly when the object has a substantial length; this length may be a function of the speed. This situation can cause damage to the robot (crushing), as shown in Fig. 8. Another drawback to notice is that high-speed obstacles cannot be avoided: the obstacle velocity should be less than or equal to the robot's velocity.
Conclusion
In this work, the design, simulation and real-time testing of an autonomous mobile robot in an unknown environment have been successfully carried out. We were able to come up with a navigation strategy and test its effectiveness in a real-time changing indoor environment. Despite some defects, the results were satisfying and the robot was able to navigate safely in a dynamic environment.
Hence, in future work, the robot's performance can be enhanced by installing a GPS module to localize the robot at any given moment. This will simplify the program and drive the robot to any goal with high precision.
Moreover, the ultrasonic sensors can be replaced by laser sensors, which have a wider range, to recognize the exact shape and position of obstacles. We can go further by adding a radar sensor to work in outdoor (rough) environments, since the performance of the ultrasonic sensors is limited.
Figure 2. AutoCAD simulation results. (a) Path planning in a static environment. (b) Static trajectory data extraction.
Figure 3. Flowchart summarizing the approach of obstacle avoidance in a dynamic environment.
Figure 4. The robot circuit platform. 1: Arduino MEGA2560 card, which contains the program that allows the robot to navigate in its environment; 2: ultrasonic sensors used to calculate the distance between the robot and the obstacles; 3: 6 V battery energy source; 4: DC motors; 5: on/off switch; 6: L293D used to drive the DC motors.
Figure 5.
Figure 6. Obstacle avoidance scenario. Before turning, the robot tests the two side sensors; it avoids the obstacle on the side free of obstacles and does not come back to its initial path until the sensor facing the obstacle is free.
Figure 8. Objects having a clearance of more than 10 cm in height.
Table 1 is an illustration of the database containing the coordinates (x i, y i) of the control points Q i constituting the free path, and Table 2 illustrates the database that contains the translation of these data into distances and orientations.
Table 1. Illustration of the database extracted from the static free path (the control points' (x, y) coordinates).
Table 2. Translation of Q i coordinates to distances and orientations. | 2018-12-20T18:08:45.697Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "3f89b9de319bf1fa5246e2fa5797f3774d610582",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/09/matecconf_icmme2017_08004.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3f89b9de319bf1fa5246e2fa5797f3774d610582",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
269912614 | pes2o/s2orc | v3-fos-license | Uncommon Presentation of Uterine Artery Pseudoaneurysm: A Case Report of Post Cesarean Hemorrhage
Uterine artery pseudoaneurysm is an uncommon cause of vaginal bleeding that can occur after cesarean section, hysterectomy, myomectomy, uncomplicated vaginal delivery, as well as other gynecologic surgery. A 27-year-old woman (P2L2) who had undergone a cesarean section 3 years earlier presented with the chief complaint of heavy uterine bleeding for 3 months following 3 years of amenorrhea, and was found to have a 3.5 × 2.3 cm right uterine artery pseudoaneurysm on ultrasound. The patient was referred to interventional radiology for angiography and uterine artery embolization to minimize the risk of spontaneous hemorrhage. This is, to our knowledge, a rarely reported case of a uterine artery pseudoaneurysm associated with an old cesarean section. Computed tomography angiography, with multiplanar and maximal intensity projection images, can optimally display the pseudoaneurysm and the feeding vessel, which provides valuable information for image-guided catheter embolization.
Introduction
A pseudoaneurysm is a blood-filled cavity with turbulent flow that communicates with the parent vessel through a deficiency of the arterial wall. Pseudoaneurysm of the uterine artery, a rare, life-threatening complication causing vaginal bleeding, has been associated with cesarean or vaginal delivery and gynecologic surgery [1].
Typically, patients are diagnosed after rupture of the pseudoaneurysm, leading to hemorrhage [2]. Doppler sonography and arteriography are usually used as diagnostic tests [3]. More importantly, transcatheter uterine artery embolization (UAE) has emerged as an effective method to control obstetric and gynecologic hemorrhage [4], especially in the treatment of pseudoaneurysm after iatrogenic injury. We report a case of pseudoaneurysm of the right uterine artery presenting after cesarean section [5].
Case Presentation
A 27-year-old woman (P2L2; first delivery by NVD, second by LSCS 3 years earlier) presented with the chief complaint of heavy uterine bleeding for 3 months following 3 years of amenorrhea, with a history of abruption in the previous pregnancy. She was a known case of hypothyroidism (on Tab. Levothyroxine 50 micrograms OD). She had not been on any medication since the last childbirth. The patient denied any clinical symptoms of vaginal pain or bleeding. She did not have abdominal pain, dizziness, or dysuria. She had a history of amenorrhea for the last 3 years (since the last childbirth). Her recent episode of bleeding was associated with no vaginal discharge or pelvic pain. On presentation, her vital signs included a temperature of 97.6 °F, heart rate of 92 beats/min, respiratory rate of 16 breaths/min, and blood pressure of 133/89 mm Hg. Physical examination revealed a well-appearing patient in no acute distress, with a regular heart rate and clear lung fields. Her abdominal wall was soft and non-tender. Pelvic examination revealed normal external genitalia, vulva, vagina, cervix, uterus, and adnexa. Her hemoglobin level was 7.6 g/dL, for which she received 2 units of packed red cells. Pelvic ultrasound demonstrated a 3.5 × 3.1 cm heterogeneous area in the myometrium with significant vascularity and low-impedance flow on color Doppler (features suggestive of an arteriovenous malformation (AVM)?). The patient was reluctant to undergo surgical intervention and wanted medical management, so 3 doses of injection methotrexate with leucovorin rescue 24 hours later (as per the multiple-dose protocol) were initially tried, but no improvement in symptoms or ultrasound findings was noted. The patient was then planned for CT angiography. An intravenous (IV) contrast-enhanced computed tomography (CT) angiogram of the abdomen and pelvis was performed, which showed features suggestive of a pseudoaneurysm of an arcuate branch of the right uterine artery. Interventional radiology consultation was therefore obtained and the patient was planned for uterine artery embolization, which was performed under all aseptic precautions. In the postoperative period, the patient was put on antibiotics. Proper vital monitoring was done; her vital signs remained stable and the patient was doing well overall, so she was discharged in good condition.
Discussion and Conclusion
We report a 27-year-old woman (P2L2) who presented with the chief complaint of heavy uterine bleeding for 3 months following 3 years of amenorrhea (without any history of dilatation and curettage), and who was found to have a pseudoaneurysm of the right uterine artery. This vascular abnormality results from disruption of the arterial wall: blood flows out of the damaged artery and dissects the adjacent tissue, forming a sac that communicates with the arterial lumen. A true aneurysm consists of a three-layered wall, which differentiates it from a pseudoaneurysm [6]. If a pseudoaneurysm is not recognized, its enlargement and rupture can occur through the extraluminal turbulent flow, possibly leading to bleeding. Since traumatic injury of the uterine artery wall leads to uterine artery pseudoaneurysms, it is crucial to consider this as a differential diagnosis when there is vaginal bleeding, especially in the postoperative or postpartum setting [7]. Pseudoaneurysm of the uterine artery can occur after dilatation and curettage, myomectomy, an uncomplicated vaginal delivery, hysterectomy, and cesarean section [8]. Several diagnostic approaches use noninvasive radiological imaging techniques, including CT, magnetic resonance imaging, and ultrasound, to identify pseudoaneurysm of the uterine artery [9]. Previously, surgical management, including internal iliac artery ligation and laparotomy, has been performed as a treatment [10]. Recently, image-guided catheter embolization has emerged as a reliable and cutting-edge method to treat uterine artery pseudoaneurysm [11]. Covered stents and ultrasound-guided thrombin injection are other possible treatments, requiring local expertise and an experienced operator [12].
Compared with the available case reports in the literature, our patient was unique in having a prolonged period of amenorrhea following an old cesarean section (with no history of dilatation and curettage), and the aneurysm did not respond to medical management. Angiography and UAE were performed to decrease the risk of spontaneous bleeding, and she was discharged from the hospital without any complication.
Uterine artery pseudoaneurysm in a 26-year-old female: (A) axial, (B) sagittal and (C) coronal CT arterial-phase images showing a saccular outpouching (arrows in A and B) opacified with contrast within the uterine cavity, in the middle third of the endometrium and extending into the posterior myometrium, with the arrow in (C) showing it to be arising from the arcuate branch of the right uterine artery; (D) VR angiogram showing the pseudoaneurysm (arrow) arising from the right uterine artery. | 2024-05-20T15:10:50.369Z | 2023-08-05T00:00:00.000 | {
"year": 2023,
"sha1": "2acad77174c07be18b405a5bd4d69c6c648ad979",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/sr23824014627",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d8871d4fb7473ed1b660fb211d1f2425b419f166",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
9136211 | pes2o/s2orc | v3-fos-license | Mixing unmixables: Unexpected formation of Li-Cs alloys at low pressure
An unexpected Li-Cs alloy with unusual Cs-to-Li charge transfer, characterized from the valence charge topology, has been synthesized below 10 GPa.
INTRODUCTION
According to the time-honored empirical Miedema (1) and Hume-Rothery rules (2), if there is a large disparity in the atomic sizes and small difference in electronegativity, some binary alloys are not expected to form solid solutions under ambient conditions. An example is elemental lithium (Li) and cesium (Cs), where no Li-Cs alloy has been found to exist under ambient pressure. Pressure has been recognized to be an efficient thermodynamic parameter to modify the electron states, and the synthesis may become feasible. Highly compressed Li-Cs is a system of topical interest and has motivated several theoretical investigations. In one theoretical study, it is shown that the application of pressure can alter the repulsive nature of the nonreactive Li-Cs mixture, converting an unequivocal phase separation situation at ambient pressure to a strong long-range ordering at high pressures (>50 GPa) (3). Zhang and Zenger predicted that intermetallic crystalline structures can be stabilized as a result of a pressure-induced increase in charge transfer from Li to Cs. A more recent study also predicted that under pressure, electrons can be transferred from Li in Li-rich Li-Cs alloys, causing Cs to become anionic with a formal charge larger than −1 (4). These proposals are not unreasonable because pressure-promoted electron transfer in solids is not uncommon. A good example is the formation of solid solutions of K 2 Ag and K 3 Ag (5). Theoretical analysis showed that the K-Ag alloys form because of the electronegativity difference resulting in electrons being transferred from K 4s to unfilled Ag 5s and 5p orbitals. As a result, the overlaps between spatially extended Ag 5p orbitals are enhanced and this helps to stabilize the alloy framework with the smaller K ions as "spectators" occupying the interstitial sites (6). With this view in mind, the proposed charge transfer from Li to Cs at high pressure is not unreasonable. The motivation for this study is to explore these hypotheses and the possible existence of novel Li-Cs alloys under pressure. We uncovered several surprising results. As demonstrated below, Li-Cs alloys can readily form at very low pressure. Notably, the structure of a nonstoichiometric crystalline Li-Cs inter-metallic compound was resolved, and the analysis of the electron density topology revealed an unusual charge transfer from Cs to Li and forming Li anion.
RESULTS
Mixtures of Li and Cs metals of different concentrations were examined by in situ synchrotron powder x-ray diffraction under pressure and temperature. In the first experiment, the x-ray diffraction pattern was recorded immediately after a sample loading (~1:1 volume ratio, that is, Cs-dominant given the higher density of Cs) at 1.77 GPa and room temperature, as shown in Fig. 1. Even at this very small compression, a complex x-ray diffraction pattern was observed, with Bragg reflections not belonging to elemental Cs and Li. Moreover, changes in the diffraction peak positions with pressure suggested that there was more than one phase present in the sample. As illustrated in Fig. 1, two phase transitions were also detected at 2.60 and 4.5 GPa and at room temperature in the solid mixture. The complexity and the possibility of multiple components in the recorded x-ray diffraction patterns have prevented the determination of the underpinning crystalline structures. Because the possibility of phase mixtures is not ruled out, the identification of individual x-ray diffraction patterns and structure determination require a more detailed analysis. The most significant conclusion following from this first part of our experimental study is that the formation of Li-Cs crystalline alloys does occur unambiguously in a pressure regime much lower than theoretically predicted, that is, >50 GPa (3). The observed structures are likely not related to the predicted Li-rich Cs alloys (4).
In the second set of experiments, an approximate equimolar Li and Cs mixture was prepared. This was accomplished by first computing the relative volumes appropriate to achieve this stoichiometry from the elemental densities. Then, the respective volume of metals was visually estimated and appropriately loaded in the diamond anvil cell (DAC). The x-ray diffraction pattern recorded at the initial pressure of 0.12 GPa and 298 K ( Fig. 2A) indicated a mixture of body-centered cubic elemental Li and Cs. At this Li-Cs concentration, alkali metals with small and large atomic radii do not intermix to form solid solutions. However, marked changes in the x-ray diffraction patterns were observed starting at~3 GPa, as a hint on the formation of a new crystalline phase. As shown in Fig. 2A, a much simpler x-ray diffraction pattern had emerged at 4 GPa, with no further changes recorded up to 8 GPa. The x-ray diffraction pattern obtained can be indexed to a primitive cubic unit cell (Pm-3m). The cubic cell parameter monotonously decreases from 3.585(9) Å at 3 GPa to 3.489(3) Å at 8 GPa (Fig. 2, bottom). To investigate the stability of the new alloy phase, we increased the sample temperature by resistive heating to 498 K. Starting at 8 GPa, the final pressure increased to 9.6 GPa upon heating. This resulted in no further change in the x-ray diffraction pattern (Fig. 2B). While keeping the temperature constant at 498 K, the pressure was then gradually reduced. The new crystalline structure was maintained at a pressure as low as 1.9 GPa at 498 K. At lower pressures, the x-ray diffraction pattern of the sample showed a dissociation of the alloy back to elemental Li and Cs solids. For the same sample, the x-ray diffraction pattern only revealed a single phase in the range of 3 to 10 GPa, regardless of the temperature (room or high temperature).
One striking observation was the very large coefficient of linear thermal expansion (α) of the unit cell size at 298 and 498 K (Fig. 3, top). At 3 GPa, α is calculated to be 1.7 × 10 −4 K −1 , which is one order of magnitude larger than that of most solids under ambient pressure. However, as the pressure was increased, the α value for the Li-Cs alloy dropped; at 8 GPa, it decreased to 1.1 × 10 −4 K −1 . The large α obtained at low pressure suggests a rather weak atomic interaction (and possibly very large anharmonicity). The anomalously large α is reminiscent of the fluxional dynamics observed in the "distorted simple cubic" phase of calcium, for which an α value of 2.4 × 10 −4 K −1 was measured at 42 GPa (7).
Attempts to determine the atomic positions in the crystalline alloy using conventional methods were not successful in producing a reasonable structural model. The x-ray powder diffraction data, suitably well averaged and presenting minimal preferred orientation, were then analyzed using the maximum entropy method (MEM) (8). With MEM, a structural model can be obtained from the three-dimensional (3D) charge density probability distribution directly computed from the intensities of the Bragg reflections, with the space group and the lattice parameter(s) of the crystal being the only input. MEM is particularly well suited for high-pressure experiments because x-ray diffraction information is often limited and restricted to low Bragg angle reflections. Previous studies have shown that MEM is reliable in extracting (semi-) quantitative charge density distributions from powder x-ray diffraction data in dense solids at high pressure (9,10). Moreover, although the x-ray form factor of Cs is much larger than that of Li and, consequently, the intensities of the Bragg reflections are dominated by the scattering arising from Cs atoms, MEM is able to reveal the electron density even from weak x-ray scatterers, such as hydrogen (11). We applied this method to the powder x-ray diffraction patterns obtained from the Li-Cs alloy; the resulting 3D charge density probability distribution determined at 8 GPa is depicted in Fig. 4. It is obvious that the charge densities obtained at the corners of the cubic lattice are more pronounced (larger) than what is located around the unit cell center. With this result, it is thus reasonable to assign equivalent atoms located at the corners of the cubic lattice, that is, one atom at (0,0,0), to Cs, leaving a Li situated at the (1/2, 1/2, 1/2) center position (structural model B). The atomic site assignment was confirmed by a full pattern refinement (Rietveld), for which a substantially better fit was achieved (that is, for structural model B as opposed to model A, as illustrated in Fig. 5). From this analysis, we concluded that the alloy is an ordered solid solution of Li and Cs. Incidentally, the observed structure is similar to the stoichiometric Li-Cs B2 structure that was predicted to be stable above 100 GPa (3). This, of course, was not found to be the case. The spread of the electron density around the Li site, which correlates to the density probability distribution derived from the MEM analysis, is somewhat delocalized as compared to what is obtained around the Cs atomic sites. Although the charge density distribution derived by MEM (Fig. 4) may not be sufficiently quantitative, the Li occupancy estimated from the integrated charge density is about 0.7; thus, the Li-Cs solid may not be perfectly stoichiometric. The structural assignment and atomic occupancies are consistent with the experimental volume of about one Cs atom per unit cell and the Cs-Cs distance of only one cell length, or 3.585(9) Å at 3 GPa.
Fig. 1. Synchrotron x-ray diffraction patterns for a starting mixture of Cs-rich Li-Cs, synthesized at low pressure and at 298 K. Obvious changes in x-ray diffraction patterns were observed as a function of pressure. Radiation wavelength is 0.41373 Å.
DISCUSSION
Under ambient conditions, elemental Cs has a body-centered cubic structure (Cs-I) (12)(13)(14). At a pressure of 2.7 GPa, it transforms to a facecentered cubic structure (Cs-II). A volume collapse was observed at a pressure of 4.5 GPa (15) because of the appearance of the tetragonal Cs-IV phase (16). Between 4.2 and 4.3 GPa, we found a complex modulated structure of phase Cs-III (17). The large volume change at 4.5 GPa is associated with the Cs 6s→5d hybridization (18). Therefore, below 4.5 GPa, Cs is still a main group sp metal. As discussed above, the charge density topology around the Cs that results in the formation of a crystalline Li-Cs solid probably does not arise from the s→d transition. The Cs-Cs nearest-neighbor distance in cubic Cs-II is 4.23 Å (15) and ranges from 3.7 to 4.2 Å in Cs-III (16). These Cs-Cs distances are longer than that observed in the Li-Cs alloy. Hence, the estimated average volume occupied by a Cs atom in the Li-Cs is significantly smaller than what is found in dense phases of pure Cs (Fig. 3, bottom). An explanation for the shorter nearest-neighbor contact distance found in the alloy is that the spatial extent of electron distribution around the Cs atoms is reduced as a result of a charge transfer from the 6s to the Li L-shell (2s and 2p). This situation is possible because Cs is more electropositive than Li and, according to the Pauling scale, the difference in electronegativity [Dc = c(Li) − c(Cs)] is almost 0.2. The consequences of the electron transfer are that (i) Cs d+ is now smaller and Li d− becomes larger and (ii) Cs positive ions can be brought closer together as a result of compression. This situation resembles the close contacts between K + ions found in K 3 Ag (5, 6). Thus, a shorter Cs contact distance (3.586 Å) in the alloy than in the elemental solid is indicative of the presence of Cs positive ions. This suggestion is supported by an inspection of the electron density topology (Fig. 4). The nearly spherical electron density around the Cs atoms indicates no noticeable electron transfer from the 6s to the directional 5d orbitals. In contrast, the electron distribution at the Li shows directionality and is spatially more extended with a mean l width of~1.7 Å. This observation suggests that electrons are being transferred from Cs to the Li 2s-2p hybrid orbitals (vide supra). It is informative to compare the Li-Cs alloy found in this study with the high-pressure phase of CsF, which also bears a simple cubic structure (19) but with a smaller lattice parameter of 3.38 Å at 4.8 GPa. The lattice parameter of Li-Cs at the corresponding pressure is 3.53 Å (Fig. 3). The larger radius of a neutral Li atom (1.37 Å) compared with that of an F anion (1.19 Å) (20) may account for the expansion of the lattice parameter. Moreover, the spatial extent of the Li anion is expected to be even larger.
Here, it is shown that, although group I Li and Cs do not combine under normal conditions, it is possible to force the two unmixable elements to readily form binary alloys at very low pressure. The Li-Cs solids unveiled represent the very first observations of binary alloys obtained from isovalent but very dissimilar alkali metals. In a way, the experimental results apparently contradict the theoretical structural prediction that Li-Cs alloys can only be obtained at significantly higher pressure, namely, above 50 GPa (3,4). Evidently, there is no strong repulsive force between Li and Cs as they become miscible, and they easily establish a long-range order at very low compression. It should be noted that there are important differences between the present experimental conditions and previous electronic structure calculations. First, theoretical calculations were carried for the 1:1 stoichiometric Li-Cs alloy. Second, the effect of temperature was not considered. As demonstrated here, a thermal activation is required to obtain the cubic crystalline alloy. Again, these two factors were not considered in the calculations. Perhaps the most significant observation arising from the present work is that the experimental charge density unambiguously shows that charge donation from Cs to Li is feasible at low pressure in which Li uses the diffuse 2p orbitals to accommodate the electron and adopt a formal oxidation state of −1. Although first-principles structural prediction techniques (21-23) are known as a powerful complement to experiments, our study indicates that the prediction of structures for nonstoichiometric alloys still represents a significant challenge. Together with recent theoretical studies (3,4), the present experimental findings illustrate the diversity of chemical bonding manifested by the apparently simple group I elements.
MATERIALS AND METHODS
High-purity Li and Cs metals (Alfa Aesar, 99.999%) were loaded and constrained, under an inert argon atmosphere, by a rhenium gasket placed between the anvils, with 350-µm-diameter culets, of resistively heated membrane DACs. Because Li and Cs are very reactive under normal laboratory conditions, the sample purity was confirmed by x-ray diffraction after the DAC loadings; no trace of the metal oxides or hydroxides was found. Powder x-ray diffraction images were recorded on a Rigaku R-axis IV++ area detector at beamline BL10XU of SPring-8 (Japan) using x-rays with a wavelength of 0.41373 Å from samples at different pressures (P < 10 GPa) and temperatures (T < 500 K). The x-ray beam was focused at the sample using a compound refractive lens (24). Temperature was measured using thermocouples placed inside the DAC, on the gasket, and the quasi-hydrostatic pressures were obtained from the calibrated spectral shifts of ruby luminescence corrected at each temperature. X-ray diffraction images were integrated using FIT2D (25), and the resulting x-ray diffraction patterns were analyzed using the XRDA software (26); full pattern refinements were carried out by the Rietveld method using the FULLPROF software (27). To perform the MEM analysis, the intensities of the Bragg peaks were extracted using the Le Bail method (28). The charge density was then derived using the MEM program PRIMA (29). The MEM calculations were initiated from a uniform prior density distribution, resulting from dividing the total number of electrons evenly over the unit cell. All MEM calculations were performed over a 64 × 64 × 64 pixel mesh representing the unit cell.
Results of a Rietveld analysis of the high-temperature/high-pressure (498 K and 5 GPa) x-ray data of the Li-Cs alloy using two different cubic structural models (symbols, observed x-ray diffraction pattern; solid black line, calculated x-ray diffraction pattern; solid blue line, intensity difference). (Top) Structural model A. Space group Pm-3m; atomic positions: Cs (0,0,0) and Li (0, 1/2, 1/2), Li (1/2, 0, 1/2), Li (1/2, 1/2, 0); lattice parameter = 3.572(4) Å; R p = 20.1% and R wp = 17.1%. (Bottom) Structural model B. Space group Pm-3m; atomic positions Cs (0,0,0) and Li (1/2, 1/2, 1/2); lattice parameter = 3.572(3) Å; R p = 16.9% and R wp = 9.18%. | 2018-04-03T00:05:53.431Z | 2015-10-01T00:00:00.000 | {
"year": 2015,
"sha1": "7836e0062deb6c24ebdae6dcad7c9746c05cf1c2",
"oa_license": "CCBYNC",
"oa_url": "http://advances.sciencemag.org/content/advances/1/9/e1500669.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7836e0062deb6c24ebdae6dcad7c9746c05cf1c2",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
225780071 | pes2o/s2orc | v3-fos-license | Quality Monitoring for Micro Resistance Spot Welding with Class-Imbalanced Data Based on Anomaly Detection
Micro resistance spot welding (MRSW) is an important technology widely used in electronics manufacturing for micro component joining. For the joining of micro enameled wire, quality control has until now been heavily dependent on manual inspection. In this paper, a quality monitoring approach based on isolation forest (iForest) is proposed to distinguish abnormal welds from normal welds. Electrode voltage and welding current of over 110,000 spot welds were collected from a production line. The dynamic resistance and heat input were calculated for all welds and used for feature extraction. A class imbalance problem existed in the collected dataset because abnormal welds were far fewer than normal welds. The anomaly detection model based on iForest was established for the imbalanced data classification after comparison with other methods such as one-class support vector machine (SVM) and local outlier factor. Test results show that the similarity of the dynamic resistance profile and of the heat input compared with the previous ten welds are valid features for detecting a part of the abnormal welds. The iForest model is effective for distinguishing incomplete fusion welds from normal welds with high efficiency. It can assist in the on-line quality monitoring of the enameled wire welding process in production.
Introduction
With the booming development of miniaturization and integration of electronic devices, micro enameled wires have been increasingly used in the manufacture of various electronic products, such as electroacoustic devices, micro coils, micro delays and chip inductors [1,2]. The micro resistance spot welding (MRSW) of enameled wire to pad is a common production process, since the removal of the insulation coating and the joining of wire to pad can be accomplished efficiently in a single welding process. Because of advantages such as low cost, high efficiency and ease of automation, MRSW, which is also termed small-scale resistance spot welding, is an important micro joining technology for the manufacture of micro devices [3,4].
In view of the demand of reliable signal transmission in electronic components, especially the high fidelity requirement of electroacoustic devices in smartphones and earphones, it is important to ensure the quality of each MRSW weld. However, quality control still relies heavily on manual inspection, which requires experienced people to do it with great attention and still overlooks some defects. It is essential to realize on-line quality monitoring.
Several studies have recently focused on the quality control of MRSW [4][5][6][7][8]. Wan et al. [4] developed a weld quality evaluation model based on electrode voltage recognition for MRSW; a comparison between back propagation neural network (BPNN) and probabilistic neural network (PNN) in MRSW concluded that BPNN was more appropriate for failure load estimation and PNN was more effective at quality classification. Yue et al. [6] established regression equations between welding parameters and weld quality with response surface methodology for foil metal joining. Chen et al. [7] designed multi-performance optimization for MRSW with a hybrid approach. Rikka et al. [8] optimized the welding parameters by Taguchi design to obtain low electrical contact resistance for joining a nickel tab to an inner aluminum casing in a cylindrical cell. However, despite the great demand in industry, there is a lack of investigation into quality monitoring in the MRSW for micro enameled wire joining.
Quality assessment for resistance spot welding (RSW) has been studied extensively [9,10]. Different process signals are analyzed for feature extraction, such as dynamic resistance [11], welding power [12], electrode force [13], electrode displacement [14] and multiple signals [15]. Dynamic resistance has wide application because of the low cost and convenient installation of sensors. In recent years, machine learning models have been found to be effective for predicting weld quality, including artificial network [16,17], decision tree [18], random forest [19] and SVM [20]. The rapid development of machine learning technology provides effective decision support tools for weld quality evaluation.
Despite comprehensive studies on the weld quality of MRSW and RSW, specific applications require further investigation. First, compared with RSW or MRSW of metal sheets, MRSW of enameled wire to pad is quite a different physical process and has rarely been explored before. Additionally, a class imbalance problem exists in the quality classification of MRSW in the production. This refers to data within which one or more of the classes (majority class) have a much greater number of samples than the others (minority class) [21]. As a result of quality control, abnormal welds are generally far fewer than normal welds in the manufacture of micro components by MRSW. Traditional classification models cannot provide good performance for class-imbalanced data, which requires imbalanced data processing methods like anomaly detection algorithms [21].
Based on an imbalanced dataset from industry, this work focuses on the quality monitoring for MRSW of micro enameled wire to pad. The welding process was analyzed in terms of dynamic resistance and heat input, and they were adopted for feature extraction. A classification approach based on isolation forest (iForest) is proposed to assist in the quality inspection.
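As a minimal illustration of the proposed approach, the scikit-learn implementation of isolation forest could be applied to per-weld feature vectors (for example, the similarity of the dynamic resistance profile and of the heat input with respect to recent welds); the feature construction, synthetic data and parameter values below are assumptions for illustration, not the exact settings used in this work.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# X: one row per weld, e.g. [similarity of R(t) profile, similarity of heat input]
rng = np.random.default_rng(0)
X_train = rng.normal(loc=[0.95, 0.95], scale=0.02, size=(1000, 2))  # mostly normal welds
X_test = np.array([[0.96, 0.94],    # normal-looking weld
                   [0.60, 0.55]])   # weld deviating strongly from recent history

model = IsolationForest(n_estimators=100, contamination=0.005, random_state=0)
model.fit(X_train)

labels = model.predict(X_test)            # +1 = normal, -1 = anomaly
scores = model.decision_function(X_test)  # lower score = more anomalous
print(labels, scores)
```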
Materials and Equipment
Enameled copper wire with a diameter of 0.08 mm was joined to the 1.5 × 1.5 mm pad coated with 0.01 mm thick tin solder in an electronics factory. A direct current (DC) welder was used to heat the tungsten electrode, as shown schematically in Figure 1. The joule heat was generated intensively at the electrode tip and transferred to workpieces. The constant voltage control mode was selected for micro enameled wire joining, since it had better adaptability than constant current and constant power, which made the welding current vary in the welding process. The welding parameters are listed in Table 1. All welds were produced by applying double heating pulses. The 1st pulse was used to remove the insulation coating from the wire, and the 2nd pulse was set up to provide enough heat. The voltage setting was varied in the range of 0.68-0.90 V, because it matched with each electrode and was adjusted manually during the electrode life. The welding time was varied from 55 to 60 ms due to its fine-tuning for each electrode.
Process Data Acquisition
The instantaneous electrode voltage (ut) and welding current (it) of 111,958 spot welds were collected by the data acquisition system shown schematically in Figure 1. The curves of ut, it and dynamic resistance (Rt) were saved for each weld. A weld performed in the early and middle stage of electrode life is taken as an example in Figure 2. Part of the resistance curve is ignored because ut and it change rapidly between the two pulses, making the curve change abruptly.
Weld Quality Determination
The quality of each weld was estimated by manual visual testing. The quality level was divided into two classes: 'normal' and 'abnormal'. As listed in Table 2, 53 abnormal welds were recorded in detail in the MRSW dataset, including four defect types: incomplete fusion, abnormal wire, overheating, and other failures.
It is a class-imbalanced dataset, since abnormal welds (the minority class) are far fewer than normal welds (the majority class). Classifying imbalanced data significantly challenges traditional classification models: (1) their bias towards the majority class causes poor identification of the minority class, (2) minority samples usually overlap with majority samples in part of the feature space, which is termed class overlap, and (3) there are very few minority samples, or even only normal samples, in specific applications [21,22]. It is more appropriate to select anomaly detection algorithms rather than traditional classification methods for the quality classification of imbalanced data.
Despite the low defect rate (<0.25%), it is crucial to detect abnormal welds because it is necessary to ensure the quality of each product. They cannot be avoided completely due to various disturbances at production sites. Moreover, it is virtually impossible to make a redundant welding spot on the pad to reduce the failure risk, which makes quality monitoring critical.
Welding Process Analysis
For joining micro enameled wire to tinned pad by MRSW, the basic principle is that the joule heat is generated at the electrode tip by passing an electrical current to remove the insulation coating from the wire and re-flow the solder on the pad. The wire is joined to the pad under a controlled pressure. The weld energy required is obtained from the heated electrode by heat transfer. It is different from the sheet metal joining by RSW or MRSW, where the heat is generated at the interface of the sheets. Figure 3 shows four signal curves of MRSW from the perspective of electrode life. One of the electrodes is shown as an example: voltage control setting (V CON ), average of I t in a welding process (I AVE ), initial resistance (R 0 , the first resistance value collected in a welding process, shown in Figure 2) and heat input of electrode and workpieces (Q E ). For I AVE , R 0 and Q E , the moving averages of every 200 welds are calculated to make the trends clear. As welding number grows, Figure 3 shows that V CON is reduced manually (from 0.86 to 0.76 V), I AVE and Q E decline in general (from 177 to 88 A, from 8.2 to 3.6 J), while the R 0 curve shows an upward trend (from 2.5 to 8.3 mΩ).
Dynamic Resistance
For the sheet metal joining by RSW in industry, the measured voltage is composed of a resistive and an inductive component: u RSW = iR + Ldi/dt [23,24]. The induced voltage is picked up by the two electrical wires which span the upper and lower electrodes and form an inductance loop [24]. However, for the micro enameled wire joining by MRSW with single electrode, there is not an obvious inductance loop for the u t measurement (Figure 1c). The R t is obtained through dividing u t by i t in this study.
As shown in Figure 4, the equivalent circuit for the electrode and workpieces can be regarded as resistance in series-parallel connection. R E represents the resistance from the voltage measuring position to the electrode tip, R Tip represents the electrode tip resistance, and R W is used to analyze the workpiece resistance and the contact resistance. The following factors influence R t and its profile.
• Factor 1: the electrode wear. This is caused by several factors, such as frequent tip cleaning and ignition loss, which causes the mass and the size of the tip to decrease (Figure 4b) and R Tip to increase as the welding number grows [2].
• Factor 2: the temperature increase. The electrode temperature increases because of the net energy input of electrode, and the heat transferred to the workpieces also causes their temperature to increase. Therefore, the resistivity of the electrode and workpieces increases in the welding process.
• Factor 3: the shunting effect. Because of the low resistivity of copper, more or less of the current is diverted to the joint of copper wire to the pad. The R W starts to influence R t after the insulation coating is removed. When the R Tip becomes higher in the middle and late stage of the electrode life, the resistance curve may drop obviously due to the shunting effect.
According to the change of the R t profile during the electrode life, it is divided into several types based on its main trend, as described in Table 3. One of the electrodes is taken as an example in Figure 5 to show the change of R t . Figure 6 shows various profiles with arrows used to point out the main trends. Its change can be explained by the above factors.
1. Factor 1: its effect can be observed from the R 0 , due to the slight effect of Factor 2 and Factor 3 when the welding just starts. After the electrode has been used for a long time, it is prone to wear out, which is reflected in the steep rise of R 0 . It can be observed in Figure 3 also.
2. Factor 2: it generates the 'Up' profile, which is the majority of the resistance curves shown in Figure 5.
3. Factor 3: it can be seen obviously in the middle and late stage of electrode life. After the removal of the insulation coating, part of current is shunted to the wire and the pad, causing R t to drop to some extent. Figure 5 shows the 'Up&Down' profile, but the 'Down' profile may occur because of the greater effect of Factor 3 than that of Factor 2.
4. The balance between Factor 2 and Factor 3 may result in the 'Flat' profile.
The profile code listed in Table 3 was attributed to each resistance curve by trend determination rules written in Python. As listed in Table 4, 'Up-Up' and 'Up&Down-Up' constitute the majority of R t profiles, accounting for 86.54% of the total weld count.
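The trend determination rules themselves are not reproduced here, so the sketch below is only a hypothetical illustration of how a resistance curve could be mapped to one of the profile codes of Table 3; the thresholds and decision logic are assumptions, not the authors' implementation.

```python
# Hypothetical trend-determination sketch; thresholds are illustrative assumptions.
import numpy as np

def profile_code(r_t, flat_tol: float = 0.02) -> str:
    """Classify one dynamic-resistance curve as 'Up', 'Down', 'Up&Down' or 'Flat'."""
    r = np.asarray(r_t, dtype=float)
    peak = int(np.argmax(r))
    rel_rise = (r.max() - r[0]) / r[0]     # rise from start to maximum
    rel_fall = (r.max() - r[-1]) / r[0]    # drop from maximum to end
    if rel_rise < flat_tol and rel_fall < flat_tol:
        return "Flat"
    if peak <= 1 or rel_rise < flat_tol:   # the curve falls from the start
        return "Down"
    if rel_fall < flat_tol:                # the curve keeps rising to the end
        return "Up"
    return "Up&Down"                       # rises first, then drops noticeably
```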
Heat Input
The Q E is obtained based on the electric work formula Q E = ∫ u t i t dt. As shown in Figure 3, the characteristics of Q E during the electrode life include three aspects. (1) When the V CON remains unchanged, Q E decreases gradually as the welding number grows due to electrode wear. (2) The electrode is finally scrapped because of insufficient Q E . (3) The V CON is turned down manually during the electrode life. This is related to the following factors: the mass loss of the electrode tip, the heat generated at the tip and the heat transferred to the workpieces. The formula for specific heat capacity (Q = mc∆T) can be used for explanation. As welding number grows, the net energy input of the electrode decreases from Q 1 to Q 2 and its mass decreases from m 1 to m 2 . However, if Q 2 /Q 1 is greater than m 2 /m 1 , ∆T 2 will be greater than ∆T 1 , which causes excessive heat transferred to workpieces. Actually, overheating will occur if the V CON is not duly adjusted in this application.
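A minimal sketch of this heat-input calculation is given below, assuming the voltage and current are available as uniformly sampled arrays; the sampling interval and variable names are assumptions.

```python
# Sketch of Q_E = integral of u_t * i_t over one welding process (trapezoidal rule).
import numpy as np

def heat_input(u_t, i_t, dt: float) -> float:
    """u_t in V, i_t in A, dt in s; returns the heat input in J."""
    power = np.asarray(u_t, dtype=float) * np.asarray(i_t, dtype=float)  # instantaneous power
    return float(np.trapz(power, dx=dt))
```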
Anomaly Detection Algorithms
For imbalanced data classification, the pattern of normal samples can be recognized by anomaly detection algorithms, whether the training set contains only normal samples or also includes abnormal samples. The samples that do not fit the pattern are classified as anomalies. This is the basic principle of anomaly detection, which belongs to unsupervised learning. Three well-known anomaly detection algorithms are discussed for quality monitoring: Isolation Forest (iForest), One-Class SVM (OCSVM) and Local Outlier Factor (LOF).
The iForest 'isolates' a sample from the rest of samples by adopting an ensemble of random trees as base estimators for the decision. The number of separations required to isolate a sample is the normality measure of iForest, as it produces noticeably smaller separation numbers for anomalies [25]. OCSVM has good performance for capturing the shape of the training set which contains normal samples only [26]. Samples outside the shape frontier are regarded as anomalies. LOF is a well-known density-based method which identifies the samples with much smaller local density as anomalies [27].
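All three detectors are available in scikit-learn; the sketch below only shows how they could be instantiated for this kind of task. The parameter values are placeholders and do not reproduce Table 5.

```python
# Illustrative instantiation of the three anomaly detectors (parameters are placeholders).
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

detectors = {
    "iForest": IsolationForest(n_estimators=100, contamination=0.01, random_state=0),
    "OCSVM": OneClassSVM(kernel="rbf", nu=0.01, gamma="scale"),
    # novelty=True lets LOF be fitted on normal welds and applied to new welds
    "LOF": LocalOutlierFactor(n_neighbors=20, contamination=0.01, novelty=True),
}

# X_train: features of normal welds only; X_test: welds to be classified.
# predict() returns +1 for inliers (normal) and -1 for outliers (abnormal).
# for name, model in detectors.items():
#     model.fit(X_train)
#     y_pred = model.predict(X_test)
```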
Feature Extraction
Similarity is a common measure for distinguishing abnormal samples from normal samples. In terms of R t profile and Q E , most welds are similar to their 'near neighbors' (e.g., the previous 10 welds), because the R t and Q E in general change gradually during the electrode life. Compared with the 'far neighbors' (e.g., the previous 1000th-1010th welds), a large disparity may be observed due to electrode wear, especially for welds performed in the middle and late stage of the electrode life.
To quantify the profile similarity between the resistance curves of two welds (X and Y), a curve similarity measure S p (X, Y) is constructed using Equations (1)-(3) to obtain the dimensionless feature in the range of [0.0, 1.0] [28]. This is based on Euclidean distance, which is widely used to calculate curve similarity [29]. For different cases (profile codes are equal or not), it has different parameter values to calculate the similarity score [30]. When the spot welds have the same profile code in the 1st or 2nd pulse (p = 1 or 2), S p (X, Y) is in the range of [0.0, 0.5]. If not, S p (X, Y) is in the range of (0.5, 1.0]. S p (X, Y) is smaller when X and Y have more similar profiles.
where p represents the heating pulse (p = 1, 2), m is the number of R t collected during the pulse, E p (X, Y) is the Euclidean distance of X and Y in the 1st or 2nd pulse, D p (X, Y) is the maximum of E p (X, Y) and the averages of X and Y, C p represents the profile code, and S p (X, Y) is used to evaluate the profile similarity. Two features calculated by Equations (4)-(7) are extracted from Q E and R t for anomaly detection: the similarity of Q E (∆Q 10 ) and the similarity of R t profile (∆S 10 ). Both of them are compared with previous 10 welds. There is no evident linear correlation between them because the absolute value of Pearson correlation coefficient is less than 0.12.
where symbol anlz represents the weld to be analyzed, i is the index of its previous welds, ∆Q 10 is the relative change of Q E , and ∆S 10 is used to analyze the similarity of R t profile in the double-pulse welding process. For the first two welds of each electrode, the ∆Q 10 and ∆S 10 are regarded as 0.0; for the 3rd to 10th welds, the nominal ∆Q 10 and ∆S 10 are compared with the previous two welds.
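Since Equations (1)-(7) are not reproduced above, the following sketch gives only one plausible reading of the feature construction: the split of S p into [0.0, 0.5] for matching profile codes and (0.5, 1.0] otherwise, and the comparison with the previous welds, follow the text, while the exact formulas, function names and inputs are assumptions.

```python
# Assumed reconstruction of the similarity features -- not the paper's exact equations.
import numpy as np

def pulse_similarity(x, y, code_x: str, code_y: str) -> float:
    """S_p(X, Y): smaller values mean more similar resistance profiles in one pulse."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    e_p = float(np.linalg.norm(x - y))                     # Euclidean distance of the curves
    d_p = max(e_p, float(np.mean(x)), float(np.mean(y)))   # normalising term (assumed)
    ratio = e_p / d_p if d_p > 0 else 0.0
    if code_x == code_y:
        return 0.5 * ratio          # same profile code  -> [0.0, 0.5]
    return 0.5 + 0.5 * ratio        # different codes    -> (0.5, 1.0]

def weld_features(q_e: float, prev_q_e, s_scores):
    """dQ10: relative change of Q_E vs. the previous welds; dS10: mean S_p similarity."""
    d_q10 = (q_e - float(np.mean(prev_q_e))) / float(np.mean(prev_q_e))
    d_s10 = float(np.mean(s_scores))   # S_p scores against the previous 10 welds
    return d_q10, d_s10
```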
Model Construction
To conduct the anomaly detection experiments, the models are constructed with the training set including normal welds only. All the abnormal welds and the remaining normal welds are used for model verification, as shown in Figure 7. Each weld in the test set is classified as 'normal' or 'abnormal'. The Python packages from scikit-learn [31] are used, and the major parameters are listed in Table 5. Four performance measures are applied in this study: recall (also termed sensitivity), specificity, processing time, and the area under the receiver operating characteristic curve (AUC) [20]. Recall and specificity are calculated by Equations (8)- (9). True positive (TP), false positive (FP), true negative (TN) and false negative (FN) are four types of predicted results in binary classification. In general, minority class (abnormal weld) is regarded as positive case and majority class (normal weld) is termed as negative case. AUC is widely used to analyze the performance of a binary classifier. It is in the range [0, 1], and a good classifier has an AUC score near 1.
Recall = TP / (TP + FN)    (8)
Specificity = TN / (TN + FP)    (9)
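In practice these measures can be computed directly from the predictions; a minimal sketch using scikit-learn is given below, with illustrative variable names.

```python
# Sketch of the evaluation measures; abnormal welds are the positive class (label 1).
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_pred, scores):
    """y_true/y_pred: 1 = abnormal weld, 0 = normal weld; scores: anomaly scores."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    recall = tp / (tp + fn)            # Equation (8)
    specificity = tn / (tn + fp)       # Equation (9)
    auc = roc_auc_score(y_true, scores)
    return recall, specificity, auc
```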
As is common for preprocessing in machine learning, the features ∆Q 10 and ∆S 10 are scaled into the interval [0, 1] by normalization. The normalized value ∆Q 10N is calculated by Equation (10), and ∆S 10N is equal to ∆S 10 , where Lim L is the lower limit and Lim U is the upper limit used for ∆Q 10N . Compared with the previous 10 welds, welds with small changes in Q E have ∆Q 10N near 0.5, and ∆S 10N near 0 reflects a small change in the R t profile.
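Equation (10) is not reproduced above; the sketch below assumes a simple min-max scaling with clipping, which matches the stated behaviour (a weld with no change in Q E maps to 0.5, and values outside the limits are clipped to [0, 1]).

```python
# Assumed form of the normalization; the exact Equation (10) is not given in the text.
import numpy as np

def normalise_dq10(dq10, lim_l: float = -0.1, lim_u: float = 0.1):
    dq10n = (np.asarray(dq10, dtype=float) - lim_l) / (lim_u - lim_l)
    return np.clip(dq10n, 0.0, 1.0)
```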
Results and Discussion
The particularity of the quality classification in this study can be illustrated by the feature distribution of normal welds and abnormal welds shown in Figure 8. The Lim L and Lim U of ∆Q 10 are set to -0.1 and 0.1, respectively, because 97.5% of the welds fall within this interval. According to the statistical analysis, 52,291 welds meet the conditions ∆Q 10N ∈ [0.4, 0.6) and ∆S 10N ∈ [0.0, 0.2), and 79.6% (89,094) of the welds meet the conditions ∆Q 10N ∈ [0.3, 0.7) and ∆S 10N ∈ [0.0, 0.5). This reflects the fact that most welds are similar to their 'near neighbors' in terms of Q E and R t profile.
It can be observed that in this imbalanced dataset, abnormal welds overlap with normal welds in part of the two-dimensional characteristic space. The combination of class imbalance and class overlap makes it difficult to construct the frontier of each class, which adds complexity for quality classification. The dataset of normal welds may contain certain abnormal welds because of misjudgment, which may make abnormal welds overlap 'normal' (misjudged) welds. However, the amount of such polluted data was limited, since the weld quality was inspected by two inspectors in this experiment.
The weld failure detection methods based on traditional classification models appear to be unsuitable to detect abnormal welds from the imbalanced data. First, such models generally adopt global performance measures such as prediction accuracy in the learning process. If all welds are predicted as normal welds, a high accuracy score (>99.75%) can be obtained because of the low defect rate (<0.25%), leading to poor detection of weld failure in this application. Second, traditional classification models are based on supervised learning for the class-balanced data in general, while there is not enough real data of weld failure available for model training and testing in this study, since the abnormal welds do not often occur in reality. Third, the problem of class overlap also makes it difficult to distinguish between normal welds and abnormal welds by traditional classification models. It is an essential task to find out abnormal welds in the quality monitoring of MRSW. Figure 9 shows the performance difference of anomaly detection for three models and four defect types. The parameter c or nu provided in the packages is used to adjust the model decision function.
For the detection of incomplete fusion welds, both iForest and OCSVM can distinguish all the incomplete fusion welds from normal welds, at the cost of a decrease in specificity. The LOF model appears to be unsuitable for this application because its recall of incomplete fusion welds cannot reach 100% even when c is near its upper limit of 0.5. For the detection of other defect types, none of these models can identify the abnormal welds effectively, since no valid features have been extracted from u t , i t and variables calculated from them so far.
To analyze the classification of incomplete fusion welds and normal welds further, AUC, specificity and processing time are adopted, as listed in Table 6. The average training time and average test time over the range of c or nu are used to compare the model efficiency. The LOF model is time-saving for model construction in this application, but it cannot identify all the incomplete fusion welds. Hence, its AUC is regarded as NA. The performance of OCSVM for anomaly detection is not poor; however, it is relatively time-consuming to train and test the model. The identification of incomplete fusion welds and normal welds can be performed effectively and efficiently by the iForest model. It has a high AUC score of 0.9525, and it takes 1.79 s to train the model with over 60,000 welds in the training set, and 1.11 s to test over 47,000 welds. In addition to identifying abnormal welds, the interpretation of anomalies is also important for quality monitoring. For incomplete fusion welds, one of them and its previous 10 welds are shown as an example in Figure 10a,c,d. It can be observed that this abnormal weld has a different R t profile and smaller Q E . The previous welds have an 'Up&Down' profile in the 1st pulse, while the abnormal weld has an 'Up-Down' profile in the double pulses. This implies that the insulation coating was not properly removed as designed in the 1st pulse for this abnormal weld. Since the welding time was fixed, this weld lacked enough energy to join the wire to the pad after the insulation coating was removed. Based on the assumption that normal welds are 'many and similar' and anomalies are 'few and different', the iForest model can detect incomplete fusion welds with good performance in this application.
For other defect types, one from abnormal wire welds is analyzed as an example (Figure 10b). It was caused by the deviation of wire position. As shown in Figure 10e,f, it overlaps with its previous 10 welds in terms of R t profile. This is related to the characteristics of the MRSW process for micro enameled wire joining, implying that the case of abnormal wire which occurs in the early and middle stage of electrode life does not obviously influence the temperature rise of electrode tip and tinned pad. Therefore, there is no remarkable difference in either ∆Q 10N or ∆S 10N between normal welds and these abnormal welds, meaning that the models cannot detect them effectively.
Although only incomplete fusion can be detected with good performance by the present iForest model, this is nevertheless valuable for quality monitoring research on MRSW. Detecting incomplete fusion requires inspectors to be experienced and focused, which makes it relatively difficult to perform visual testing in the production line, whereas the deviation of wire position can be identified as abnormal wire with relative ease by a machine vision system. To further separate various abnormal welds from normal welds, more features from other process signals are required to fully represent the welding process. Similar to RSW, quality monitoring based on multiple signals may be better for MRSW than monitoring based on electrical signals only. Electrode displacement, dynamic force and images taken before and after welding are expected to be added in future work to improve quality monitoring.
Conclusions
An anomaly detection model based on iForest is proposed to assist in the quality monitoring of MRSW. The work is based on a dataset with the class imbalance problem. In terms of the dynamic resistance and heat input, the change during the electrode life is analyzed, and features are extracted from them for anomaly detection. The following conclusions can be obtained.
1. Class imbalance and overlap exist in the quality estimation of MRSW production and require proper anomaly detection algorithms for quality monitoring.
2. The similarity of dynamic resistance profile and heat input compared with the previous ten welds are valid features for detecting incomplete fusion welds.
3. For the classification of incomplete fusion welds and normal welds, the iForest model is a good candidate with a high AUC score of 0.9525 and high efficiency.

Funding: This research received no external funding. | 2020-06-25T09:09:04.259Z | 2020-06-19T00:00:00.000 | {
"year": 2020,
"sha1": "3782998fc8515402e6791e8d53db083d424eb0b5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/12/4204/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "42d4cf506c115f40309fd55cfd4c992237bf6440",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119340109 | pes2o/s2orc | v3-fos-license | Sea Quark Effects on the Strong Coupling Constant
We present results showing that the strong coupling constant measured in two-flavor full QCD with dynamical Kogut-Susskind quarks at $\beta=5.7$ exhibits a 15\% increase due to sea quarks over that for quenched QCD at the scale $\mu\approx 7$ GeV. (talk at Lattice 93)
Introduction
Recently a lattice QCD determination of the strong coupling constant α_s has been attempted, employing the spin-averaged charmonium 1S-1P mass splitting to fix the scale [1]. The calculations are so far restricted to quenched QCD, and corrections due to sea quarks have to be estimated to make a physical prediction for α_s. In this article we report on our calculation of the strong coupling constant in two-flavor full QCD in the spirit of Ref. [1] and discuss the implications of the results for the effects of sea quarks.
Simulation
Our study is based on the full QCD configurations on a 20^4 lattice previously generated with two flavors of dynamical Kogut-Susskind quarks at β = 5.7 with m_q a = 0.01 [2]. For the charmonium spectrum measurement we employ a subset of these configurations, periodically doubled in the temporal direction. We use both Wilson (without 'clover' improvement) and Kogut-Susskind actions for valence quarks. The Wilson results are obtained at K = 0.130 and 0.135 with 72 and 75 configurations.
Determination of scale
Wilson valence quarks
In Table 2(a) we list our result for the charmonium 1S-1P mass splitting obtained with Wilson valence quarks and the corresponding scale π/a. We observe unusually large errors of 20-25% in spite of the use of over 70 configurations. In contrast, a test run in quenched QCD on a 16^3 × 32 lattice at β = 6.0 with 20 configurations showed only a 7% error, and the full QCD results with Kogut-Susskind valence quarks described below also exhibit smaller errors with 19 configurations. A possible cause of the large errors for Wilson valence quarks might be the mismatch of quark actions taken for sea and valence quarks, though the actual connection is not clear to us.
Kogut-Susskind valence quarks
It is more natural to employ the Kogut-Susskind quark action for valence quarks on our full QCD configurations generated with the same action for dynamical sea quarks. Our results for the 1S-1P splitting obtained for this case are listed in Table 2(b).
In Fig. 1 we show the charmonium spectrum for the valence quark mass m_q a = 0.3. Conversion to physical units is made with π/a obtained with the η_c (NG) given in Table 2. Except for the degeneracy of the J/ψ and η_c in the non-Nambu-Goldstone channel, the pattern of the spectrum is consistent with experiment. In particular the η_c in the Nambu-Goldstone channel is lighter than the J/ψ (the value of the mass splitting m_J/ψ - m_η_c = 360(40) MeV is three times larger than the experimental value 118 MeV, however). This is quite different from the results with the Wilson action, for which the addition of the clover term is needed to split the two 1S states. We note that the errors in the physical values of masses shown in Fig. 1 do not include that of the scale π/a of 11%, which translates to about 400 MeV for the masses. An increase in statistics is needed to see if the physical charm quark corresponds to a smaller value of m_q a.
Strong coupling constant
Our full QCD results for the 1S-1P mass splitting yield a value π/a ≈ 7 GeV with an error of 10-20% for Wilson and Kogut-Susskind valence quarks (the latter with the Nambu-Goldstone η_c). This value is consistent with π/a = 7.01(28) GeV [2] estimated from the ρ meson mass using Kogut-Susskind valence quarks. The use of the non-Nambu-Goldstone operator for the η_c, however, leads to a value 30% larger for the scale. This discrepancy represents a systematic uncertainty for the Kogut-Susskind case.
In order to estimate the strong coupling constant at the scale estimated above, we employ the relation of Refs. [1,5]; the corresponding quenched result was obtained for π/a = 7.63(48) GeV at β = 6.1 [1]. We thus find that the strong coupling constant extracted for two-flavor full QCD is about 15% larger than in the quenched case at 7 GeV. A larger value is in fact expected from the consideration that fixing the scale by the 1S-1P mass splitting means adjusting the QCD coupling strength at a scale typical of charmonium, and that the coupling for larger momenta decreases more slowly in the presence of sea quarks. This view underlies the heuristic procedure of Ref. [1] in which the value of α_MS^(4) was estimated from the measured value of α_MS^(0) by matching the two couplings evolved down to the scale 0.4-0.75 GeV with the two-loop renormalization group function.
In Fig. 2 we apply this procedure to α_MS^(2) and α_MS^(0), starting with (2) (the Wilson result with the h_c(P2) at K = 0.135 is used for π/a; other cases are similar) and (3). The overlap of the two bands at 0.5 GeV shows that the 15% increase found for α_MS^(2) is quantitatively consistent with the expected magnitude of sea quark effects. This agreement also implies that our two-flavor result would lead to a value of α_MS^(4) similar to the original result α_MS^(4)(5 GeV) = 0.172(12) [1]. In fact we find α_MS^(4)(5 GeV) = 0.172 +0.012/-0.009 using our Wilson result with the h_c(P2) at K = 0.135 in Table 2.
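The two-loop evolution used in this matching procedure can be illustrated with a short numerical integration of the standard two-loop renormalization group equation. The sketch below is a generic illustration, not the authors' code; the starting value and scales are placeholders.

```python
# Generic two-loop running of alpha_s with n_f flavours, integrated in ln(mu^2).
# d(alpha)/d(ln mu^2) = -beta0*alpha^2 - beta1*alpha^3, with the standard
# beta0 = (33 - 2*nf)/(12*pi) and beta1 = (153 - 19*nf)/(24*pi^2).
import numpy as np

def run_alpha_s(alpha0: float, mu0: float, mu1: float, nf: int, steps: int = 10000) -> float:
    b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
    b1 = (153.0 - 19.0 * nf) / (24.0 * np.pi ** 2)
    t0, t1 = np.log(mu0 ** 2), np.log(mu1 ** 2)
    dt = (t1 - t0) / steps
    a = alpha0
    for _ in range(steps):
        a += dt * (-b0 * a ** 2 - b1 * a ** 3)
    return a

# Example (placeholder input): evolve a two-flavour coupling from ~7 GeV down to 0.5 GeV.
# alpha_low = run_alpha_s(0.19, 7.0, 0.5, nf=2)
```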
Summary
Our results and analyses show that sea quark effects are visible in the strong coupling constant measured in current full QCD simulations. This indicates a promising prospect in the near future for a full QCD determination of the strong coupling constant including a more realistic spectrum of sea quarks than was attempted here. | 2019-04-14T02:29:52.772Z | 1993-11-29T00:00:00.000 | {
"year": 1993,
"sha1": "f2b865185bbfa53d03fec5e890187ad950d927cf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/9311050",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f2b865185bbfa53d03fec5e890187ad950d927cf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
165601673 | pes2o/s2orc | v3-fos-license | The Beginning of the Viking Age in the West
During the Viking Age, Arctic Scandinavia was a source of exquisite furs, down, walrus ivory, and other commodities that met with high demand in England and on the Continent. Hitherto, the earliest firm evidence of this trade has been Ohthere's account c. 890, but in light of this paper's findings, its history may be pushed further back in time. Geological analyses of whetstones retrieved in eighth- to early ninth-century Ribe, south-western Jylland, in present-day western Denmark, demonstrate that the majority were quarried near the aristocratic manor Lade ('loading/storing place') in Trøndelag, present-day central Norway, some 1100 km by sea to the north. Because of their high numbers and durability, whetstones retrieved in Ribe and other urban sites may be regarded as a proxy for long-distance seaborne trade from the Arctic. The peak in this trade on the threshold of the Viking Age invites a reconsideration of the coinciding and conflicting interests of Scandinavian long-distance traders, kings, and Vikings. It is argued that coalitions and conflicts that arose from these interests, and new constraints and opportunities that emerged for these three types of agents, provide keys to understanding why and where Vikings raided overseas up to the mid-ninth century.
Introduction
Around AD 800, Scandinavians began setting off on Viking raids across the North Sea, an activity that continued over more than two centuries. Historians and archaeologists have done admirable work in identifying the economic, political, and cultural aspects of Scandinavian societies that were necessary longue-durée conditions for the Viking incursions overseas. 1 However, the immediate causes as to why the Viking raids began there and then remain undiscovered; thus their outbreak and early phase must be considered unexplained (Ashby 2015:100).
Recent scholarship has narrowed the scope of inquiry to cast possible causes in sharper relief against a general backdrop. For instance, several studies have focused on the practice of the bride wealth, whereby a man who wished to marry a woman had to pay a sum to her family. Barrett (2010) and Raffield et al. (2017) have suggested that young men's search for treasure to pay bride wealth led them to pillage overseas. Barrett proposes that an assumed dearth of potential marriage partners in Scandinavia was a result of selective female infanticide, while Raffield and his co-authors propose that the supposed paucity of females was the combined effect of polygyny, concubinage, and social inequality. Sindbaek (2011, 2017) considers the influx of Islamic silver as the main driver behind the Viking incursions, and he regards bride wealth as an example of what he holds to be the more general significance in Scandinavia of silver: it was used to establish and maintain social networks over time. Ashby (2015) favours a more general condition: pillaging was motivated by the social capital acquired through fame and glory. A more purely cultural condition is emphasised by Price (2002): the connection between Norse religion and a fatalistic warrior mentality.
It seems likely that more or less all of these cultural and social factors played some role in motivating the Viking raids. Still, these factors are hardly unique to ninth-century Scandinavia. For instance, young men's urge to violently acquire wealth and glory has modest explanatory value since it may be regarded as a given, a generic feature of Germanic pre-state societies throughout the first millennium AD. 2 The grounds for explaining the timing and location of the early Viking raids must be more precisely historically situated in order to frame the decision of Scandinavian ship commanders to direct young men's violent potential into overseas raiding.
The paucity of evidence regarding the acute constraints and opportunities of Viking-ship commanders of the 780s-850s is probably the main reason why, compared to general conditions, the search for immediate causes, or 'trigger factors', has been less intense and successful- Barrett (2010:297) finds the enterprise 'unrealistic'. However, the current surge in provenancing of archaeological materials opens possibilities to produce new evidence on two issues of great relevance: the eighth-to ninth-century production and long-distance trade of commodities from Scandinavia, and the interaction between Scandinavians and Continental and British traders and consumers in the southern North Sea zone and along the English Channel (Fig. 1). 3 Recently, provenancing of reindeer antler has indicated that trade from the Scandinavian Peninsula, present-day Norway and Sweden, to the southern North Sea zone was already underway in the 780s-90s (Ashby et al. 2015).
In the following, we present results from the provenancing of whetstones, demonstrating that this trade was ongoing since the early eighth century and that traded commodities originated in Arctic Scandinavia. Importantly, the high quantities of whetstones allow us
to assess the shifting volume of this long-distance trade through the eighth to mid-ninth centuries.

This evidence, set in the context of the contemporary surge in production and trade around the southern North Sea and English Channel, the early urbanisation in southern Scandinavia and the Baltic (Sindbaek 2011), and the political integration in southern and western Scandinavia, allows us to suggest immediate reasons for why Viking ship commanders turned their activities overseas in the late 700s. The evidence also sheds light on why, after the initial 'scouting phase', raiding in three decades since c. 806 took place predominantly in Ireland and Scotland, and why Vikings in the mid-830s began overwintering overseas and took up raiding in England and the Frankish Empire.

Fig. 1 Scandinavia, the west, and the Baltic. Sites referred to in the text are indicated. Illustration: Ingvild T. Bøckman
The close connection between raiding and trade that is argued here is based on the fact that the former parasitizes on wealth accumulated through the latter. Trade ventures, particularly over long distances, expose the traders' commodities and proceeds to the danger of piracy. More importantly, the mere threat of piracy in waters, harbours, and production sites tends to subdue production, trade, and consumption, resulting in reduced profits for traders. Thus, conflicting interests between raiders and traders go beyond the direct threat of attacks on traders' ships. In this paper, assessment of the relative strength of the two parties to this conflict, traders and Vikings, and that of their allies, provides a background for discussing changes in volumes and routes of trading and raiding. Such conflicts lend themselves to the agent-oriented approach pursued here. The main characteristic of that approach-to explain decisions through analysis of the agents' constraints and opportunities at the time rather than through their consequences (Gardner 2007:5-8)-assumes particular relevance when analysing situations with uncertain outcomes involving entrepreneurial agency: taking up raiding in new lands, exploring new trade routes, scaling such activities up or down, entering into alliances, or deciding whether to shift from trading to raiding or vice versa.
Scandinavia and the Southern North Sea Zone (Fifth-Ninth Centuries)
Because of its early date (c. 705-850) and the well-preserved deposits that allow precise dating of artefacts and features (Feveile and Jensen 2000), Ribe ( Fig. 1) in the very southwest of Scandinavia is a key site in discussions of the beginning of the Viking Age in the west. Apparently modelled on Frisian sites, it is the north-easternmost of the seventh-to mid-ninth-century seasonal market sites and urban settlements in the southern North Sea/ English Channel zone. While displaying clear Frisian characteristics in the earliest phase, the first few decades show evidence of an increasingly Scandinavian character of metalwork and other artefacts produced at the site. However, most non-regional commodities and raw materials for artisans (glass and copper alloy) appear to have been imported from the west and south (Feveile 2006:30-31). Positioned near the northern end of the area settled by Frisians in the late seventh century (Ellmers 1985:22 and Abb. 2;IJssennagger 2013:88), the site seems intended to facilitate trade between Frisians and people in southern and western Scandinavia.
When the town Kaupang was established c. 800 (Fig. 1), Ribe would have been the most significant harbour among Kaupang's well-testified connections to the southern North Sea zone (Pilø 2011;Skre 2011b;Wamers 2011). Until recently, the earliest commodities produced in the Scandinavian Peninsula found in Ribe were early ninth-century soapstone vessels. 4 However, the beginning of Ribe's northern trade was pushed back to the late eighth century-before Kaupang's founding-when Ashby et al. (2015) identified ten pieces of reindeer antler among the waste in combmakers' workshops in Ribe phases E-F (780-800). The occurrence of reindeer in Scandinavia is restricted to the woods, highlands, and tundra of the peninsula (Ashby et al. 2015, Fig. 3); evidently, in that period, rather large quantities of reindeer antler were traded to artisans in Ribe as raw material. Additionally, Ashby and his coauthors found indications of even earlier contacts of a different type: two finished reindeer-antler combs, of which fragments were found in phase B (705-25), 5 probably arrived in Ribe as personal equipment rather than as commodities; the same probably holds for the ship anchor made from iron most likely produced in present-day southern Norway and retrieved in Ribe deposits of a 750-780 date (Ashby et al. 2015:692;Buchwald 2005:296-297). Hence, inhabitants of the peninsula appear to have visited Ribe since the market site's initial decades.
The early eighth-century contacts between the Scandinavian Peninsula and the southern North Sea zone were not a recent development. During the fifth-century migrations and remodelling of communities across the southern North Sea, vast numbers of Roman imports were brought to the west-Scandinavian coast, copper-alloy kettles in particular (Hauken 2005), indicating direct contacts with the Continent. Based on Continental and Insular evidence, Hines (1984:276-278, map 6.1; 1999) has pointed to the involvement of west-Scandinavians in the fifth-century Germanic migrations to England, and Nicolay (2005:73-79, 2017) has concluded that people from the Scandinavian Peninsula were involved in the contemporary resettlement of Frisia that followed the near-hiatus in the fourth century (Bazelmans 2009; Bos and Brouwer 2005:25-27; Knol and Ijssennagger 2017:11-12; Krol 2006).
In the mid-sixth century, eastern trade routes from Scandinavia along the Danube, Vistula, and Oder to the Black Sea and the Mediterranean were cut off, probably due to the westward movement of the Avars and the pressure on the Byzantine Empire under Justinian (Ellmers 1985:7-8; Ljungkvist 2009:45). For Scandinavians going south, only the western route across the southern North Sea remained; seventh- to eighth-century imports to Scandinavia are predominantly of west-European origin. 6 In a seminal paper, Jellema (1955) identified Frisians as the main agents in the sixth- to seventh-century southern North Sea trade; they extended their activities into the Baltic in the mid-eighth century (IJssennagger 2013; Lebecq 1992, 1999; Näsman 1986:88, 1991; Vierck 1983:37-39). Ellmers (1985) calls the Frisian trade of the seventh to eighth centuries the northern peoples' 'umbilical cord' ('Nabelschnur', Ellmers 1985:8) to the south. In this period, he says, Mediterranean and Frankish imports, fashion, and other cultural impulses reached Scandinavia only through Frisian middlemen.
Ellmers seems to somewhat overstate his point. Outside their homelands, Frisians mainly restricted their trade to urban harbours, and there is no evidence that they extended their trade into Scandinavia before Ribe was established, and into the Baltic only when seasonal markets and market sites such as Åhus, Reric, and Truso were established from the mid-eighth century onwards (Callmer 1998). Before that time, in the seventh century, Scandinavians would have travelled overseas to obtain non-Scandinavian items, predominantly to Dorestad, the town nearest to them. When trading there, and possibly also in urban sites further west such as Quentovic, Lundenwic, Gipeswic, and Hamwic, and in smaller urban sites and beach markets such as Domburg and Medemblik (Fig. 1), Scandinavians would have interacted with Anglo-Saxons, Franks, and others (Wilson 1985:261).

4 In the ASR 9 excavation there are two fragments in phase G, 800-820, and 22 in phases H and I, 820-850 (Feveile and Jensen 2006: Fig. 9.12).
5 Following our inquiry, Ashby has generously supplied additional information on these finds.
6 Recent analyses of seventh- to eighth-century glass beads and raw glass indicate that some trade continued between the Mediterranean and the south Baltic (Delvaux 2017 and pers. comm.). Some exchange moved via the Danube (Callmer 1991; Curta 2007), while an eastern route, possibly via the Russian river system, brought small amounts of glass from Northern India, some two centuries prior to the much more intense Viking Period trade with the east (Sode et al. 2017). Still, this import is close to negligible compared to that which arrived via the North Sea route.
Thus, before the eighth-century establishment of market sites in Jylland and the Baltic, Scandinavians were compelled to undertake long-distance journeys to the Rhine Estuary and beyond to obtain glass vessels, etc. Few apart from aristocrats could provide sufficient manpower and ships for such long-distance travel. This is probably why seventh-century Continental imports to Scandinavia consisted of a relatively narrow scope of commodities of interest to aristocrats. Largely, imports such as glass vessels and garnet jewellery have been retrieved from aristocratic contexts (Arrhenius 1985;Ljungkvist 2009;Näsman 2000). However, more affordable items, in casu glass beads, are quite widespread in modestly equipped graves and in medium-status settlements (Røstad 2016:52-92;Sjøvold 1974;Vinsrygg 1979). Aristocrats probably imported and distributed them within regional networks. Probably, they also imported copper-alloy raw material for their artisans; the regionality of brooch types and their general occurrence in graves (Røstad 2016:307-310;Sjøvold 1974:196-200) suggest that the brooches were produced in Scandinavia and distributed within the same networks as the beads.
In the eighth to early ninth centuries, the non-Scandinavian traders who found their way to urban sites in southern Scandinavia and the Baltic were predominantly Frisians, while Wulfstan's account testifies to the presence of Anglo-Saxons (Jesch 2009). Still, continued trade by Scandinavians in the southern North Sea/English Channel zone is evidenced by a mid-ninth-century seasoned trader in Vita Anskarii (ch. 27) and by a Frankish bishop who during his flight northwards in 834, probably from the Rhine Estuary, 'found help from certain Northmen who knew the route and the harbours of the sea and the rivers who flow into it' (Nelson 1997:20).
Thus, while commodities in the sixth to seventh centuries were brought to and from Scandinavia by Scandinavian traders who obtained them in Dorestad, etc., the eighth to early ninth centuries saw traders of diverse origins bringing goods to towns and markets in southern Scandinavia and the Baltic. In addition to luxuries, they brought cloth and salt of Frisian production, and quernstones, pottery, and other goods from the period's surge in production and trade elsewhere in the Frankish Empire and in the Anglo-Saxon kingdoms. Now, a wide scope of affordable commodities became available to the wider population, because buyers other than aristocrats could make the shorter journey to Scandinavian towns and markets like Birka, Hedeby, and Kaupang. This development is in line with what Wickham (2008) has found to be a general European trend: aristocratic wealth is necessary for regional economic complexity (commodity production and urban sites) to emerge. Only when markets had been established in the region could the wider population engage in market trade.
Interestingly, the western coast of the Scandinavian Peninsula ( Fig. 2) deviates from the south and east in that no market sites or towns appear to have existed in the west before Trondheim was established in the late tenth century. Therefore, unlike in the south and the east, long-distance aristocratic trade there was not supplemented by regional and local trade involving the wider population. In the west, trade retained its character from the seventh century: It was conducted over long distances by aristocrats based along that coast. The results from our analysis of Ribe whetstones, presented in the following, provide for the first time a basis for assessing the chronology and volume of west-Scandinavian long-distance trade in this period. This reassessment has ramifications for the understanding of several issues in the early medieval north; of these, the beginning of Viking raiding overseas is the question explored here.
Whetstones in Ribe (c. 705-850)
Whetstones were among the most important and common tools during the Viking Age; they were a requirement for using knives, axes, needles, arrows, and other tools with a cutting edge or a point. Suitable as raw material for whetstones were rock types containing a small, hard mineral grain, which would serve as an abrasive during whetting (Resi 2011:374). With its favourable stratigraphy from the eighth century onwards combined with a large and varied artefact material, Ribe stands out as an important site in discussions of the early trade with the western Scandinavian Peninsula. Approximately 1800 whetstone fragments dated to the eighth to thirteenth centuries have been found in Ribe. These are largely fragmented due to heavy wear; the smallest fragments are less than 0.5 cm long and approx. 1-2 mm thick. Frequent use led to thinning and then breakage; thus, some of the retrieved fragments may have once been part of the same whetstone. Blanks and unused whetstones have not been unearthed in the contexts studied here; such are rarely found in Viking Age settlements (Resi 2011:379).
A total of 446 stones from five different sites in Ribe are included in the study. Altogether 440 of the stones have been retrieved from ASR7 Sct. Nicolajgade 8 and ASR9 Post Office; both sites date to c. 705-850 and were selected for their long date range, high chronological resolution, and high number of whetstones. The small size of the fragments posed a difficulty: the geological analyses used in the study require a minimum of 10 g. Most of the fragments were smaller and not suitable for analysis. Therefore, three fragments found in reliable contexts dated to 720-800 from the site ASR 4M75 Kunstmuseets Have, one fragment from ASR 5M74 Dommerhaven and two fragments dated to the eleventh to thirteenth centuries from ASR 13 Torvet 13-15 were also included in the study.
Research History
Earlier studies have indicated that material for whetstones in Northern Europe during the Viking Age was frequently sourced from the territory of present-day Norway (e.g. Askvik 1990, 2008; Crosby and Mitchell 1987; Ellis 1969; Hald 1991; Mitchell and Askvik 1984; Myrvoll 1991; Resi 1990, 2008, 2011). These studies, however, have been constrained by the fact that only one type of whetstone could be assigned to a quarry site or region. Whetstones of light-grey, fine-grained muscovite quartz schist are considered to stem from the quarries in Eidsborg in Telemark (Fig. 2). This rock type is here referred to as light-grey schist. The Eidsborg schist belongs lithostratigraphically to the Eidsborg Formation, the uppermost formation in the Bandak Group of the Proterozoic Telemark Supergroup (Oftedahl 1980). The proposed Eidsborg provenance of whetstones of light-grey schist is based on the fact that the area encompasses numerous whetstone quarries with evidence for a long history of production, and on the radiometric dating (K-Ar) of mica, displaying a cooling age between 900 and 950 million years, rather typical of the Precambrian rocks in this region (Mitchell and Askvik 1984). Macroscopic identification and microscopic studies of thin sections on whetstones of light-grey schist from Kaupang, Hedeby, Wolin, Aggersborg, Ribe, and various sites on the British Isles have demonstrated their origin in Eidsborg (Askvik 1990, 2008, 2014; Crosby and Mitchell 1987; Hald 1991; Mitchell and Askvik 1984; Moore 1978; Resi 2011).
A second type of schist considered to have originated in the western Scandinavian Peninsula is a more fine-grained schist that has been referred to under slightly varying terminology, such as muscovite-biotite-quartz-phyllite (Ellis 1969), blue phyllite (Moore 1978), bluish-grey to dark grey phyllite (Kars 1983), dark, blue-purple phyllite (Gaunt 2000; Moore 1983), dark grey, very fine-grained muscovite-quartz schist (Askvik 1990, 2008; Mitchell and Askvik 1984), purple phyllite (Crosby and Mitchell 1987), and metasiltstones (Hald 1991). We find the terminology very fine-grained muscovite-quartz schist used by Mitchell and Askvik (1984) and Askvik (1990, 2008) to be the most precise, and it is here shortened to very fine-grained schist. Such whetstones from Ribe vary from dark grey to purple in colour. Radiometric dating (K-Ar) of whetstones has dated this rock type to between 403 ± 10 and 446 ± 7 million years (cooling age for mica), coinciding with the late phase of the Caledonian Orogeny (Mitchell et al. 1984). The Caledonides in Europe occur in a belt crossing Scandinavia, England, Scotland, and Ireland, as well as in a zone in central Europe. In the latter zone, the Caledonian rocks are covered by post-Caledonian deformation, meaning that a central European origin of the Caledonian whetstones can be excluded (Askvik 2008:8). England is not a likely source area due to the relatively small numbers of whetstones of the Caledonian type found there (Crosby and Mitchell 1987:498). Greenland can be excluded as a possible source because of the early date of the Caledonian whetstones; they predate the eleventh-century colonisation of Greenland. Scotland and Shetland have been suggested as a possible origin; however, the Shetland schist is more coarsely grained than the Caledonian whetstones discussed here (Crosby and Mitchell 1987:501). Therefore, a Scandinavian provenance seems plausible, where the Caledonian belt covers the approx. 1700 km of the western Scandinavian Peninsula from Rogaland to the North Cape (Fig. 2; Askvik 2008:8; Mitchell and Askvik 1984). Yet, the whetstones' provenance within this area has until now remained unconfirmed. This type of whetstone is common at the same Viking Age sites in Northern Europe where Eidsborg whetstones are found.
The first study of whetstones from Ribe, conducted by archaeologist Siri Myrvoll and geologist Niels Hald in 1991, included a total of 117 whetstones from excavations during 1970-6. Based on macroscopic identification and thin sections, Myrvoll and Hald identified several petrographic groups, such as grey schist, siltstones, sandstones, and miscellaneous (Hald 1991; Myrvoll 1991), indicating a variety of origins. They also identified whetstones of both the light-grey schist and the very fine-grained schist. The former were identified as whetstones from Eidsborg; 17 of the pieces examined by Myrvoll and Hald (14.5%) are of this type. Based on datings of the find contexts, Myrvoll concluded that the distribution of whetstones from Eidsborg began in the ninth century (Hald 1991; Myrvoll 1991:117, 121, 129).
The very fine-grained schist, which Myrvoll and Hald termed metasiltstone, was their largest group of whetstones, consisting of 61 pieces (51%). The stones have fairly variable colour nuances: they considered whetstones with a purple colour to share a common provenance, whereas stones with a grey colour were thought to represent a different group (Hald 1991:143; Myrvoll 1991:119). This delineation was not firmly established, and the provenance of the whetstones could not be identified beyond a possible origin in the Scandinavian Caledonian belt (Hald 1991; Myrvoll 1991:129).
Shortly thereafter, the 'Post Office excavation' in Ribe during 1990-1 produced a chronological breakthrough and a solid base for absolute dating from the period 705-850, and thus new possibilities regarding the chronology of whetstones.
Narrowing the Search
Left with a huge area covered by Caledonian rocks, a strategy for narrowing the search for the unknown source of the very fine-grained schist was needed. The cooling ages alone give us a large window of opportunities, and it is not a viable strategy to blindly follow that line of research. Firstly, based on the arguments above, we assumed that the source is within the Norwegian Caledonides, and we considered that there are good enough reasons for present-day Norway being the first area to search. Secondly, due to the grain size and metamorphic grade of the schist (low-grade), we assumed that the source must be within the low-grade metamorphic parts of the Norwegian Caledonides. Thirdly, we assumed that a quarry site of such potential importance would have been used at later stages in history, and would be known. These assumptions lead to the following strategy of research:

• Review of known whetstone quarry sites (Geological Survey of Norway database: http://geo.ngu.no/kart/mineralressurser/)
• Comparing visual appearance, lithology and cooling ages

The review is summarised in Table 1, where two areas (Mostadmarka and Soknedal, both in Trøndelag) seemed to represent the best match. It is interesting to note that both these quarry areas are situated in the same geological unit, the Støren Nappe. This unit also contains schist formations at Stjørdal, about 15 km NNE of Heingruva, sharing similar cooling history/ages. The Stjørdal schists were actually discussed as a potential source of the "Caledonian whetstone" by Crosby and Mitchell (1987).
Having established the hypothesis that the quarry sites in the Mostadmarka area (20-25 km ESE of Trondheim) and the Soknedal area (50 km SSW of Trondheim) were the most likely candidates for a source, the next step was to confirm or reject this hypothesis through geochemical and petrographic analyses. In addition to the quarries in question within these two areas, we ran similar analyses on some of the less likely candidates in order to visualise petrographic and geochemical contrasts. Thus, the data from the four quarries were compared with data from the Årekoll and Eidsborg quarries. The latter were also included to confirm or reject that Eidsborg really was the source of the light-grey schist in Ribe, and thus to provide further evidence of the viability of the chosen methods.
Methodology
In total, 11 quarries are included in this study and compared with whetstones from Ribe: two quarries in Mostadmarka (Rollset and Heingruva) and two quarries in Soknedal (Skjeftli and Sørlifjell) in the Trøndelag region, six quarries in Eidsborg in Telemark (Jar, Storeufs, Lofthus, Steinbergnut, Gunnheim and Berg), and one quarry in Hardanger in Hordaland (Årekoll) (Fig. 2). The six quarries in Eidsborg bear traces of large-scale, long-term production, indicated in part by their size and extent, and all are considered possible sources for the Ribe whetstones. The two quarries in Mostadmarka are located at a distance of 8 km from each other (Fig. 3). The size of the Heingruva quarry indicates large-scale production, most likely intended as surplus, whereas both the Soknedal quarries are rather small. The Rollset quarry is difficult to evaluate, since it is partly covered by vegetation. Undiscovered quarries in the area are expected to be found within the same unit (Fig. 3). The Årekoll quarry in Hardanger, which supplied whetstones of fine-grained quartz-mica schist, is known from the early nineteenth century, but may be older. The quarry has not been studied in detail, but a geological description, including a thin section study, has been carried out (Jansen 2001). The analysis proceeded along the following five steps:
1. Visual characterisation of whetstones from Ribe (446 pieces) on site in order to identify whetstones with a possible provenance within present-day Norway
2. Visual characterisation and microscopic study on site in Ribe of whetstones with a possible provenance in present-day Norway, and division of the objects into three lithological groups (306 pieces)
The remaining steps comprised the petrographic and geochemical whole-rock analyses described in the following sections. Due to practical limitations, it was not possible to run the same analytical procedure for all samples from Ribe. The small size of many of the samples was a constraint, especially for the geochemical whole-rock analyses; thus, not all the samples used for the petrographic analysis were suitable for geochemical analyses. However, samples for both the petrographic and the geochemical analyses were selected from all three lithological groups (Table 2). Because the petrographic and geochemical analyses both largely confirmed the integrity of the groups established through visual characterisation and microscopic study, we consider the number of samples analysed in the two final steps sufficient to provide reliable results. Analyses were carried out by the Geological Survey of Norway (NGU).
On-site Visual Examination
In step 2, 306 items with an assumed west-Scandinavian origin were divided into the three lithological groups seen in Table 2.
Petrographic Examination and Thin Sections
Thin sections were made from 11 whetstones from two sites in Ribe dating from the eighth century: two of light-grey schist, eight of very fine-grained schist (one of which was of a dark variety and more coarse-grained than the 'typical' very fine-grained schist), and one of biotite schist.
Not surprisingly, comparison of the 11 samples with samples from the quarries in Table 1 shows that the whetstone samples of the light-grey schist type have a petrography similar to that of the quarries in Eidsborg. The petrographic examination also suggests a strong match between the very fine-grained schist in Ribe and that of the Mostadmarka quarries in Trøndelag (Heingruva and Rollset). Both visually and in thin section, the eight whetstones of very fine-grained schist, including the more coarse-grained and darker fragment, turned out to be indistinguishable from the samples collected at the two quarries (Fig. 4). To date, no other whetstone quarry displaying the same geological characteristics has been found in the Scandinavian Peninsula. The rocks in question are not common constituents of the Caledonian bedrock in Scandinavia. Although the possibility remains that similar lithologies might yet be identified elsewhere in the Trøndelag region and in the south-western part of Hordaland, there is little likelihood of such rocks containing undiscovered whetstone quarries (Fig. 2). The two quarries in Mostadmarka, however, share a similar geology, complicating attempts to distinguish them from each other. A sample from the third group from the on-site visual examination, biotite schist (ASR 9 × 311), could not be matched with any of the quarry-site samples; the provenance for this group remains unknown. One pinkish fine-grained fragment (ASR 9 × 400, id 200057107) also came out with an unknown provenance (Table 3).
Geochemical Whole-Rock Analyses: Major Element Analyses by XRF
Fourteen samples from Ribe (Table 2) were analysed by XRF for major elements at NGU laboratories (PANalytical Axios 4 kW XRF). Nine of the samples were of the very fine-grained schist, three were of light-grey schist (assumed Eidsborg), and two were of biotite schist. These were compared with a total of 60 samples from the quarries selected for comparison (see above).
Fig. 4 (caption, in part): e Rollset quarry, Mostadmarka; f sample from Ribe, similar to (e); g Rollset quarry, Mostadmarka; h sample from Ribe, similar to (g); i Skjeftli quarry, Soknedal; j Sørlifjell quarry, Soknedal; k Jar quarry, Telemark; l sample from Ribe, similar to (k). Photos: Tom Heldal
In Fig. 5a-c the Ribe samples are plotted against groups of quarries in various combinations of Al2O3, MgO, Fe2O3, SiO2, TiO2, and MnO. The results are rather conclusive: the quarries separate into distinct clusters; samples assumed to originate from the Eidsborg quarries (light-grey schist) plot within the Eidsborg cluster, whereas samples assumed to originate from the Mostadmarka quarries (very fine-grained schist) plot within the Mostadmarka cluster. Notwithstanding the possibility of additional, undiscovered sources discussed above, the XRF analyses indicate with reasonably high confidence that the very fine-grained schist from Ribe was quarried in the Mostadmarka area.
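A cross-plot of the kind shown in Fig. 5a can be produced along the following lines. This is a schematic sketch only: the oxide values below are invented placeholders, not the published XRF results; it simply illustrates how Ribe samples falling inside a quarry cluster are read as sharing that provenance.

```python
# Schematic major-element cross-plot (Al2O3 vs MgO) with placeholder wt% values.
import matplotlib.pyplot as plt

quarry_groups = {
    "Mostadmarka": [(14.2, 4.8), (14.8, 5.1), (15.1, 4.6)],
    "Soknedal": [(13.5, 4.2), (13.9, 4.5)],
    "Telemark (Eidsborg)": [(11.0, 2.1), (11.4, 2.3), (10.8, 2.0)],
}
ribe_samples = {
    "Ribe, very fine-grained schist": [(14.6, 4.9), (15.0, 4.7)],
    "Ribe, light-grey schist": [(11.2, 2.2)],
}

fig, ax = plt.subplots()
for label, points in quarry_groups.items():
    xs, ys = zip(*points)
    ax.scatter(xs, ys, label=label, alpha=0.6)   # quarry reference clusters
for label, points in ribe_samples.items():
    xs, ys = zip(*points)
    ax.scatter(xs, ys, label=label, marker="x", s=80)  # archaeological samples

ax.set_xlabel("Al2O3 (wt%)")
ax.set_ylabel("MgO (wt%)")
ax.legend()
plt.show()  # a Ribe sample plotting inside a quarry cluster is taken to indicate that provenance
```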
The two biotite samples do not fit into any of the quarries; they both show deviating geochemistry in at least one of the three plots. It is therefore reasonable to conclude that they have their origin somewhere other than the sampled quarries.
Considerable overlap among the major-element plots of samples from the two quarries in Mostadmarka precludes determining whether the Ribe whetstones originate from one of the quarries exclusively or from both.
Results
As suspected, whetstones of light-grey schist most likely originate from the quarries in Eidsborg. Likewise, the present study demonstrates with a high level of confidence that whetstones of very fine-grained schist from Ribe were quarried in the Mostadmarka area in Trøndelag (Table 4). We suspect that these results also apply to whetstones of varieties of this latter type of schist found at other eighth- to eleventh-century sites in Scandinavia and the North Sea zone. Although we cannot fully exclude the possibility of undiscovered quarries, we consider it highly probable that the whetstones of very fine-grained schist from Ribe were quarried in Mostadmarka. The schist in the Mostadmarka quarries displays distinct colour nuances, and whetstones of both purple and grey colours seem to have been extracted here. We also suspect that there are several additional, undiscovered quarry sites of this rock type within the Mostadmarka area, particularly in the vicinity of Heingruva and Rollset. These results open several avenues of future research: further development of the methodology to obtain more exact provenance, analyses of whetstones from other sites in Norway, and more detailed investigations of the quarries in Mostadmarka.
Our study thus indicates that quarries within two large production sites in present-day Norway, Eidsborg and Mostadmarka, were major suppliers of whetstones to Ribe. In Mostadmarka, the size of the quarry Heingruva is indicative of large-scale production. Quarrying here is known to have taken place in the 1600s, with some suggestion that the quarry was also used in the nineteenth and twentieth centuries (Bakmark and Rø 2014; http://geo.ngu.no/kart/mineralressurser/). Today, the quarry survives as a wide, water-filled shaft of 15 × 30 m, with an estimated depth of between 5 and 15 m. The bedrock is visible as a semi-circle around the waterhole, entering into the spoil-heap to the west (Fig. 6). Traces of quarrying, both quarried rock and spoil, are visible in an area approx. 200 m west of the water (Fig. 7). The Rollset quarry is a small quarry located in a sloping, quite overgrown area, with the quarried rock stretching in an east-west direction. Because of the dense vegetation, neither extraction traces nor the spoil-heaps are easy to identify. Within this area, another quarry is known to have been in use in the twentieth century, and the area probably contains a number of small quarries hidden under the vegetation.
Chronological Distribution of the Ribe Whetstones
The site ASR 7 is considered to represent the oldest part of Ribe. Deposits and finds from ASR 7 begin c. 705 and come to an end c. 795, not because activities ended, but because more recent deposits, except for the context G2, have been truncated (Feveile 2006:40). A total of 107 fragments of whetstones were found during the excavation. Only whetstones from reliably datable contexts are included in this study (89 pieces), comprising finds from the workshop sequences VH1 to VH6 and the 'Øverste skelgrøft' G2 (Table 5) (Frandsen and Jensen 2006:18).
Nearly half of the whetstones dated to 710-850 (47%) are assumed to have an origin in Eidsborg or Mostadmarka. Only one of 11 whetstones from contexts pre-dating 760 is west-Scandinavian; after 760 the share rises to more than half of the material. Only four whetstones date to between 795 and 850; three of them originate from the quarries in Mostadmarka. In total, whetstones from Mostadmarka are over four times as common as Eidsborg stones.
The site ASR 9 Post Office comprises approx. 100 square meters, and was excavated in 1990-1. The stratigraphy allowed the definition of phases that could be dated within narrow timeframes based on dendrochronology and artefacts (Feveile and Jensen 2006:119, 24-29). A total of 411 whetstones were retrieved during the excavation, but only those from datable contexts are considered here (Table 6). 7 64% of the total number of whetstones from ASR 9 is assumed to originate in Eidsborg or Mostadmarka. Numbers of stones are too low in the pre-725 phase (B) to provide percentages of significance, but possible Mostadmarka whetstones occur there. In the 725-60 phase (C) whetstones from Eidsborg and Mostadmarka constitute around 40% of the total, thereafter increasing gradually to 75% in the 790-800 phase (F). In the first half of the ninth century (G, H-I) they constitute more than half of the total. All four fragments studied from ASR 4M75 and ASR 5M74, dated to 720-800, originate in Mostadmarka, whereas the two fragments from ASR 13 originate in Eidsborg (Table 7).
Fig. 5 a Ribe samples plotted against groups of quarries. Al2O3 versus MgO define two distinct trends: one following the Mostadmarka and the Soknedal quarries, and one defined by the Telemark quarries. Further to the right is the Hardanger quarry. The very fine-grained schist from Ribe plots clearly within the Mostadmarka quarries, except for one sample that also fits with the Soknedal quarries. Light-grey schist from Ribe is clustered within the samples from the Telemark quarries, while the two biotite schist samples do not fit any of the quarry areas. Illustration: Tom Heldal, Ingvild T. Bøckman. b Ribe samples plotted against groups of quarries. Ratio diagram showing basically the same pattern as (a), but here the separation between the Mostadmarka/Soknedal quarries and the Telemark quarries/Hardanger quarry is much clearer. Moreover, one of the biotite schists plots clearly outside the quarry areas. Illustration: Tom Heldal, Ingvild T. Bøckman. c Ribe samples plotted against groups of quarries. SiO2 versus MnO, showing that all the Ribe samples of very fine-grained schist plot within the Mostadmarka cluster, and none within the Soknedal cluster. Note that the Ribe light-grey schist samples are confined entirely to the Telemark quarry cluster. Illustration: Tom Heldal, Ingvild T. Bøckman
Table 6 Whetstones from ASR 9 Post Office assumed to originate from Eidsborg and Mostadmarka. Following the procedure in Feveile and Jensen (2006), stones that cannot be related more precisely than to two subsequent phases are placed in the youngest phase (e.g. stones in D/E are placed in E). The numbers of such finds are italicised. Two whetstones from ASR 9, one possibly from Eidsborg and one from Mostadmarka, are related to the phases I-J, and thus placed in phase J, dated to the twelfth to thirteenth centuries, and consequently not included in this study. a Four of these six stones are from three subsequent phases, E/F/G, and are here related to the youngest of them, G. In the publication by Feveile and Jensen, however, such finds were placed under the group "without phase" (Feveile and Jensen 2006). Consequently, the total number of whetstones from phase G is also increased by four in relation to Feveile and Jensen 2006, Fig. 9
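The per-phase percentages quoted above follow from dividing the number of stones assigned to Eidsborg or Mostadmarka by the phase totals. A minimal sketch of that arithmetic is given below; the counts are placeholder values standing in for the actual figures in Table 6.

```python
# Illustrative per-phase shares of west-Scandinavian (Eidsborg/Mostadmarka) whetstones.
# The counts below are hypothetical stand-ins, not the published Table 6 figures.
phase_counts = {
    # phase: (stones assigned to Eidsborg or Mostadmarka, total whetstones in phase)
    "C (725-760)": (8, 20),
    "F (790-800)": (107, 143),
    "G, H-I (800-850)": (30, 55),
}

for phase, (west_scandinavian, total) in phase_counts.items():
    share = 100 * west_scandinavian / total
    print(f"{phase}: {share:.0f}% Eidsborg/Mostadmarka ({west_scandinavian}/{total})")
```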
Summing up, 47% of the whetstones from ASR 7 and 64% from ASR 9 (60% of all whetstones from the two sites) are assigned to two quarry sites in the western part of the Scandinavian Peninsula. 8 Whetstones of purple very fine-grained schist considered to originate from western Scandinavia were previously identified in phase C (725-760) by Feveile and Jensen, but without any provenance to specific quarry sites (Feveile and Jensen 2006:140). Thus, this study identifies for the first time the origin of the very fine-grained schist, both purple and dark grey, documenting long-distance trade in commodities from Arctic Scandinavia to Ribe in the eighth to mid-ninth centuries. The material suggests that from the second quarter of the eighth century, possibly before, people in Ribe received a substantial proportion (from c. 760 a majority) of their whetstones from northern supply networks. The occurrence of these stones in Ribe's earliest phase (B) coincides with the earliest occurrence of finished reindeer combs there, and the peak in the proportion of west-Scandinavian whetstones in phase F (790-800) overlaps with the earliest occurrence of reindeer antler raw material (phase E-F, 780-800) (Ashby et al. 2015).
Thus, from the early 800s onwards, a gradual shift towards a higher proportion of Eidsborg-type whetstones can be observed in both Kaupang and Ribe. 9 Despite Kaupang's proximity to Eidsborg, about 130 km as the crow flies, Mostadmarka-type whetstones there constitute a larger proportion relative to Eidsborg-type stones (approx. 6.8:1) than they do in Ribe (approx. 2:1) for the period when the two sites existed in parallel (c. 800-850). The sailing distance from Mostadmarka is about the same to Ribe as to Kaupang, approx. 1100 km. The increase in Eidsborg whetstones in Ribe c. 820-850 (phase H-I) may be connected to the establishment of Kaupang and increased traffic between the two sites across Skagerrak, a sailing distance of some 460 km. The contemporary limited but abrupt increase in soapstone items in Ribe (above, n. 4) supports this suggestion. Until Eidsborg whetstones became predominant in the eleventh century, Mostadmarka stones constituted the majority of whetstones in Ribe, Kaupang, and Hedeby. 10
Discussion
It appears that from the early eighth century and well into the twelfth, there was a steady supply of whetstones from Mostadmarka to markets and urban sites in southern Scandinavia. The precise volume is hard to determine, but a rough estimate suggests that in Ribe an annual average of approx. 170-200 fragments of Mostadmarka and Eidsborg whetstones were deposited. 11 In some cases, multiple fragments may have come from the same whetstone; however, many of those who acquired whetstones in Ribe, particularly during the site's seasonal-marketplace phase, will have used them elsewhere and discarded the remains there. Hence, as an absolute minimum, an average annual supply to Ribe of several hundred Mostadmarka and Eidsborg whetstones (until c. 820 nearly all of them from Mostadmarka) seems likely, sufficient to cover more than half of the demand among craftsmen, traders, and other buyers there. The likely shipping site for the Mostadmarka whetstones is Lade, 20-25 km north-west of the two Mostadmarka quarries (Figs. 1 and 3). From the late ninth to the early eleventh centuries the prominent manor Lade (Old Norse Hlaðir, 'storing place' or 'loading place') was the residence of five generations of Lade Earls, high-level political agents in Scandinavia. The first of these, who apparently already resided at Lade (Schreiner 1928:9-10), became King Harald Fairhair's earl, while the last ruled Norway as the earl of the Danish kings Harald Bluetooth and Sven Forkbeard. The manor is situated in the second-richest agricultural region on the western coast (after Jaeren in Rogaland, Fig. 2), on a small promontory with several natural harbours on the southern shore near the mouth of the Trondheim Fjord.
According to the skaldic poem Háleygjatal ('Enumeration of the Háleygir', composed c. 985), the Lade Earls originated in Hålogaland, which comprises the approx. 650 km of coastland from northern Trøndelag to the Malangen/Lyngen area (Fig. 2). Further north and east lay Finnmark, the land of the Finnas (Sámi). From these northern regions came highly desirable goods that the Háleygir obtained from the Finnas. Describing his commodities during his visit to King Alfred's court c. 890, Ohthere from Hålogaland listed walrus tusk (ivory), rope from walrus and seal hide, down and feathers, and fur from marten, bear, otter, and reindeer (Bately 2007:46). This trade had a long history prior to Ohthere's time. Jordanes, in his mid-sixth-century History of the Goths (ch. 19, Mierow 1915:56), writes of the Adogit (alogii) people in Scandinavia, who live where summer has 40 days without night and winter 40 days without sun. The description fits Hålogaland, and the Adogit are commonly identified as the Háleygir (Sitzmann and Grünzweig 2008:21-22; Svennung 1967:32-41). Jordanes also mentions the neighbouring Scrithifinni, apparently the Finnas, whom he identifies as hunters and gatherers. From these northern regions come exquisite furs enjoyed by the Romans, Jordanes reports (ch. 21, Mierow 1915:56).
Evidence collected in Hålogaland demonstrates increasingly large and more seaworthy ships and a rise in seafaring from the seventh century onwards (Storli 2006:22), in tandem with an aristocratic stratum with access to long-distance goods. A number of prominent manors have been identified along the coast (Fig. 8; Berglund 1995; Hansen and Olsen 2004; Holberg 2015), but only one of these, in Borg, Lofoten, has been thoroughly excavated: in the remains of the hall section of a 67-meter longhouse, rebuilt to 83 meters, shards from 15-16 glass vessels were found, as well as 36 glass beads, rather evenly spread out over the longhouse's existence (seventh to tenth centuries). Six vessels can be provenanced respectively as Anglo-Saxon (one from the seventh century, two from the eighth), Rhenish (one, eighth century), and Continental (two, ninth to tenth centuries). The beads are less securely provenanced; they were probably imported to Scandinavia and are common occurrences in southern Scandinavia and the Baltic (Holand 2003; Näsman 2003).
Although the Borg finds demonstrate that glass vessels were available to seventh- to tenth-century west-Scandinavian aristocrats, the latter, unlike their peers in southern and southeastern Scandinavia, did not include them in grave furnishings (Holand 2001:164-165). Fortunate depositional and post-depositional circumstances and the sieving strategy applied in the particularly find-rich north-western corner of the hall section appear to be the main reasons for the uniqueness of the artefact assemblage in Borg. 12 However, the frequent occurrence of glass beads and copper-alloy brooches in graves in Hålogaland (Eldorhagen 2001; Vinsrygg 1979 tables II-IV) and elsewhere in western Scandinavia (Røstad 2016:52-92, 273-97) corroborates the eighth-century phenomenon evidenced by the Ribe whetstones: regular long-distance trade from the Arctic to the southern North Sea zone was indeed undertaken from aristocratic manors, including manors other than Borg, in the seventh-eighth centuries.
In written evidence, the trade in commodities acquired from the Finnas through tax, tribute, trade, and plunder is a recurring theme in royal politics from the time of the first king of Norway, Harald Fairhair (reign c. 872-932), until the twelfth century. Harald secured the transport of these and other commodities along the coastal sailing route, the Norðvegr (Fig. 2), by naming local chieftains earls of the main regions along the route; its southern end in Hordaland and Rogaland was his own heartland (Skre 2018b). In Møre he appointed Rognvald, and in Trøndelag/Hålogaland he appointed Håkon, the first Lade Earl (Fig. 2).
Prior to that, a prime driver behind political integration processes along the route seems to have been the securing of seaward traffic in general and, in particular, of the transport of Arctic commodities to sites and markets along the west-Scandinavian coast and in the southern North Sea zone. Integration of Hålogaland and Trøndelag appears to have developed well before Harald's time (Bratrein and Niemi 1994; Holberg 2015; Koht 1919:16; Schreiner 1928:9-10). Lade, positioned at the southern end of this long stretch of land, is a likely place where a variety of Arctic commodities from Finnmark, Hålogaland, and Trøndelag were stored, to be loaded onto ships headed for southern markets; the latter region would contribute whetstones and possibly iron (Stenvik 1997) and furs (Holm 2015; Lindholm and Ljungkvist 2016) from the neighbouring woodlands. The common occurrence since the late sixth century in several east-Scandinavian regions of gaming pieces made from North Atlantic whalebone (Hennius et al. 2018) suggests that the Hålogaland/Trøndelag trade network also extended overland eastward to the Baltic.
The steady supply of Mostadmarka whetstones to southern Scandinavia through the eighth to eleventh centuries, and the quarries' proximity to Lade, suggest that these stones were a common commodity in the ship-bulk of long-distance traders from Trøndelag and Hålogaland. All the commodities mentioned by Ohthere were high-value items for a narrow group of buyers: either luxuries (down, ivory, fur) or sought-after utilities (ship-ropes from hide). These high-quality products from Arctic Scandinavia were in high demand in the aristocratic segment on the Continent and in the British Isles. At the same time, Ohthere's list should not be taken as exhaustive; the commodities recorded would have been of interest to his audience of royal scribes and officials, whereas commodities for craftsmen and the general population, such as reindeer antler (Ashby et al. 2015), oil from marine mammal blubber (Nilsen 2016), gaming pieces from whale bone (Hennius et al. 2018), and whetstones, might not have been deemed worth mentioning in the royal quarters.
All materials in Ohthere's list are perishable and are only preserved in exceptional cases. 13 The identification of the Mostadmarka origin of a substantial portion of whetstones in sites such as Ribe, Hedeby, Kaupang, and Oslo adds a non-perishable commodity to the list. Their continuous occurrence in high volumes at these sites allows their cautious use as a proxy for the trade in commodities from the Arctic transported on the Norðvegr along the western coast of the Scandinavian Peninsula in the seventh to tenth centuries.
Taking Up Raiding in the West (c. 789-850)
The following discussion considers essential constraints and opportunities facing Viking-ship commanders in the 780s-850s, and suggests how these may have influenced their decisions on where to raid in various periods. The discussion is framed by the substantial trade in Arctic commodities along the west-Scandinavian coast evidenced by the Ribe whetstones. The interests of various groups in this trade produced both conflicts and coalitions.
The most prominent concern of Viking-ship commanders was the coalition between one of their primary targets, the long-distance traders on the Norðvegr, and the kings along the route. While this coalition appears to have existed since the Roman Period, the strengthening of royal authority in the late eighth century posed an obstacle for Viking-ship commanders, who now faced a stronger adversary.
Their second concern, emerging through the 820s-30s, was that they were victims of their own success: the profitability of overseas raiding attracted increasingly greater numbers of ships and men to that enterprise, with the resulting competition reducing their spoils. However, this situation also produced a new opportunity: raiders could join forces in Viking fleets that had the necessary strength to conduct successful raids on prosperous and well-defended sites. By overwintering overseas, Vikings could reduce the danger of retaliation from kings and traders based in the homelands for raiding lands and waters where the latter wanted to maintain peace. Thus, we suggest that Viking raiding overseas began as Vikings became the weaker party in a longstanding conflict in the homelands. That such 'push' factors were the key trigger is supported by the character of the earliest raiding. During the first 15-20 years of overseas raiding Vikings struck at widely dispersed sites (see below), suggesting that they were not 'pulled' to certain lands so much as they were 'pushed' into searching for prey in new waters. As their activities gained volume and momentum through the 820s-30s, the homeland conflict with traders and kings maintained significant influence on where Vikings raided. However, 'pull' factors (e.g. the weakening of the Frankish Emperor's power in the 830s) attained more significance as Vikings overseas gained numbers and strength.
Fig. 8 (caption, in part): The three sea-king zones along the west-Scandinavian coast are indicated (based on Hansen and Olsen 2004:59; Skre 2018b:790). Illustration: Ingvild T. Bøckman
13 Some Continental and British walrus-ivory items from c. 300-870, before hunting began in Iceland and Greenland, are mentioned by Roesdahl (1995:13-14, 2005).
Traders, Kings, and Vikings (Seventh-Ninth Centuries)
Regarding his northward voyage to the Beormas, Ohthere reports that he did not enter their land for unfriðe, often translated as 'because of hostility' (e.g. Bately 2007:45, 56-7). However, Christine Fell (1982-3) convincingly argues that the term has a more specific meaning, namely that Ohthere did not have the personal frið status among the Beormas that would have allowed him to travel safely into their land.
There is no mention of such hindrances on Ohthere's southwards journey along the Norðvegr and into Skagerrak and Kattegat to Hedeby, or indeed across the southern North Sea to King Alfred's court in Wessex. The latter is a case in point for Fell's conclusion (1982-3:96) that frið was a personally held privilege and not necessarily affected by hostilities between people from the very same polities or regions. King Alfred's realm had suffered substantial Viking incursions throughout his lifetime. Apparently, the King still granted frið to this man from Arctic Scandinavia, presumably because he identified himself and was accepted as a trader. Ohthere must have received the same privilege from others to be permitted to travel across the various political and cultural zones on his long route.
Thousands of windblown and mostly barren islands and skerries protected Ohthere's voyage along the west-Scandinavian coast from the rough winds and waves in the Atlantic Ocean. But intermittently along the Norðvegr occur pockets of relatively fertile land. In the best of these were situated manors rich in monuments and lavishly furnished graves spanning the early Bronze Age to the Viking Age. While rich finds also occur in the much more fertile and densely settled districts in valleys and along fjords further inland, the archaeological record of the outer-coast manors is unsurpassed. Among them, in Rogaland and Hordaland, are the five manors that according to the Icelandic saga tradition belonged to Harald Fairhair. Further north are two additional coastal-manor zones, one in Møre and the other in Hålogaland (Fig. 8). Between the three zones the islands and headlands along the sailing route are too barren to support large or numerous settlements.
Avaldsnes, the most prominent of these coastal manors, is situated at a bottleneck on the sailing route's southern end (Figs. 1 and 2). Based on excavations here, Skre (2018b) argues that from the third to the eleventh centuries these manors served as supply bases for the sea kings who exerted authority over the sailing route. By fighting Vikings who lurked in the innumerable islands and bays along the route, the sea kings could provide safe sailing for traders and other travellers. Snorri recounts in Harald's Saga (ch. 22) that every summer Harald and his army searched the islands and outlying skerries, pursued the Vikings that camped there, and drove them over the sea to the west -all the way to the Irish Sea. In the same saga (ch. 24) Snorri writes that Harald outlawed Rolf, the son of his close ally Rognvald, because he had harried in Viken (Fig. 2); the king had strictly forbidden robbery in the realm. Of course, there is ample reason for scepticism about the historicity of Snorri's accounts, committed to parchment some 300 years after the events. Nevertheless, outlawry is precisely the penalty stipulated in the west-Norwegian Gulaþing law code (ch. 314) for those who renounce the frið and ravage the homelands. Scourging one's home district was even worse; it was considered an honourless deed and perpetrators were declared níðingr, irredeemable outlaws.
Frið arrangements with kings and earls who controlled various stretches of the Norðvegr, surely involving payment of shares to the latter, would have been vital for long-distance traders as well as for other travellers. Although aristocrats would have had their own ships and men to defend their lives and cargo, it would have been entirely in their interest to support royal peacekeepers along the route. While the power balance between Vikings on the one hand and traders and kings on the other varied through the centuries, the constellations and roles would have remained relatively constant during the periods when manors along the route show evidence of sea-kings' presence. At Avaldsnes such evidence (i.e. monuments, remains of halls and other prominent buildings, and extensive food processing) is found from the mid-third to the eleventh centuries (Skre 2018a).
The same person could of course be a trader, Viking, and royal warrior in different waters or at different times; after all, the same skills were needed by all three: seamanship, negotiating abilities, and martial proficiency. While some sea kings mentioned in sagas and skaldic verse were evidently Vikings, Harald appears to have been a warrior who rose to power in a peacekeeping sea-king milieu along the sailing route in Rogaland and Hordaland, subsequently extending his realm from the sea route to the inland (Skre 2018b). Other rulers (e.g. Olav Tryggvason, reign 995-1000) appear to have spent time in service to kings in other lands and participated in Viking raids before they became kings in their own right.
From the late ninth century, Harald Fairhair and his earls controlled practically the entirety of the Norðvegr sailing route. Already by the late eighth century, kingship appears to have developed at the southern end of the route, in Rogaland and southern Hordaland. Based on new datings from three assembly sites in Rogaland-so-called courtyard sites-Iversen (2018) provides empirical support for Myhre's (1992) suggestion that in the eighth century trans-regional royal power emerged there. Independently of Iversen and based on other types of evidence, Stylegar and Bonde (2016) date this rise of kingship by the southern end of the Norðvegr to the late eighth century. They maintain that kingship there was modelled after the Anglo-Saxon version, and that the burial rites of the two first Scandinavian ship graves near Avaldsnes-the ships were built c. 770 and 780 and entombed in 779 and the early 790s respectively-were modelled on the Sutton Hoo ship burial in East Anglia (Stylegar and Bonde 2016:10-13).
The increase in the 780s and the peak in the 790s in the total number of whetstones deposited in Ribe each year (annual averages in ASR 9 of 3.3 and 14.3 in the two decades, with N = 33 and N = 143 respectively; see Table 6) reflect increased activity there. The reason for the increase is probably that this period saw the heyday of trade in the Channel and the southern North Sea zone (Coupland 2002; Verhulst 2002:92). However, the percentage of Mostadmarka whetstones also rises in these two decades to a peak of 75% (ASR 9), probably reflecting a vast increase in the trade in Arctic commodities, in both relative and absolute volume. Likely, the main reason for this is the existence of a polity in Rogaland and southern Hordaland that was sufficiently strong to guarantee safe sailing in adjacent waters, and with which Arctic traders could thus make frið arrangements. Contemporary developments of royal authority over Viken (Fig. 2) and southern Scandinavia under Sigfred and his son Godfred, kings of the Danes, may be the reason why Frisian and Slavic traders sought the town Kaupang from its very founding c. 800 (Skre 2011a).
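The annual averages cited here amount to dividing the phase counts by the length of each phase; a quick check, assuming ten-year spans for the two decades, is sketched below.

```python
# Quick check of the annual deposition averages quoted above (ASR 9).
# Treating each decade as a ten-year span is our simplifying assumption.
asr9_decade_counts = {"780s": 33, "790s": 143}
SPAN_YEARS = 10

for decade, n in asr9_decade_counts.items():
    print(f"{decade}: {n / SPAN_YEARS:.1f} whetstones deposited per year on average")
# -> 3.3 and 14.3 per year, matching the figures cited in the text
```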
The Beginning of Viking Raiding Overseas
It appears that the earliest Viking raiders in the west emerged from the western coast of the Scandinavian Peninsula, suggesting a connection to the contemporary peak in the transport of Arctic commodities along the very same coast and the building up of royal authority there. Following a brief summary of the whens and wheres of the early raiding, these suggested connections are explored below.
Beginning in 789, the earliest reported Viking raids in the west were a series of disparate attacks on coastal settlements from the Bay of Biscay in the south to the Atlantic Scottish Isles in the north (Fig. 1). While, surely, many attacks are unreported, the overall chronology in England, Francia, and Ireland is rather well testified. Northern Scotland is the least reported (Barrett 2008), but was probably targeted early. Based on reported raids, beginning in Portland in Dorset in 789, the English Channel coast from Kent and westwards seems to have been hit first (Downham 2017). Thereafter, raids are reported in Lindisfarne in 793 and Monkwearmouth (both in Northumbria) in 794, Ireland and Scotland in 795 (Ó Corráin 1998), Isle of Man in 798, and Aquitaine and the southern Channel coast in 799 and 800 (Walther 2004:168-170).
From c. 806 raiding centred on Scotland and Ireland, and for the following 30 years nearly all raids were conducted there, escalating sharply through the 820s and 30s (Etchingham 1996: Fig. 2). 14 Ó Corráin (1998:27-28) suggests that from 814-20 (the only period after 806 when chroniclers do not report attacks), Vikings were busy in Scotland, while Colmán Etchingham (1996), based on more recent evidence, has disputed the notion that the lack of chroniclers' reports necessarily reflects a hiatus in raiding in Ireland for those years. From the mid-830s, Viking activities entered a third phase: larger armies attacked Ireland, England, and Francia, penetrated inland, and in some cases began overwintering.
There is little Insular or Continental evidence as to where in Scandinavia the ship crews of the pre-mid-830s raids originated; the information in the Anglo-Saxon Chronicle that the ships at Portland in 789 came from Hordaland is an addition made to the chronicle a century after the event (Downham 2017) and is thus less reliable. The origin of the culprits in the early raiding in northern Great Britain and Ireland, however, is indicated by Scandinavian evidence. Except for a single piece in eastern Agder, the buried Insular loot from around 800 or slightly earlier has been retrieved solely in the five neighbouring regions of northern Hordaland, Sogn, Nordfjord, Møre, and Trøndelag (Fig. 8; Wamers 1985:49-56). Admittedly, such items may already have been old when taken, and burial customs in southern Scandinavia provide a meagre basis for quantitative comparison with those of the western Scandinavian Peninsula. Still, the distinct concentration of the earliest loot within a rather narrow area instils confidence in the conclusion that most raiding into Scottish and Irish waters in the pre-806 phase, possibly also somewhat later, emanated from these few regions on the west-Scandinavian coast (Ó Corráin 1998:1-2; Wamers 1985:85; Williams 2008:193). From Heen-Pettersen's (2014) analysis of the Trøndelag finds it appears that the earliest raids were organised from prominent and well-established aristocratic manors.
14 The Danish King Godfred's raids in Saxony in 804 and in Frisia in 810, and his sons' raid in 815, were not Viking raids but moves in royal territorial disputes. The attack in 820 on several sites in the Empire by a fleet of 13 piraticae naves from Nordmannia, however, was clearly a Viking raid, the only one recorded in the Empire between 800 and 834 (Nelson 1997:22-23; Walther 2004:171).
No such find patterns can guide the search for the origin of the early raiders in England and Francia, but western and southern Scandinavia is a safe bet. The royal power that emerged in southern and western Scandinavia in the late eighth century will have had two effects, both of which produced redundant military capacity. Firstly, royal power would have subdued rivalry between smaller polities and among royal pretenders, and leaders previously engaged in such unrest will have been ready to direct their troops elsewhere. Secondly, royal power provided safe sailing for traders, thus reducing spoils and increasing risks for Vikings who previously had parasitized on traders. Some of this excess military capacity may have joined a king's retinue or taken up long-distance trade. However, the latter enterprise was available only for those who produced or had access to commodities that were in demand overseas. For those who lacked access to such goods or were not involved in the protection of the sailing route, there were few alternatives to taking up plunder in new waters if they hoped to partake of the benefits enjoyed by their peers among traders and in the king's service.
Once rising royal power put the damper on raiding close to home, warriors and ship commanders in southern and western Scandinavia would have looked overseas for alternative hunting grounds. Through several generations' recurring trade ventures in the southern North Sea zone, Scandinavians would have collected knowledge from traders and sailors of lands and waters in all the areas affected by the earliest raids-details would have been extracted from captives taken en route.
Why, then, was raiding c. 806-35 concentrated in Ireland and Scotland? Hypothetically, resistance was weaker there. Elsewhere, Vikings may have been put off by defensive measures, for instance by Offa's upholding in 792 of the obligation of churches and monasteries in Kent to contribute to the defence 'against seaborne pagans with migrating fleets' (Downham 2017:5), by the alleged slaughter of 105 of the Paganae vero naves that attacked Aquitaine in 799, and by Charlemagne's building of a fleet and establishment of watch posts in 800 to defend against pirates that troubled the sea outside Gaul (Walther 2004:168-169). However, on several occasions in Ireland, efficient and successful resistance was indeed mustered (Ó Corráin 1998); a more complex background for the concentration of Viking activities to Ireland and Scotland must be sought.
Another contributing factor to the three decades' confinement of Viking raiding to Ireland and Scotland is that raiding there did not interfere with the prosperous trade in the southern North Sea zone and the English Channel. Royal peacekeepers and traders along the Norðvegr and in southern Scandinavia would have had an interest in preventing raids in the lands and waters where they traded; that Vikings in this period returned to the homelands each year kept them within reach of royal power and aristocratic traders.
In an abrupt shift in the mid-830s, extensive Viking raiding commenced in England and in the Empire. Unprecedented in magnitude, this wave of raiding has been considered to have dealt a blow to craft production and trade in those countries. Dorestad was sacked in 834 and annually again for three consecutive years. The internal disputes between Louis the Pious and his rebellious sons in the early 830s, and among his sons following his death in 840, left the Empire vulnerable. For several decades, coasts and riverbanks from Frisia to Bordeaux were heavily and repeatedly sacked (Nelson 1997; Walther 2004:171-177). In England a large army landed in 835 on the Isle of Sheppey in the Thames Estuary, heralding several decades of intense raiding in England, targeting towns (Southampton in 840 and 842, London in 851), and culminating in the invasion in 865 of the Great Heathen Army that ravaged eastern and northern England for more than a decade.
How did the interests of Vikings come to eclipse those of Scandinavian traders in the mid-830s? This is a complex issue and we will restrict the discussion to suggesting some factors that may have contributed to these developments. As a prelude, the question of precisely how harmful Viking raids were to trade needs to be weighed. Hodges (2006:157-162) has downplayed the significance of raids, arguing that in the 830s, trade in the southern North Sea zone was already dwindling, while Dorestad, London, and Southampton were in recession; thus, Viking attacks were not the main reason for the slump in trade and the abandonment of towns around 850. However, the archaeological evidence he refers to is not dated with sufficient precision to determine whether the recession began before, during, or after the 830s. The trade in Arctic commodities in Ribe appears to have remained on a consistently high level even in the town's final phase in 820-50; there is no indication of Hodges' suggested pre-830s recession. When production and trade in Ribe dwindled, this was probably due to the breaking of trade connections to the south and west, particularly to Dorestad. Clearly, Hodges is right that the recession in Carolingian economy in the mid-800s has a complex background, but his reasons for minimizing the impact of Viking attacks on this development do not seem convincing.
Trade in the Baltic prospered in these years as the eastern riverine routes towards the Finns, Slavs, Khazars, Bulgars, Arabs, and others opened, and urban sites in the Baltic flourished (Callmer 2007). Kaupang's trade links to the southern North Sea zone were broken off c. 850, probably because trade in Ribe and Dorestad tailed off. Contemporaneously, brooch types from the west-Scandinavian coast, which did not occur at Kaupang in the early ninth century, began to predominate in cemeteries there. Evidently, trade routes along the western coast of the Scandinavian Peninsula shifted from the North Sea zone to Skagerrak, Kattegat (Skre 2011b), and the Baltic (Skre 2018c:15), the very route followed by Ohthere.
In view of the above, it seems likely that the Viking raids in England and the Empire from the mid-830s were detrimental to trade in the southern North Sea zone, and were thus contrary to the interests of kings and traders in western and southern Scandinavia. Why, then, were they not stopped?
Firstly, the Danish King Horik's message to the Emperor that he had captured and killed those who had raided Frisia in 836 (Nelson 1997:24) indicates that royal attempts to quell overseas raiding were in fact being undertaken. The battle against returning Vikings that led to Horik's death in 854 may have resulted from such controversies. Secondly, the motivation on the part of Scandinavian kings and traders to stop Viking raids in the North Sea may have been undercut by the viable alternative in the Baltic trade. Thirdly, and possibly most importantly, Vikings gained much greater independence from kings and traders when they began overwintering overseas in the mid-830s; they were less dependent on frið in the homelands and were out of reach for kings and traders based there, hence more difficult to subdue.
In parallel, there is potentially a 'pull' factor behind overwintering overseas. While the first fleets that ravaged Ireland were small, the two fleets that entered the Liffey and the Boyne in 837 were each composed of 60 ships and carried a total of some 3000 men (Ó Corráin 2008:429). Kurrild-Klitgaard and Svensen (2003) hold that because successful raiding attracts other pirates over time, the success of roving raiders produces a 'common pool resource problem', reducing the share of spoils for all raiders. When proceeds decrease below an acceptable level, one solution would be to establish a settlement in the vicinity of the potential loot so as to exclude others from raiding in the surrounding territory. While Kurrild-Klitgaard and Svensen see this as the logic behind state formation and taxation, it may equally well contribute to explaining the two shifts in Viking behaviour in the mid-830s: the start of overwintering and the taking up of raiding in England and the Empire. The motive for establishing well-defended longphuirt in Ireland (Sheehan 2008) may not have been to protect only against the Irish but also against other Vikings who sought to obtain spoils from the region. Thus, defended Viking bases in Ireland and Scotland may have compelled latecoming Viking-ship commanders to look elsewhere for prey. As the numbers of Vikings increased, attacks on prosperous towns and regions in southern England and Francia became realistic ventures. Large fleets that could undertake such operations were formed through agreements between ship commanders, possibly up to 50, each in command of only a few ships (Price 2016:164). Drawing on parallels to seventeenth- to eighteenth-century piracy, Price (2014, 2016) uses the term hydrarchy to characterise this distinctive Viking strategy of joining and splitting up forces depending on the target's strength. It ensured that satisfactory proceeds for all raiders could be obtained from a large force's attack on a well-defended town as well as from a small unit's pillaging of a less rich and less protected monastery or settlement.
The earliest record of overwintering refers to Ireland in 836, the first in the Empire at Noirmoutier Island off Aquitaine in 843, and in England on Thanet Island in Kent in 850. However, all these instances may have been preceded by overwintering in Scotland. Permanent Viking camps in Scotland, possibly also in Ireland, appear to have been the bases from which many of the subsequent raids set out for England and the Empire, not to mention for occasional raids against the west-Scandinavian coast. The paucity on the western coast of the Scandinavian Peninsula of Frankish and south-Anglo-Saxon buried loot, as compared to Northumbrian, Scottish, and Irish loot, suggests that few of those who pillaged in the south returned to the western coast of the Scandinavian Peninsula, and conversely that some of those who pillaged in the north did return to the homelands.
Irish annals mention no kings among the Vikings until 848, after which their deeds are mentioned on several occasions. Their main mission, it seems, was not primarily to raid the Irish, but to assert control over the Vikings who were already based there (Ó Corráin 1998). There has been a lengthy debate as to where these kings ruled (their land is called Laithlinn): in Scotland (Ó Corráin 1998) or in what later became Norway (Etchingham 2014). That question is connected to the debate on the origin and nature of two groups of foreigners identified in the annals in these years, the Finngall ('fair') and the Dubgall ('dark'); the latter arrived to attack the former (Downham 2011). While this discussion involves intricate philological and historical problems that will not be addressed here, we suggest that the conflicting interests of overwintering Vikings on the one hand and west-Scandinavian traders and royal peacekeepers on the other may be of relevance for these debates.
Concluding Comment
As discussed in the beginning of this paper, the primary driver for economic complexity within regions in early medieval Europe appears to be aristocratic wealth. Moreover, that complexity, including commodity production, is a prerequisite for long-distance trade and the establishment of urban markets. However, this model, advocated primarily by Wickham (2008), does not quite suit the early medieval west-Scandinavian case. At least at times in this long period, agents in the aristocratic segment, identifiable along the whole coast since the Roman Period, were engaged extensively in long-distance trade and represented a substantial demand both regionally and during trade expeditions. Additionally, political integration along the coast beginning in the seventh century, if not before, led to the economic integration of substantial regions with diverse resources, providing ample opportunities for developing intraregional economic complexity and wealth.
While the whetstones from Mostadmarka that form the empirical basis of this paper were produced close to the aristocratic manor Lade, it seems that most other commodities that the Håløygi and the Trønder brought into long-distance trade were not produced within their region but were acquired from neighbouring peoples such as the Finnas and the Beormas. Ohthere's report is explicit that his wealth consisted of what he acquired from the Finnas rather than from his own land. While economies may have been more complex further south along the coast, as in Møre, Sogn, Hordaland, and Rogaland, military leaders in these regions appear to have had two conflicting sources of profit: protecting, and parasitizing on, long-distance traders from the Arctic.
Thus, the wealth of the west-Scandinavian aristocracy derived to a lesser degree from intraregional economic complexity; hence the late establishment of markets and urban sites. Exchange of commodities for long-distance trade was not dependent on market sites alone: the Finnas' payments of tax or tribute were bound by personal connections and obligations. Commodities thus acquired, as well as whetstones and other goods produced at or near aristocratic manors, were loaded onto ships that brought them to markets and towns in southern Scandinavia and in the southern North Sea zone, where traders could acquire attractive commodities in exchange. The western coast of the Scandinavian Peninsula appears to be one of what Wickham (2008:22) calls 'a tiny handful of regions whose economic complexity was determined by their commitment to external exchange'.
Variations in intraregional economic complexity, aristocratic wealth, and long-distance trade will be found all over Scandinavia. Similar types of agents as in the west (traders, kings, and Vikings) will also be found elsewhere, but their relative and absolute strength and their constellations will vary from region to region; so too will factors like the size of regions and the socioeconomic significance of landed property, of commodity production, and of long-distance trade. The Viking phenomenon is varied and complex, and the onset of Viking raiding cannot be captured within a single explanatory framework, no matter how extensive and detailed. Still, the current steady flow of new evidence regarding the provenance of materials and the movements of people will undoubtedly continue to remodel, extend, and refine the understanding of this transformative period in Northern Europe.
"year": 2019,
"sha1": "8b05e9f9eccb450fd16bb8a186626e495f66be21",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11457-018-9221-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1724550e8fa8ecfef21fb4fdf7224b39f537786f",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
} |
Relationship between the Structure and Properties of Pharmaceutical Crystals
The majority of drug products are solid dosage forms, most of which contain the drug substance in the crystalline state. This review considers the forces responsible for crystal packing, the various types of pharmaceutical crystals, and the methods used to determine the structure of pharmaceutical crystals. These topics provide background for the main thrust, which focuses on the importance of studying the structure of pharmaceutical crystals, with particular stress on phase changes of crystal forms of drugs during pharmaceutical processing and the implications of different solid forms of a drug for its mechanical properties. The present review does not consider pharmaceutical co-crystals, which could be the subject of another review.
Forces responsible for crystal packing
While the intramolecular interactions bond the atoms in a molecule, the intermolecular forces minimize the energy of the molecules in the crystal and are primarily responsible for the formation of organic crystals. 8) A crystal structure therefore corresponds to a free energy minimum that is not necessarily the lowest (so-called global) minimum. The intermolecular forces may be either attractive or repulsive. The attractive interactions consist of three types: non-bonded interactions (sometimes termed non-covalent interactions) such as van der Waals forces (which depend on the dipole moments, polarizability, and electronic distribution of the molecules) and hydrogen bonds (which require donor and acceptor functional groups), 9) ionic interactions, and electrostatic interactions. The major attractive interactions in most pharmaceutical crystals are hydrogen bonds and van der Waals interactions. 10,11) However, in ionic crystals, electrostatic interactions can significantly affect the overall crystal packing energy. 12) Non-bonded interactions are relatively weak and are generally treated as isotropic, although a more realistic interpretation may require the inclusion of anisotropy in the treatment. 13) Hydrogen bonds, the energies of which are in the range of 1-10 kcal/mol, are anisotropic and directional. 10) The magnitude of the sum of the forces acting on a molecule, and the energies involved in the interactions of individual atoms of a molecule with atoms of the surrounding molecules, may be estimated from the sublimation energy of the molecular crystal. 14) For most molecular crystals, the sublimation enthalpy is within the range of 10-25 kcal/mol. 15) The arrangement of molecules in a crystal determines its physical properties and, in some cases, its chemical properties. 16) The physicochemical properties of the solid drug can affect its performance. Thus, an understanding of the crystalline state leads to an understanding of the drug properties, which is crucial for preformulation and formulation in the pharmaceutical industry.
Types of pharmaceutical crystals
The molecules in an organic pharmaceutical crystal may be chiral or achiral. Some pharmaceuticals are salts. Based on its internal structure, a pharmaceutical crystal may be a molecular adduct (hydrate or solvate), or may be one of a group of polymorphs, as shown in Table 1 and explained below.
Polymorphism, in general, denotes the ability of a substance to exist as two or more crystalline phases that have different arrangements and/or conformations of the molecules in the crystal lattice. 17) Conformationally rigid molecules exhibit orientational, or packing, polymorphism. Conformational polymorphism arises when a flexible molecule adopts different conformations in different crystal structures. 18) Polymorphs can be classified as enantiotropes or monotropes, depending on whether or not one form can transform reversibly to another. 19) Enantiotropes are members of a pair of polymorphs whose mutual transition temperature is less than the melting point of either polymorph. Each enantiotrope has its own temperature range of stability. Monotropes are members of a pair of polymorphs that have no mutual transition temperature. One monotrope is always more stable than the other polymorph under all conditions in which the solid state can exist.
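The enantiotrope/monotrope criterion described above can be expressed as a simple decision rule. The sketch below is illustrative only; the function name and the example temperatures are invented for the purpose of the example.

```python
# Decision rule sketched from the definitions above: a polymorph pair is
# enantiotropic if a mutual transition temperature exists below the melting
# points of both forms; otherwise it is monotropic. Temperatures in kelvin.
def classify_polymorph_pair(transition_temp_k, melting_points_k):
    """Return 'enantiotropic' or 'monotropic' for a pair of polymorphs.

    transition_temp_k: mutual transition temperature, or None if no such
    temperature exists below the melting points of the two forms.
    """
    if transition_temp_k is not None and transition_temp_k < min(melting_points_k):
        return "enantiotropic"
    return "monotropic"

print(classify_polymorph_pair(368.0, (387.0, 393.0)))  # -> enantiotropic
print(classify_polymorph_pair(None, (410.0, 425.0)))   # -> monotropic
```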
A crystal is termed a molecular adduct when its lattice consists of more than one chemical component, 16) and includes solvates and hydrates. A solvate is a solid phase containing solvent molecules in addition to molecules of the major component in the crystal lattice. When the solvent is water, the solid phase is termed a hydrate. Molecular adducts can be stoichiometric or nonstoichiometric in nature. 16,20) Clathrates 21) are special types of molecular adducts that consist of two distinct components, a relatively rigid host and a quite mobile guest. Within clathrates, the guest molecules lie trapped in closed, three-dimensional cavities or cages formed by the crystalline structure of the host. 22) The term clathration is used instead of solvation when there is no specific interaction between the solvent and solute. 23) Approximately one third of active pharmaceutical substances are capable of forming crystalline hydrates. 24) Solvates and hydrates generally demonstrate different solubilities and consequently different intrinsic dissolution rates (dissolution rates per unit surface area) than their unsolvated counterparts. 3) Moreover, the stability profiles of hydrates and solvates at various temperatures and at different vapor pressures of water or organic solvents differ from those of the unsolvated crystal form. 25) These differences can influence formulation, processing, and stability under various storage conditions of the drug compound, as well as in the pharmaceutical product.
Polymorphism and the formation of molecular adducts are also common among pharmaceutical salts, 26) leading to unique molecular environments and physicochemical properties that differ from those of their respective free acid or base. Polymorphism in a chiral drug 27,28) can be exhibited by individual enantiomers (for example, carvoxamine 29) and nitrendipine 30)) as well as by racemates (for example, mandelic acid 31)) and can be expressed by the interconversion between different types of racemates, as shown by nicotine derivatives 32) and sodium ibuprofen. 33)

Determination of the structure of pharmaceutical crystals

Methods for determining the structure of pharmaceutical crystals fall into three broad categories (Table 2): (1) methods utilizing single crystals, (2) methods utilizing powder X-ray diffraction patterns, and (3) methods that determine crystal structure from molecular structure alone, i.e., ab initio methods. Each of these methods may be further subclassified, as in Table 2. The salient features of the three methods are outlined below.
Method utilizing single crystals
This method requires crystals of suitable size and quality. 34,35) In general, the minimum dimension along each axis should exceed 50 μm for single crystal X-ray diffraction. Although use of synchrotron X-ray radiation has considerably reduced this minimum crystal size, single crystal X-ray diffraction still remains the most commonly used method. The principles and methods of single crystal X-ray diffraction for organic molecules are described in Stout and Jensen. 34) It is sometimes difficult to obtain the desired compound as single crystals of adequate size and quality. The material may be available only as a polycrystalline powder. Moreover, solving the structure of certain types of crystalline substances may be beyond the capability of single-crystal techniques, 3) even when performing microcrystal diffraction with a synchrotron radiation source. In particular, metastable polymorphic forms that are isolated by rapid crystallization from the melt or that rapidly grow from solution may be highly flawed crystals. Desolvation processes commonly result in crystals that appear to have the same particle size as the crystals from which they were formed; however, upon examination by polarized light microscopy, these crystals are usually composed of microcrystalline aggregates. 3) Twinned crystals contain growth sectors with different orientations that are related by symmetry to one another; twinning laws relate the orientation matrices of the different growth sectors. However, solving twinned structures by single crystal diffraction is not always a straightforward task. 36) Crystal structure determination using single crystal diffraction is the most reliable technique. However, when suitable single crystals are not available, techniques utilizing powder X-ray diffractometry 37) and ab initio methods can be used.
Methods utilizing powder X-ray diffraction (PXRD) patterns
Methods in this category include traditional methods and direct-space approaches. The traditional approach for crystal structure solution from PXRD data is to extract the integrated Bragg intensities of individual reflections directly from the PXRD pattern. 38) Once the integrated intensities are known, an electron-density map (assuming X-ray radiation) is constructed using the same techniques that have been developed for single-crystal diffraction data. To extract the integrated intensities, various modifications of the Pawley 39) or Le Bail 40) methods are commonly used. Variants of this basic idea have been applied successfully to organic systems with up to 31 non-H atoms. 41) Although traditional techniques for structure solution from PXRD data have been applied successfully in several cases, 42,43) these techniques have certain intrinsic limitations 44) and organic molecular crystals represent a particularly challenging case. Peak overlap can create major difficulties in extracting intensities from PXRD patterns, which limits the complexity of structures that can be solved by traditional methods. 45) Traditional methods have been used to determine the structure of inorganic and/or non-pharmaceutical crystals and include the Patterson method, 46) direct methods, 47) and methods of entropy maximization and likelihood ranking. 48) Direct-space approaches postulate structural models in direct space, 43,49,50) independently of PXRD data. The suitability of these models is assessed by direct comparison of the PXRD patterns calculated from these models with the experimental PXRD patterns. Most direct-space approaches are stochastic in nature, and so it is recommended that calculation of the structure solution be repeated several times from different random starting populations. Possible methods that can be used to locate the global minimum within direct-space structure solution include Monte Carlo, 50,51) simulated annealing, 49,52) genetic algorithm techniques, 53) and techniques employing a systematic search approach using a grid-based search with lattice energy calculations. 54) Besides determination of crystal structures from PXRD, simulated annealing can also be used for improved predictions in ab initio crystal structure determination. In general, this approach can be used in conjunction with other techniques for improved predictions. 55) Direct-space approaches were utilized to determine crystal structures of a series of organic compounds. 56) However, in this case, unit cell parameters and space groups were obtained from published work, and only the steps of unit cell refinement and structure solution were performed. Moreover, not all the compounds tested were pharmaceutical crystals. The reader is directed to appropriate references for underlying theory and mathematical treatment on powder indexing, 57) pattern decomposition and unit cell refinement using the Pawley method, 39) the Monte Carlo method, 58) the Metropolis importance sampling technique, 59) simulated annealing, 60) and structure refinement using the Rietveld method. 45,61,62) Because the PXRD pattern is collected from a sample of crystallites of random orientation, structure solution by this method circumvents the problem of crystal quality and twinning through the nature of this experimental approach. 63) Hence, it is also not necessary to apply the twin laws with the powder method.
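To make the direct-space strategy described above concrete, the sketch below (not part of the original review; all names and parameters are illustrative) shows how a trial structural model, parameterized by a handful of degrees of freedom, can be scored against an experimental PXRD pattern and explored by simulated annealing with a Metropolis acceptance rule. The pattern calculator calc_pattern is left abstract, standing in for whatever routine places the molecule in the trial unit cell and computes Bragg intensities.

```python
# Illustrative direct-space structure-solution loop (simulated annealing with a
# Metropolis acceptance rule). calc_pattern(x) is a hypothetical stand-in for a
# routine that builds a trial crystal structure from the degrees of freedom in x
# (position, orientation, torsion angles) and returns calculated PXRD intensities.
import math
import random

def r_factor(i_calc, i_obs):
    """Profile-style agreement factor between calculated and observed intensities."""
    num = sum((c - o) ** 2 for c, o in zip(i_calc, i_obs))
    den = sum(o ** 2 for o in i_obs)
    return math.sqrt(num / den)

def anneal(i_obs, calc_pattern, x0, step=0.05, t0=0.1, cooling=0.999, n_iter=50000):
    """Search the direct-space model parameters x for the best match to i_obs."""
    x = list(x0)
    r_cur = r_factor(calc_pattern(x), i_obs)
    best_x, best_r, t = list(x), r_cur, t0
    for _ in range(n_iter):
        trial = [xi + random.uniform(-step, step) for xi in x]
        r_try = r_factor(calc_pattern(trial), i_obs)
        # accept all improvements; accept worse moves with Boltzmann-like probability
        if r_try < r_cur or random.random() < math.exp((r_cur - r_try) / t):
            x, r_cur = trial, r_try
            if r_try < best_r:
                best_x, best_r = list(trial), r_try
        t *= cooling  # cooling schedule
    return best_x, best_r
```

In line with the stochastic nature of these methods noted above, such a search would normally be restarted several times from different random starting models.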
Crystal structure determination from PXRD patterns can, however, be complicated. For powder diffraction patterns, the reflections from different crystal planes are averaged over directions and projected onto a single variable, the diffraction angle (2θ). This averaging makes the reconstruction of the underlying crystal structure much more difficult than for single-crystal diffraction patterns. 43) Space group determination on powders is more ambiguous than with single crystal diffraction, because of limitations of the regions in the pattern where systematic absences are free from peaks due to other reflections. Also, poor-quality PXRD patterns preclude their successful indexing. Furthermore, preferred orientation affects the relative intensities of given peaks and hinders the correct solution of the pattern. 64) Determining crystal structures from PXRD data is an important and emerging discipline. There is still considerable potential for the continued development and improvement of the methodologies in this field. 43) Also, during crystal structure determination by PXRD patterns, other analytical techniques, such as vibrational spectroscopy and solid-state nuclear magnetic resonance spectroscopy, can provide additional structural information complementary to that obtained by PXRD patterns, as in the case of N-(p-tolyl)-dodecylsulfonamide 65) and acetohexamide form B. 66)
Ab initio crystal structure determination
In more challenging cases, where suitable experimental data are not available, crystal structure determination can be guided by lattice energy calculations instead of powder pattern comparisons, to generate initial models for subsequent Rietveld refinement. Success using this technique is limited 67) and such ab initio prediction of crystal structures still remains an admirable long-term goal 68) because of the complexity of the task. 69) Such methods are still plagued by difficulties, including the location of global energy minima, force field accuracy, description of the electrostatic interactions, and inclusion of the entropy term that contributes to the free energy. The ability to determine the crystal structure from the molecular structure of a compound is the ultimate goal of computational crystallography.
Importance of studying the structure of pharmaceutical crystals
An organic molecule, which may be chiral or achiral or a pharmaceutical salt, can, in the crystalline state, exist as polymorphs and/or molecular adducts. Differences in crystal packing forces (i.e. intermolecular forces) lead to differences in long-range periodicity of the molecules. This difference in long-range periodicity in turn leads to differences in energy, and hence, differences in physical reactivity, between the various crystalline forms of the organic molecule. For conformationally flexible molecules, intramolecular forces may also contribute to such differences. The various solid forms of a drug substance can also differ in their chemical reactivity. 70) Hence, the physical (and also chemical) properties of the organic pharmaceutical crystal will depend on its solid form. Table 3 shows the differences in physical properties that can be shown by the different crystalline forms of an organic molecule.
These differences in physical properties arising from differences in structure of the various crystal forms of an organic pharmaceutical compound have implications for candidate selection in drug development, processing, formulation, and performance of drug products, regulatory aspects, and intellectual property issues. Differences in solubility and dissolution rate of the various crystal forms can affect drug performance, especially that of BCS Class II drugs. 71) Existence of different solid forms of a drug can also lead to phase transformations during its processing and formulation (e.g. milling, granulation, drying, compaction), 72,73) as observed in theophylline, 74) chlorpropamide, 75) carbamazepine, 76) phenobarbital, 77) lactose, 78) chlorpromazine hydrochloride, 79) the uricosuric agent FR76505, 80) cefixime trihydrate, 81) and pentamidine isethionate. 82) These phase changes can affect the stability of the product and, in some cases, even the bioavailability of the drug. 83) An understanding of the relationship between the solid state properties and crystal structures of the likely phases may be utilized for optimizing operational and formulation strategies and for designing suitable stability protocols to avoid later problems. 17,84) Additionally, each of the several pharmaceutical excipients utilized in drug formulations can also exist in different solid forms, and the solid nature of the excipients may also influence the final physical form of the tablet, 14,70) such as the tendency to stick, 85) or polymorphic conversion of the active ingredient. 86) Among chiral drugs, it is known that the pharmacological, toxicological, pharmacodynamic, and pharmacokinetic properties differ markedly between opposite enantiomers and racemates. 87) The molecular environments in each of these solids are unique and impart different physicochemical properties to the crystals. 27) Among pharmaceutical salts, which can also exist in different crystal forms, the presence of ions influences the physicochemical properties of the crystals, including solubility, dissolution rate, stability, and hygroscopicity. 26) The structure of a crystal also affects its mechanical properties, thereby affecting its processability. Thus, theophylline monohydrate, because of the greater number of intermolecular hydrogen bonds in its crystal structure, possesses higher mechanical strength and is also less brittle than anhydrous theophylline. 88) The presence of water molecules in the crystal structure of the monohydrate of 4-hydroxybenzoic acid facilitates its plastic deformation as compared with its anhydrate. 89) The term slip refers to the translational motion of lattice planes relative to each other. Such planes are termed slip planes and a family of slip planes, together with the slip direction, is termed a slip system. Knowledge of crystal structure and slip systems can be utilized to model the tableting and compaction behavior of molecular crystals such as the anhydrate and dihydrate forms of L-lysine hydrochloride 90) and sulphamerazine. 91) The Young's modulus of aspirin has been determined from its crystal structure 92) and the mechanical properties of aspirin, sulphathiazole, carbamazepine, and polymorphs of primidone (forms A and B) can be predicted 93) by applying lattice dynamics to the atom-atom potential model while also taking into account the crystal morphology.

Table 3. Physical properties that differ among the various crystalline forms of a drug substance 17)
Even though it is possible to predict the mechanical properties of a compound from its crystal structure, crystal engineering of pharmaceuticals has not yet resulted in the design of crystals with desired mechanical properties. By developing a molecular basis for the origins and magnitude of mechanical properties, the required controlled modification might be achieved by analysis of the structural and constituent molecular information. 94) The knowledge thus gained might be utilized to design crystals with desired mechanical properties. 95) Differences in the physical and mechanical properties of the various crystal forms of a drug substance also affect scale-up and transfer from laboratory quantities and procedures through pilot plant and full production, 91) as equipment changes, variations in heating/cooling rates, variations in stirring procedures, 96) and seeding 97) can all influence the result of a crystallization procedure and the solid form obtained. 14,73) Characterization and understanding of the crystal properties is also important for quality control and regulatory purposes. 98) Information about the various crystal forms of a drug substance is required by the United States Food and Drug Administration (USFDA) in a New Drug Application, and a set of decision trees has been provided to assist in the presentation of data for different crystal forms of a drug substance to the USFDA. 99) Guidelines have also been set up by the International Conference on Harmonisation to address the existence of different crystal forms of a drug substance. 100) Furthermore, the different crystal forms of a drug and processes for preparing them are patentable. 3,14) Among the frequently cited uses for patenting different crystal forms are improved formulation, handling and stability, reduced hygroscopicity, and improved solubility and bioavailability. 3)
Conclusions
Most marketed pharmaceuticals consist of molecular crystals. Selection of the most suitable crystalline form of a drug in the initial stages of drug development is crucial to save time and cost associated with the drug development process, and, in recent years, much research has focused on achieving this goal. Isolation and thorough characterization of the maximum number of solid forms of a drug substance reduce the possibility of surprises resulting from inadvertent phase changes during processing or from crystallization of previously unknown forms.
For a given crystalline drug substance, the intermolecular and intramolecular interactions in its lattice, manifested by the molecular arrangement, packing and conformation, determine its observed physicochemical properties, including mechanical properties, which may in turn impact even the pharmaceutical properties of the drug product. To better understand, control, and possibly predict these properties of pharmaceutical crystals, a thorough understanding of the underlying crystal and molecular structures of the maximum number of solid forms of a drug substance is desirable. While single crystal X-ray diffraction unambiguously establishes the crystal and molecular structure, the continuous development of sophisticated computational tools for the ultimate prediction of crystal and molecular structures provides valuable alternatives when single crystal diffraction is not successful.
The effects of subterranean estuary dynamics on nutrient resource ratio availability to microphytobenthos in a coastal lagoon
Highlights
• The pH of subterranean estuary (STE) outflows covaries with annual variability in continental recharge.
• Landward movement of the saltwater wedge toe drives pH higher in the STE, promoting nitrification and phosphate release.
• Groundwater recharge dynamics drive annual variability in nutrient composition ratio of submarine groundwater discharge (SGD).
• Coastal lagoon primary production is interconnected with the oscillation of the saltwater/freshwater interface in the STE.
• The continental groundwater recharge cycle regulates coastal lagoon benthic primary production dynamics.
Editor: Jurgen Mahlknecht
Keywords: Groundwater recharge dynamics; Subterranean estuary; Submarine groundwater discharge; Semi-arid climate; Coastal lagoon; Benthic primary production; Nutrients

Abstract

Causal links between subterranean estuary (STE) dynamics, their climatological drivers, and the ecology of coastal ecosystems have remained elusive. Yet, establishing these connections is essential for fully integrated management of coastal ecosystems. We test, in a semi-arid climate, whether the composition of submarine groundwater discharge (SGD) to a lagoon can be regulated by the annual oscillation of the local STE, itself driven by groundwater recharge variability. We study STE outflow samples gathered monthly for a year in the Ria Formosa lagoon, examining the temporal dynamics of salinity, E_H, pH (Total scale), dissolved oxygen and nutrient (PO4 3−, NO2 −, NO3 −, NH4 +, and Si(OH)4) concentrations under the local hydrological regime. The objectives were threefold: (1) to determine the annual variability of nutrient content and N:P:Si stoichiometry in SGD into the lagoon; (2) to identify the main drivers of variability in SGD composition and stoichiometry and their interactive effects; (3) to discuss links to, and implications for, ecosystem function that could help define expectations of cause-effect relationships and be useful for environmental management of the lagoon and similar systems elsewhere. We find that the terrestrial groundwater recharge cycle drives the expansion and contraction of the subterranean estuary on annual timescales, causing the pH of SGD to fluctuate in opposition to continental groundwater level. The annual dynamics of the STE and the resulting pH oscillation determine the annual variability of nutrient composition ratio in SGD and shape benthic primary production dynamics. When saltwater intrusion occurs, the pH within SGD increases, enhancing nitrification and desorption of exchangeable phosphorus, while silicate fluxes increase with seawater retreat. The result is that nutrient resource ratio availability for coastal primary production depends on the fresh groundwater level. This implies that ecosystem function in such systems is more tightly related to the dynamics of linked groundwater reservoirs than previously thought.

Science of the Total Environment 851 (2022) 157522
Introduction
Submarine Groundwater Discharge (SGD) is 'all flow of water on continental margins from the seabed to the coastal ocean, regardless of the fluid composition or driving force' (Burnett et al., 2003). It usually consists of a mixture of fresh groundwater and seawater recycled through permeable sediments (Moore, 1999, 2010), and often within subterranean estuaries (STEs) at the coast. In contrast to channeled surface inputs (e.g., rivers), outside of karst-dominated seaboards SGD occurs mainly as diffusive flow wherever a coastal permeable aquifer with a positive head relative to sea level connects to the ocean (Johannes, 1980). This makes SGD difficult to quantify as a nutrient transport vector, and early research largely ignored its contribution to marine biogeochemical budgets (Burnett et al., 2003). More recently, however, radium tracer studies have shown that SGD is the dominant water flux within coastal regions on a global scale (Kwon et al., 2014) and therefore has a considerable impact on coastal nutrient biogeochemistry worldwide (Zhou et al., 2019). This is because SGD acts as a large nutrient source at local to regional scale, particularly when the freshwater portion originates in contaminated aquifers (e.g., Cho et al., 2018; Rocha et al., 2015; Leote et al., 2008; Ullman et al., 2003).
Nutrient fluxes associated with SGD have been evaluated to date at >50 study sites spanning four continents. Yet, our understanding of the nutrient stoichiometry of SGD, its temporal variability and its controls is lacking, mostly because STEs are still poorly understood. More than the magnitude of SGD-borne nutrient inputs to the coastal ocean, it is important to understand whether these are sourced internally within the land-ocean continuum (mineralization of organic matter within the coastal aquifer or coastal system), or externally to it (e.g., contamination of aquifers by nitrogen fertilizers). Indeed, autochthonous (internal) nutrient loads barely influence multiannual trends of whole-system net primary production, but allochthonous (external) nutrient inputs cause net nutrient enrichment over time and drive 'new' primary production, causing eutrophication and environmental degradation (e.g., Howarth and Marino, 2006). Nutrient loading may also shift species composition (Sugimoto et al., 2017): if net ecosystem production, or its potential rate, is limited by access to nutrients, the temporal dynamics of nutrient ratio resource availability in coastal waters may be followed by an increase in biomass of opportunistic species that take advantage of shifts in relative nutrient concentrations (Lecher and Mackey, 2018; Rocha et al., 2002). These shifts might trigger harmful algal blooms (HABs) and benthic hypoxia caused by organic matter accumulation, which threaten ecosystem services and cause great economic loss (e.g., Alorda-Kleinglass et al., 2021).
Estimates of nutrient loading driven by SGD have often been obtained by extrapolating the composition of the terrestrial groundwater endmember to the outflowing SGD mixture (e.g., Jiang et al., 2021; Lee and Kim, 2007; Taniguchi et al., 2008; Wu et al., 2013). For non-conservative species, such as nitrate, this approach is difficult to justify. It is generally taken because nitrogen concentrations may be very significant in fresh groundwater due to anthropogenic pollution, hence posing a significant threat to coastal ecosystems (Bowen et al., 2007; Valiela et al., 1990; Zhang et al., 2020). However, in the 'real world', the composition of SGD is modulated by the biogeochemical history of the water flows, associated on the one hand with the transit path followed from land to the STE, and on the other with the biogeochemical reactions occurring within the STE, where fresh groundwater mixes with saltwater recycled through the coastal aquifer (Moore and Joye, 2021; Robinson et al., 2018; Santos et al., 2009a; Charette and Sholkovitz, 2006, 2002). How the combined effects of these factors regulate SGD composition and impact ecosystem metabolism has not been shown.
The Ria Formosa in Southern Portugal is a leaky lagoon system with persistent signals of functional eutrophication, the drivers of which are still debated after decades of study (Domingues et al., 2017;Barbosa, 2010;Newton et al., 2003). SGD within the system was first detected and described by Leote et al. (2008). Almost a decade later, Rocha et al. (2016) showed how SGD inputs to the lagoon could be quantitatively separated into fresh groundwater inputs and saline recirculation flows using a combination of environmental tracers, including stable isotopes in water (δ 2 H, δ 18 O), radon ( 222 Rn) and salinity. The study provided evidence for the episodic nature of fresh groundwater inputs into the lagoon, but showed that seawater recirculation occurs permanently and drives an estimated gross load of~350 Ton N y −1 into the system. Using direct discharge measurements, combined with 222 Rn budgets where endmember attribution was achieved by water stable isotope signatures, Rocha et al., 2016 also showed that fresh, land-borne SGD could add a further~61 Ton N y −1 to the lagoon and was therefore capable of driving new production and/or shifting species composition. Hugman et al. (2017) estimated similar nitrogen loading (350 Ton N y −1 ) into the lagoon by SGD using numerical flow and transport modeling.
Despite the large number of studies carried out all over the world, coastal primary production dynamics have yet to be mechanistically linked to the variability in annual SGD composition and its hydrological and biogeochemical drivers. This causal chain has been elusive partially because changeable mixing of marine and continental waters in the coastal aquifer generates significant biogeochemical dynamics in the STE, resulting in SGD with highly variable composition throughout any one year (Ibánhez and Rocha, 2017; Jiang et al., 2018; Ibánhez et al., 2021). However, SGD surveys generally follow 'seasonal' schedules (i.e., summer, winter), are biased toward the best weather periods, or focus on either the hydrology, the biogeochemistry, or the ecology, paired combinations of these, and very seldom on all. This results in a lack of sufficiently detailed datasets to tackle the problem at the necessary level of granularity. The extensive data collected at high temporal resolution covering both annual and interannual (2005-2007, 2009-2011) timescales therefore make the Ria Formosa an ideal case study to determine the functional drivers of nutrient stoichiometry in SGD.
Here, we take advantage of this globally unique dataset to test a hypothesis for the first time in the real world: Can the composition of submarine groundwater discharge (SGD) to a lagoon be regulated by the annual oscillation of the local STE, itself driven by groundwater recharge variability and, in turn, are the resultant compositional dynamics reflected in ecosystem function?
To test this hypothesis, we revisit our Ria Formosa dataset and first examine the temporal dynamics of salinity, E_H, pH, dissolved oxygen, nutrient concentrations (N, P, and Si) and stoichiometry in SGD flows; we then assess the annual variability of SGD composition and SGD-driven nutrient fluxes in the context of the local hydrological regime; finally, we compare these to synoptic benthic primary production measurements to ultimately understand the impact of SGD on primary production over the course of a year. Our objectives are: (1) to determine the annual variability of nutrient content and N:P:Si stoichiometry in SGD into the lagoon; (2) to identify the main drivers of variability in SGD composition and stoichiometry and their interactive effects; (3) to discuss links to, and implications for, ecosystem function that could help define expectations of cause-effect relationships between SGD and primary production dynamics for environmental management of the lagoon and similar systems elsewhere.
General characterization
The Ria Formosa (Fig. 1) extends from 36°58′N 8°20′W to 37°30′N 7°32′W in the south of Portugal. It is a shallow (average depth~2 m) coastal lagoon separated from the Atlantic Ocean by a chain of five barrier islands and two sandy spits. The system was formed by rapid infilling of palaeovalleys during the early Holocene marine transgression (10000-8000 cal yrs. BP), followed by back barrier infilling from 7250 BP (Sousa et al., 2019). The regional climate is semi-arid, with an average annual temperature of 17°C, cooler winters (mean 11°C) and warmer summers (mean 24°C). Mean annual precipitation is~480 mm, but the watershed (740 km 2 ) only receives effective precipitation of 152 mm year −1 due to high potential evapotranspiration (Salles, 2001). Except for the River Gilão ( Fig. 1), which intermittently discharges almost directly into the Atlantic at the eastern limits of the system, the other four minor rivers and fourteen streams discharging into the lagoon are ephemeral and dry out in spring/summer.
Hydrogeology
The lagoon is connected to the Campina de Faro coastal aquifer system (M12, c.f. Fig. 1, ~86.4 km2), composed of three distinct units (Almeida et al., 2000). The oldest and deepest is a Cretaceous aquifer, formed by limestone, marls and dolomites resting on Jurassic sedimentary formations. A sandy limestone Miocene aquifer overlays this Cretaceous unit, thickening from north to south and reaching a depth of >200 m close to the littoral zone (Geirnaert et al., 1982). On the surface lies an unconfined (phreatic) aquifer composed of detritic sediments (fine sands), sandy limestones and sandstones from the Miocene, Pliocene sands and gravels, alluvial deposits, and sand dunes from the Quaternary. These deposits reach an average depth of 20-30 m across the system (Almeida and Silva, 1987), with a maximum thickness of ~60 m in some areas north of Faro, and connect with modern sand dunes and alluvial deposits within the lagoon (Silva, 1988). The northern limit of the aquifer system lies against outcrops of permeable and dolomitic limestones of the Jurassic and Cretaceous formations. The maximum length (measured N-S) is ~6.6 km, found north of our seepage meter deployment station (Fig. 1), while the maximum width (E-W) is 24 km. Recharge occurs directly by precipitation (Almeida et al., 2000; Silva et al., 1986). Infiltration may occur over the land surface or through the Jurassic and Cretaceous outcrops at the northern limit of the aquifer unit (Almeida and Silva, 1987). Engelen and van Beers (1986) propose that the intermediate Miocene unit is confined and discharges into the Atlantic Ocean, bypassing the lagoon. There is substantial evidence, however, that it is hydraulically connected to the surface unconfined aquifer. This connection is supported by co-contamination of the deeper aquifer by nitrate (NO3−) pollution derived from ammonium sulphate fertilizers employed in agriculture on the coastal plain (Almeida and Silva, 1987), and by stable isotope signatures of groundwater taken from both units.

Fig. 1. Location of the sampling points within the wider geographical setting of the Ria Formosa lagoon (top inset), including meteorological stations from which data were gathered for this study, SNIRH stations São Brás de Alportel (31 J/01C), Estói (31 J/04UG) and Quelfes (31 K/02UG); the lower inset shows the seepage meter array site on the Ancão peninsula, SNIRH monitoring wells and boreholes Ludo (610/167), Vale das Almas (610/6), Areal Gordo (611/217) and Horta do Barrote/Pechão (611/91) screened in the unconfined Campina de Faro aquifer system (M12), as well as the site of monthly microphytobenthos (MPB) Chl a measurement by Brito et al. (2009).
SGD to the Ria Formosa
SGD comprising significant freshwater contributions was first detected in the Ria Formosa in 2006-2007 and described as a potentially important source of nutrients, in particular nitrogen, to the lagoon (Leote et al., 2008). Lobo-Ferreira et al. (2007) Brzezinski, 1985;Brzezinski et al., 2003).
Sampling and chemical analysis
Monthly sampling campaigns were conducted on a beach at the inner side of the Ancão peninsula throughout 2006 (Fig. 1). The tide is semidiurnal, with average amplitudes of 2.8 m for spring tides and 1.3 m for neap tides. The local sediment is classified as medium sand, with a median grain size (d50) of 0.44 mm, a clay+silt content ranging from 0.8 % to 1.6 %, mean porosity of 0.35, and a hydraulic conductivity of 5.5 × 10−3 cm s−1, or ~4.8 m d−1 (Rocha et al., 2009). Details on the sampling strategy, methods, and nature of the data gathered during these field surveys may be found in Leote et al. (2008), Rocha et al. (2009) and Ibánhez et al. (2011, 2013). Briefly, four to seven seepage meters (Lee, 1977) with attached water collection bags (two per meter) were deployed along the beach profile each month at spring low tide and allowed to settle in for 24 h prior to sampling. Water samples were then collected at subhourly intervals, following the precautions listed in Cable et al. (1997). Sampling covered a minimum of two and a maximum of four successive tidal cycles every month. Discharge rates measured in situ (annual range: 1.7-190.6 cm d−1) were intrinsically related to tidal height, with peak discharge observed at low tide (Leote et al., 2008; Rocha et al., 2009).
Seepage water was analysed in situ for salinity, oxidation-reduction potential (4 M Ag/AgCl reference electrode), oxygen concentration, and pH with an YSI 600 multi-parameter probe (Yellow Springs Instruments®). Probe sensors were individually calibrated following manufacturer instructions prior to and again after each survey to check for potential drift. For nutrient analysis, water samples were filtered through polyethersulphone (PES) membrane filters (Rhizon SMS, Eijkelkamp Agriresearch Equipment®; pore diameter of 0.1 μm) directly into non-additivated sterile vacutainers. The combination of ultra-filtration through a hydrophilic membrane directly into a sterile environment in isolation from the atmosphere and environmental contamination vectors preserves nutrient concentrations with a high degree of accuracy, while substantially mitigating the risk of cross contamination or bacterial adulteration of the analytes (Seeberg-Elverfeldt et al., 2005; Jiang et al., 2017). The PES samplers were intensively flushed with sample water prior to the collection of the filtrate to avoid NH4+ sorption artifacts, as described in Ibánhez and Rocha (2014). Concentrations of nitrite (NO2−), ammonium (NH4+), SRP and DRSi were determined following the spectrophotometric methods described by Grasshoff et al. (1999) within 48 h of collection. The NO3− concentration was quantified by the cadmium reduction approach of Jones (1984). The method detection limits (MDL, in μM) for the determination of each analyte, calculated following IUPAC recommendations (Analytical Methods Committee, 1987; Long and Winefordner, 1983), were 0.08 (…). Measurements of ORP and pH (NBS) were temperature compensated. For each sample, the pH measured in situ (NBS) was corrected analytically for ionic strength, using the Davies extension of Debye-Hückel theory on the behaviour of electrolyte solutions (Davies, 1938). Results are expressed in the total hydrogen ion concentration scale (Dickson, 1993). The median of each monthly set of values measured on separate SGD samples (26 < n < 55) was used to characterize the central tendency of the distribution of pH, O2 concentration and ORP measured in SGD each month, and the median absolute deviation (MAD) employed as a robust estimator of associated uncertainty (Hampel, 1974).
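As an illustration of the ionic-strength correction step only, the minimal sketch below (not the authors' code) computes a Davies activity coefficient for H+; the salinity-to-ionic-strength relation and the 25 °C constant are assumptions on our part, and the full conversion to the total hydrogen ion scale used in the study involves additional terms (e.g., sulfate association) that are not reproduced here.

```python
# Minimal sketch of an ionic-strength correction for pH (illustrative assumptions:
# ionic strength estimated from salinity as I = 19.924*S/(1000 - 1.005*S) and the
# Davies equation at 25 degC with A = 0.509; the paper's full conversion to the
# total hydrogen ion scale involves further steps not reproduced here).
import math

def ionic_strength(salinity):
    """Approximate ionic strength (mol/kg) of a seawater-like sample from salinity."""
    return 19.924 * salinity / (1000.0 - 1.005 * salinity)

def davies_log_gamma(charge, ionic_str, a_const=0.509):
    """Davies extension of Debye-Hueckel theory: log10 of the activity coefficient."""
    root_i = math.sqrt(ionic_str)
    return -a_const * charge ** 2 * (root_i / (1.0 + root_i) - 0.3 * ionic_str)

def ph_concentration_scale(ph_nbs, salinity):
    """Shift an activity-based pH (NBS) onto a hydrogen-ion concentration scale."""
    log_gamma_h = davies_log_gamma(1, ionic_strength(salinity))
    # [H+] = a_H / gamma_H, so -log10[H+] = pH_NBS + log10(gamma_H)
    return ph_nbs + log_gamma_h
```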
Calculation of SGD-driven nutrient fluxes
SGD-driven nutrient fluxes were quantified by multiplying the nutrient content of water collected from the seepage meters by concurrent measurements of volumetric discharge. For each seepage meter, the nutrient flux (F, mmol m−2 d−1) was calculated as:

F = (1/Nu) · Σ_{i=1..Nu} (C_i · V_i) / (T_i · A)    (1)

where Nu is the total number of samples collected from each seepage meter deployed in any monthly field campaign, C_i is the nutrient concentration in a specific seepage water sample, V_i is the volume of the seepage water sample, T_i is the sample collection timespan for each volume and A the surface area covered by the seepage meter. Subsequently, nutrient fluxes derived from different seepage meters located at identical distances from the low-water mark (LWM) along the cross-shore transect were averaged. The standard deviation of the average was employed to perform end-to-end uncertainty analysis, carried out following general rules for the propagation of unknown uncertainties, taken here as the standard deviation (±1σ) of an estimated mean value (Taylor, 1997). The total daily SGD nutrient flux (mmol m−1 d−1) was then obtained by spatial integration of the mean nutrient fluxes derived as above for different positions with respect to the LWM along the cross-shore transect. These were then normalized to the length scale covered by the seepage meter array to obtain areal fluxes (mmol m−2 d−1). The fresh and saline volumetric fractions of SGD (fSGD and sSGD, respectively) were estimated using a linear salinity-mixing model. In the breakdown of seepage into fSGD and sSGD, the seawater salinity end member was taken as the average of the salinities above the 97.5th percentile measured in outflow, rather than the highest salinity observed within the chambers during deployment, to avoid an artificial 'freshening' of the fractional composition caused by the effect of evaporation on porewater salinity (Geng et al., 2016; Santos et al., 2009b).
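A minimal sketch of the per-meter flux calculation (Eq. (1) as reconstructed above) and the two-endmember salinity unmixing is given below; the numbers in the usage example are hypothetical, and the routine ignores the subsequent cross-shore averaging and spatial integration steps.

```python
# Sketch of the per-meter flux (reconstructed Eq. (1)) and the linear salinity
# unmixing into fresh and saline SGD fractions; all numbers below are hypothetical.
from statistics import mean, stdev

def seepage_flux(samples, area_m2):
    """Mean nutrient flux (mmol m-2 d-1) for one seepage meter.
    samples: list of (C_i in mmol/L, V_i in L, T_i in days) tuples."""
    inst = [(c * v) / (t * area_m2) for c, v, t in samples]
    return mean(inst), (stdev(inst) if len(inst) > 1 else 0.0)

def fresh_fraction(s_sample, s_sea, s_fresh=0.0):
    """Two-endmember mixing: volumetric fraction of fresh groundwater in SGD."""
    return (s_sea - s_sample) / (s_sea - s_fresh)

# Example: three bag collections from one meter with a 0.25 m2 footprint
flux_mean, flux_sd = seepage_flux(
    [(0.05, 1.2, 0.02), (0.04, 0.9, 0.02), (0.06, 1.5, 0.02)], 0.25)
f_fresh = fresh_fraction(s_sample=28.0, s_sea=36.1)
```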
Groundwater recharge and inland water table fluctuations
Meteorological data for the region and water table levels for the superficial unconfined aquifer of the M12 system were taken from the online database of the Portuguese National Water Resources Information System (SNIRH). Assessing the accuracy of any recharge quantification method is difficult. Because we sought an indicator of actual recharge throughout the year, and not a precise determination, we prudently estimated 'potential' aquifer recharge rates (R_P) by following two different approaches. In the first, we used monthly water table heights for each borehole to estimate potential recharge by the water table fluctuation (WTF) method (Healy and Cook, 2002), as:

R_P = Sy · (dh/dt)    (2)

Here, Sy is the specific yield for the aquifer, and dh/dt the temporal variation between water table height at the high points during the year and the antecedent height, obtained by extrapolation of the recession curve. For this study, and since the Campina de Faro aquifer is mainly composed of detritic sediments, including fine sands, we took the specific yield Sy to be constant during the year and equal to 0.25 (Johnson, 1967). The resultant potential recharge rates are plotted in Fig. 2 for the period 2005-2008. In parallel, potential monthly recharge for the year of study (2006) was estimated from the difference between the mean monthly precipitation estimated from the three meteorological stations covering the coastal plain (Fig. 1) and local potential evapotranspiration estimated according to Thornthwaite's formula (Thornthwaite and Mather, 1957). The annual variation of potential recharge obtained this way compared favourably with the long-term annual monthly means (Fig. 3a), determined from concurrent data of precipitation and pan evaporation measured from 1980 to 2000 in São Brás de Alportel (31 J/01C).
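The two potential-recharge estimates can be sketched as follows; this is illustrative only, and the day-length/latitude correction normally applied to Thornthwaite's formula is omitted here.

```python
# Sketch of the two potential-recharge estimates described above; illustrative only.
def wtf_recharge(specific_yield, dh_m):
    """Water-table fluctuation method (Eq. (2)): potential recharge (m) for a
    water-table rise dh_m above the extrapolated recession curve."""
    return specific_yield * dh_m

def thornthwaite_pet(monthly_temp_c):
    """Thornthwaite potential evapotranspiration (mm/month) from 12 mean monthly
    temperatures; the day-length/latitude correction factor is omitted here."""
    heat_index = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_temp_c)
    a = (6.75e-7 * heat_index ** 3 - 7.71e-5 * heat_index ** 2
         + 1.7912e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a for t in monthly_temp_c]

def effective_precipitation(precip_mm, pet_mm):
    """Monthly potential recharge taken as the difference P - ET (mm/month)."""
    return [p - e for p, e in zip(precip_mm, pet_mm)]
```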
Transgressive oscillation of the STE
The annual variability of SGD rates (fresh and saline) at the STE mouth depends directly on the change in hydraulic head driving land-sea water flow and indirectly on the dynamics of the STE (Michael et al., 2005;Robinson et al., 2006). Here, we describe the amplitude of oscillations of the STE throughout the year with reference to the toe of the saltwater wedge, employing the Glover solution to delimit its position (Glover, 1959). This assumes a sharp saltwater-freshwater interface, dynamic equilibrium between fresh and salty groundwater following the Ghyben-Herzberg relationship, a stagnant saltwater mass, isotropic media and an aquifer body that is confined at the top and of infinite depth. These assumptions do not describe our aquifer closely, since it is unconfined, has average thickness of~30 m and sits on an impermeable layer (see Section 2.2). Hence, we follow the re-interpretation of the Glover solution for the case of an unconfined surface with a positive freshwater head gradient toward the sea, a horizontal seepage face and an impermeable layer at depth that intersects the saltwater-freshwater interface to form a saltwater wedge that extends some distance inland (Cheng and Ouazar, 1999). This approach finds only small discrepancies between the saltwater toe positioning given by the Glover formulae for the two conceptual models of aquifer.
Because the flow is unconfined, and the slope of the water table is small (in our case L, the cross-section length of the aquifer, is measured in kilometres for an average aquifer depth, D, of just 30 m, implying L >> D), we may take the Dupuit-Forchheimer approximation of Darcy's law (Dupuit, 1863; Forchheimer, 1930) to simplify by one dimension and treat SGD as a 2D flow problem. This approximation relies on the following assumptions:
a) Flow is horizontal over any vertical cross-section of the aquifer.
b) The velocity is constant over the depth of the aquifer.
c) To calculate the velocity, we employ the slope of the free surface (dh/dL) as the hydraulic gradient.
d) The slope of the water table is relatively small, i.e., Δh << L. This is reasonable to accept for our case, since the amplitude of water table fluctuation ranges from 0.29 m to 1.87 m (see Table 1) for a cross-sectional length of the aquifer ranging between 2300 and 12,000 m, respectively (see Fig. 1).
The formulation of Darcy's law with the Dupuit-Forchheimer approximations to calculate the mean discharge to the sea per unit length of aquifer shoreline, Q_fw (m3 m−1 d−1), for a cross-section length of the aquifer L and a mean saturated aquifer depth D is (e.g., Cushman and Tartakovsky, 2016):

Q_fw = K · D · (dh/dL)    (3)

where K is the hydraulic conductivity (4.8 m d−1), the mean depth D of the aquifer is 30 m and dh/dL is the hydraulic gradient (m m−1). For the latter, we are interested in the component of the flow vector directed southward (Fig. 1), so we use the north-south distance between each of the four piezometric stations and our seepage meter array location for L (note that L is measured landward from the point of seepage (x = 0) to the piezometric stations inland (x = −2300 to x = −12,000 m), so that L < 0 and there is no negative sign appearing in Eq. (3)), while dh is taken monthly from the piezometric time series at each station. The average fresh groundwater discharge from the M12 unconfined aquifer to the lagoon calculated this way was 0.126 ± 0.017 m3 m−1 d−1 for 2006, comparing very favourably to direct measurement by our seepage meter array (0.115 ± 0.079 m3 m−1 d−1). We are therefore confident that the saltwater-freshwater interface position in this aquifer is quite well described by the variation on the Glover approach developed by Cheng and Ouazar (1999) and that our seepage meter array produces flow magnitude data that are consistent with theoretical predictions based on continental water table fluctuations. In order to delineate the position of the saltwater wedge toe, and hence the STE, we first calculate the horizontal gap through which freshwater flows through the seafloor to the coastal zone, S_g,fw, as:

S_g,fw = Q_fw / (2 · γ · K)    (4)

where γ is the excess of the specific gravity of seawater over that of freshwater, given by (ρ_sw − ρ_fw)/ρ_sw, where ρ_fw and ρ_sw are the densities of freshwater and saltwater, respectively taken as 1000 and 1025 kg m−3. The position of the saltwater wedge toe, Sw_L, can then be estimated to be at a distance from the coastal boundary (positive inland) where the freshwater seepage gap develops (L = 0) as:

Sw_L = (γ · K · D²) / (2 · Q_fw)    (5)

We then calculate the monthly position of the saltwater wedge toe in relation to all four piezometers and express it with reference to its location when the water table height measured at each piezometer is equal to the annual mean. This method yields a curve that illustrates the annual oscillation of the saltwater wedge around a reference point for every piezometer site, with positive values indicating saltwater intrusion and negative values indicating saltwater retreat. We then normalize the extension of that oscillation to the largest departure in length, either inland or seaward, as measured from the central reference point for each annual curve. The result yields normalized intrusion and recession curves oscillating between a maximum of +1 (inland) and a minimum of −1 (seaward), which can be compared to water table height oscillations measured at the respective piezometer sites treated in the same way (Fig. 3c).
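The wedge-toe bookkeeping can be condensed into a few lines, as in the sketch below; note that the forms of Eqs. (4) and (5) follow the standard Glover solution as reconstructed above, so they should be read as an assumption rather than a verbatim transcription, while the constants are those quoted in the text.

```python
# Sketch of the wedge-toe calculation (reconstructed Eqs. (3)-(5)) and the
# normalization of its annual oscillation; constants are those quoted in the text.
GAMMA = (1025.0 - 1000.0) / 1025.0   # excess specific gravity of seawater
K = 4.8                              # hydraulic conductivity (m/d)
D = 30.0                             # mean saturated aquifer depth (m)

def q_freshwater(dh_m, distance_m):
    """Dupuit-Forchheimer discharge per unit shoreline length (m3 m-1 d-1), Eq. (3)."""
    return K * D * (dh_m / distance_m)

def seepage_gap(q_fw):
    """Width of the freshwater seepage gap at the seafloor (m), Eq. (4)."""
    return q_fw / (2.0 * GAMMA * K)

def wedge_toe(q_fw):
    """Inland distance of the saltwater wedge toe from the coastal boundary (m), Eq. (5)."""
    return GAMMA * K * D ** 2 / (2.0 * q_fw)

def normalized_oscillation(monthly_toe, reference_toe):
    """Departures of the monthly toe position from the reference (mean water table)
    position, scaled to the largest excursion so the curve spans -1 (seaward
    retreat) to +1 (landward intrusion)."""
    departures = [t - reference_toe for t in monthly_toe]
    scale = max(abs(d) for d in departures) or 1.0
    return [d / scale for d in departures]
```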
Spatial heterogeneity in STE outflow composition
High spatial and temporal variability of seepage outflow rate and composition determinations are to be expected when SGD is monitored by seepage meters (Taniguchi et al., 2019). Out of an abundance of caution, we therefore approach the treatment of nutrient composition in the STE outflow for the purpose of studying relationships with primary production in two different ways. Apart from the monthly nutrient fluxes, which provide an integrative measure of the spatial and temporal variability of individual measurements, we also evaluate the nutrient stoichiometry of monthly sample sets using frequency distribution analysis. This approach allows us to assess whether the uniformization of data resulting from the spatial and temporal integration process necessary to obtain monthly fluxes does not obscure natural variability patterns.
To this end, we investigated the frequency distribution of samples in which any of the nutrient species was impoverished in relative terms ('nutrient-limited', as it were), i.e., with a concentration below the R-B ratio, and how the frequency of this scenario varied with the fraction of freshwater contained in SGD along the year. To do so, the salinity range covered by the water sample pool (n = 391) was first divided into classes (value intervals), with a minimum of 20 samples per class. This ensured uniformity of the distribution, balancing the requirements of both interval quantity and sample size within each range for subsequent frequency analysis. For each class, the relative frequency with which an individual element (N, P, Si) fell below the idealized R-B ratio (N:P = 15, Si:N = 1, and P:Si (or N) = 0.07) was calculated by:

F_N = N_N / (N_N + N_Si + N_P)    (6)

where F_N is the relative frequency of a putative nutrient-poor sample occurring within a salinity range-defined class, and N_N, N_Si and N_P represent the frequency with which N-poor, Si-poor and P-poor samples were found within the same salinity range. The same process was followed in turn for Si and P. Subsequently, the discrete frequencies of relative nutrient depletion found for each salinity interval were fitted by Dirichlet regression (Aitchison, 1982) to model the frequency of nutrient-depleted samples per chemical element over the entire salinity range covered by the sample pool. To fit the data, a system of quadratic regression equations, defined as follows, was applied:

F_N^S = A_N · S² + B_N · S + C_N
F_DSi^S = A_Si · S² + B_Si · S + C_Si    (7)
F_P^S = A_P · S² + B_P · S + C_P

where F_N^S, F_DSi^S, and F_P^S are the model-fitted frequencies for N, Si and P being the limiting nutrient, S is salinity, and A, B, and C are parameters generated by the best fit of the quadratic regression system to the data. The system of equations was solved by the approach minimizing the standard error (Nelder and Mead, 1965) in MATLAB®.
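The classification-and-fit step can be prototyped as below; this is a simplified stand-in for the Dirichlet regression and Nelder-Mead minimization performed in MATLAB®, using an ordinary least-squares quadratic per element instead, and the example class frequencies are hypothetical.

```python
# Prototype of the salinity-class frequency analysis (Eq. (6)) and a simple
# quadratic fit per element, used here as a stand-in for the Dirichlet regression.
import numpy as np

def relative_frequencies(n_poor, si_poor, p_poor):
    """Eq. (6): within one salinity class, the fraction of samples in which N, Si
    or P fell below the Redfield-Brzezinski ratio."""
    total = n_poor + si_poor + p_poor
    return n_poor / total, si_poor / total, p_poor / total

def fit_quadratic(class_salinity, class_frequency):
    """Least-squares fit of F(S) = A*S**2 + B*S + C to the class frequencies for
    one element; returns the coefficients [A, B, C] (cf. Eq. (7))."""
    return np.polyfit(class_salinity, class_frequency, deg=2)

# Example with hypothetical class mid-point salinities and N-limitation frequencies
salinity_mid = np.array([5.0, 12.0, 20.0, 28.0, 34.0])
freq_n = np.array([0.10, 0.18, 0.30, 0.45, 0.55])
coeff_n = fit_quadratic(salinity_mid, freq_n)
```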
Microphytobenthos production (as chlorophyll a)
Benthic chlorophyll (Chl) a was measured during 2006 close to our seepage meter array on the same sand spit (Fig. 1) by Brito et al. (2009). To evaluate the possible link between monthly SGD-driven nutrient fluxes and microphytobenthos (MPB) production, we used logistic regression (Verhulst, 1838, in Cramer, 2002). We model the measured MPB (in μg Chl a g−1 sediment) as a function of the monthly SGD nutrient flux (F), using the logistic growth expression

MPB(F) = MPB(L) / (1 + b · e^(−k · F))    (8)

where MPB(L) is the carrying capacity, k the rate of growth, and b a constant. Boundary conditions were MPB = MPB(L) when F → ∞ and MPB = MPB(L_0) when F = 0. The model was fit to MPB data as a function of monthly SGD-driven nutrient fluxes (DIN, SRP and DRSi) determined in our study by minimizing the sum of squares of differences between model output and the data of Brito et al. (2009) while varying k, using the Microsoft™ Solver routine in Excel. Initial and carrying capacity MPB values (MPB(L_0) and MPB(L)) were first fixed to the annual minimum and maximum MPB found on site by Brito et al. (2009), of ~2 and ~20 μg Chl a g−1, until k was found, and then allowed to fluctuate within the bounds established by the standard deviation of the mean of determinations throughout the year (±10 %). Goodness of fit was assessed by regression analysis comparing model outputs and MPB data at the 95 % confidence level.
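For readers who prefer an open toolchain to the Excel Solver routine, an equivalent fit can be sketched with SciPy; the flux and Chl a arrays below are hypothetical placeholders, not the measurements of Brito et al. (2009).

```python
# Equivalent logistic fit with SciPy instead of the Excel Solver; the arrays are
# hypothetical placeholders, not the measurements of Brito et al. (2009).
import numpy as np
from scipy.optimize import curve_fit

def logistic_mpb(flux, mpb_max, k, b):
    """Reconstructed Eq. (8): MPB(F) = MPB(L) / (1 + b * exp(-k * F))."""
    return mpb_max / (1.0 + b * np.exp(-k * flux))

flux = np.array([1.0, 2.5, 4.0, 6.0, 9.0, 12.0])   # monthly SGD nutrient flux (e.g. mmol m-2 d-1)
mpb = np.array([2.5, 4.0, 7.0, 12.0, 17.0, 19.0])  # benthic Chl a (ug g-1 sediment)

# Initial guesses: carrying capacity ~20, growth rate ~0.5, b from MPB(L)/MPB(L0) - 1
popt, pcov = curve_fit(logistic_mpb, flux, mpb, p0=[20.0, 0.5, 9.0])
mpb_max, k, b = popt
mpb_initial = mpb_max / (1.0 + b)   # value of the model at F = 0, i.e. MPB(L0)
```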
Precipitation, recharge, and STE oscillation
The twenty year-long datasets collected at S. Brás de Alportel (31 J/01C) suggest monthly precipitation over the Ria Formosa peaks in December (20-year mean of 211 ± 194 mm) or November (idem, 149 ± 132 mm) and is lowest in August (2.4 ± 3.3 mm). The long-term average pan evaporation rates peak in July (257 ± 25 mm) and are lowest in January (45.7 ± 11.9 mm). During the four-year period that frames our SGD dataset (2005-2008), the average precipitation recorded by the three meteorological stations in the watershed (Fig. 1) (…) (Fig. 2). Recharge estimated by the water table fluctuation method followed the annual precipitation pattern (Fig. 2), but the more inland stations (610/167 Ludo and 611/91 Horta do Barrote/Pechão) registered higher-amplitude fluctuations in water level (Table 1) than the southernmost stations (610/6 and 611/217). This is reflected, for example, in the recharge rates calculated for November 2006, which reached 38 and 53 cm month−1 respectively at 610/167 and 611/91, while only 7.8 and 2.8 cm month−1 were found respectively for 610/6 and 611/217. The closer proximity of 610/167 and 611/91 to the boundaries of adjacent regional aquifers (M11, Quinta João de Ourém, and M10, São João da Venda/Quelfes) suggests that recharge of the aquifer may also occur by inflows from these, as well as through more rapid infiltration through the Jurassic and Cretaceous outcrops at its northern limit, as suggested by Almeida and Silva (1987). Overall, the data support a north-south direction of groundwater flow and fast recharge of the M12 aquifer during the wetter winter months (November-January) followed by a slower fall of the water table throughout the spring-summer period, up to October (Fig. 2).

Fig. 3. (a) Annual variability of potential recharge estimated as the monthly effective precipitation, taken as the difference between precipitation (P) and pan evaporation (E) or evapotranspiration (ET). The dashed curve shows the 20-year historical means (± S.D.), obtained from the mean monthly differences between precipitation (P) and pan evaporation (E) measured at SNIRH station São Brás de Alportel (31 J/01C) for the period 1980-2000. The solid curve shows the regional P-ET estimate for 2006, with precipitation taken as the average of monthly rainfall (± S.D.) measured at SNIRH stations São Brás de Alportel (31 J/01C), Estói (31 J/04UG) and Quelfes (31 K/02UG) and evapotranspiration estimated monthly using Thornthwaite's expressions (Thornthwaite and Mather, 1957). (b) Time series analysis of the water level in the four wells screened in the Campina de Faro surface aquifer for the period January 2005 to December 2008, revealing the degree of correlation between lagged monthly water table height for the four-year time series of piezometric head in all stations, using an autocorrelation function (ACF) model. Highly significant (p < 0.001) correlations between water table heights within the aquifer are found at seasonal lags of 6 and 12 months, with negative coefficient for the former and positive for the latter. (c) Annual (2006) (…).
The comparison between historical P-ET data obtained in S. Brás de Alportel with that obtained for 2006 using Thornthwaite's expression for evapotranspiration (Fig. 3a) suggests that, normally, direct recharge by infiltration through the surface soil would only reliably occur during the first and last two months of the year (January and February, November and December). However, in 2006 this period of effective precipitation could include March and October, while December was quite dry (~30 mm) in historical terms. Apart from establishing historical relevance, though, we needed to investigate whether the behaviour of the water table during 2006 was representative of the normal aquifer dynamic, so that we could speak of seasonality of STE oscillation rather than random variability. To establish this point, we measured the correlation between lagged monthly water table heights for the four-year time series (2005-2008) of piezometric head in all stations, using an autocorrelation function (ACF) model (Fig. 3b). Highly significant (p < 0.001) correlations between water table heights within the aquifer are found at seasonal lags of 6 and 12 months, with negative coefficient for the former and positive for the latter (Fig. 3b). This shows that the groundwater table falls to its lowest levels in June/July and rises to its highest in November/December every year. This dynamic is consistent with the potential recharge data (Figs. 2, 3a) and indicates a strong water balance cyclicity dictated by the seasonality of precipitation over the basin. This strong seasonal signal in water table fluctuation implies that the saltwater-freshwater interface also shifts, expanding and contracting within the coastal aquifer, establishing a cyclic dynamic for the STE (Fig. 3c). During 2006, this oscillation resulted in a phase of saltwater wedge retreat (SWR) seaward until March/April, coinciding with freshwater flushing of the coastal aquifer borne from high recharge. This phase was followed by landward saltwater intrusion (SWI) until October, driven by the continuous drop in water table height, before the SWR phase restarted in November (Fig. 3c). The oscillating behaviour of the saltwater wedge driven by the annual continental recharge pattern can perhaps be better visualized in Fig. 3d. The plot shows how the transgressive migration of the STE occurs on an annual cycle (an oscillation) that is consistent both with historical data on effective precipitation and with P-ET estimates for the year 2006.
Saltwater wedge oscillation and pH of STE outflows
The transgressive movement of the STE through the two phases (retreat and intrusion; expansion and contraction) is reflected in the dynamics of pH measured in STE outflow at our seepage meter array (Fig. 4). Monthly effective precipitation correlates well with monthly pH measured in STE outflow, with higher pH observed during the SWI phase and lower pH during the SWR stage, whether historical (Fig. 4a) or contemporary (Fig. 4b) data for effective precipitation are used. During the period in which the saltwater toe is located seaward of the fulcrum point (January-June, Fig. 3c, d), the STE outflow pH follows a different dependence on effective precipitation than that observed during the SWI period (July-December), except for the historically anomalous December of 2006 (Fig. 4a). However, this dichotomous dependence of pH on effective precipitation is re-established if we compare the 2006 pH data with P-ET estimates for the same year (Fig. 4b). In addition, the antithetical behaviour of STE outflow pH and continental recharge is quite clear when both are plotted monthly for 2006 (Fig. 4c, inset). Combined, our observations suggest a dynamic, but strong, connection between coastal aquifer recharge (inferred from effective precipitation) on the coastal plain and the chemistry of the STE outflows, over yearly time scales.
The cyclical nature of the annual variability of median pH in STE outflows is illustrated as a function of its transgressive oscillation in Fig. 5a. Note that the sudden increase of pH in December is linked to a fall in effective precipitation, and therefore recharge, for that month by comparison to historical levels (Fig. 4b), implying that pH increases accompany the landward excursion of the STE while the opposite occurs during saltwater wedge retreat. However, the same dynamic is not reflected in the redox potential (EH). This has important geochemical consequences. Fig. 5b shows the Pourbaix diagram for iron in our system, with the pH-pE pairs (median ± MAD of all our monthly data) superimposed. The results highlight a linear dependence of redox potential on pH, with higher pE/pH ratios measured during the freshwater flushing stage of the STE excursion (SWR stage, Fig. 3c, d; Fig. 4), but indicate that the pE of STE outflows changes little throughout the year. Assuming rapid equilibration of the iron system, particularly with regard to the stability of amorphous iron hydroxides, the diagram also indicates that within the STE it is very unlikely that P would be in free-solute form, given the availability of iron adsorbent phases, with the possible exception of January, when the median pH of the solution drops below the conditions for thermodynamic stability of Fe(OH)3 and favours the predominance of Fe in its reduced form (Fe2+). However, this is also when the mean redox potential of the outflow is at its highest, which is consistent with oxygen data showing that oxygenation levels in STE outflows range between 81.7 and 36.4 (±3) percent saturation, measured in January and December respectively. Regardless, the pH, but not the EH, of the STE outflow is affected by continental recharge cycles, via the effect they have on the dynamics of the saltwater-freshwater interface within the coastal aquifer (Fig. 5a).
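As a rough illustration of how the monthly pE-pH pairs of Fig. 5b can be screened against a mineral stability boundary, the snippet below converts measured EH (volts vs. SHE) to pE at 25 °C and compares it with an Fe(OH)3(am)/Fe2+ equilibrium line. The equilibrium constant and the example EH/pH values are illustrative assumptions, not the constants used to draw Fig. 5b (only the 10−9 M dissolved Fe level is taken from the figure caption).

```python
"""Screen pE-pH pairs against an Fe(OH)3(am)/Fe2+ stability line (illustrative sketch)."""
import numpy as np

def pe_from_eh(eh_volts):
    # pE = F*EH / (2.303*R*T); at 25 degC this reduces to EH (V) / 0.05916
    return np.asarray(eh_volts) / 0.05916

def fe_boundary_pe(ph, log_k=16.0, fe2_molar=1e-9):
    # Fe(OH)3(am) + 3 H+ + e- = Fe2+ + 3 H2O  ->  pE = log K - 3*pH - log[Fe2+]
    # log_k here is an illustrative value only; the published diagram should be consulted.
    return log_k - 3.0 * np.asarray(ph) - np.log10(fe2_molar)

# Hypothetical monthly medians (volts vs SHE, pH units) -- placeholders, not the data set.
eh = np.array([0.35, 0.30, 0.28])
ph = np.array([6.9, 7.6, 8.5])
pe = pe_from_eh(eh)
stable = pe > fe_boundary_pe(ph)   # True -> amorphous Fe(OH)3 predicted stable, P likely sorbed
print(np.round(pe, 1), stable)
```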
Magnitude and variability of SGD rates in 2006
Total SGD (tSGD = saline + fresh groundwater discharge) rates had a bimodal dependence on precipitation throughout the year (Fig. 6a). Perhaps counterintuitively, the highest tSGD rates during 2006 (505 ± 157 L m−2 d−1) were measured in July, for a monthly precipitation of 0.9 mm. Generally, tSGD rates decreased from the driest months to a minimum in February (68 ± 9.5 L m−2 d−1, for a monthly precipitation of 58 ± 2 mm) and then increased again into the wetter months to November, when tSGD = 377 ± 87 L m−2 d−1 for a precipitation rate of 194 ± 21 mm. These data confirm that factors other than precipitation alone drive SGD rates through the coastal aquifer into the lagoon throughout the year. In June and July (Fig. 6b, c), SGD into the Ria Formosa contained mostly recirculated seawater (freshwater contribution <2 % of tSGD). The spring-summer period split the year in two: earlier, rates of fresh groundwater discharge increased from January to March (6.1 to 29.4 L m−2 d−1, and from 8.2 to 19.6 % of tSGD), and later they increased progressively from June (3.9 L m−2 d−1 and 2 % of tSGD) to November (36.3 L m−2 d−1 and 10 % of tSGD). Both fresh groundwater seepage rates (fSGD, Fig. 6c) and the relative freshwater content of SGD (Fig. 6b) reach annual peaks in March and November, while annual lows are observed during most spring-summer months (April to August). Fresh groundwater discharge in December dropped significantly from November (fSGD = 8.2 L m−2 d−1, 6 % of tSGD), as did rainfall (Fig. 2).
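The partition of total SGD into fresh and recirculated-seawater components quoted above follows from a two-endmember salinity mass balance; a minimal sketch is shown below, with assumed endmember salinities standing in for the study's own values.

```python
"""Split total SGD into fresh and saline components from sample salinity (sketch)."""
def fresh_fraction(sample_salinity, s_sea=36.0, s_fresh=0.5):
    # Two-endmember mixing; endmember salinities here are illustrative assumptions.
    f = (s_sea - sample_salinity) / (s_sea - s_fresh)
    return min(max(f, 0.0), 1.0)          # clamp measurement/rounding overshoot

def split_sgd(total_sgd_l_m2_d, sample_salinity):
    f = fresh_fraction(sample_salinity)
    fsgd = f * total_sgd_l_m2_d
    return fsgd, total_sgd_l_m2_d - fsgd  # (fresh, saline) in L m-2 d-1

# Example: a July-like sample, nearly pure recirculated seawater.
print(split_sgd(505.0, 35.5))
```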
The change in monthly fSGD rates throughout the year was consistent with the mean recharge dynamics estimated for the aquifer by the water table fluctuation method (Fig. 2). Higher fSGD rates (March, November) also coincide with months of positive effective precipitation (Fig. 3a). It is also interesting to note that the highest SGD rates, whether for fresh groundwater (March, November) or salty groundwater (July), occur at approximately the time the STE reaches its maximum excursion seaward or incursion landward (fresh SGD), or the fulcrum point in between, when peak saline SGD rates are measured (Figs. 3c, d; 6b, c).
Nutrient composition in SGD
The salinity and NO3− concentration of the seeping water varied significantly during the year (Fig. 7a). Nitrate concentrations in SGD ranged between 180.3 μM and the minimum detectable concentration, correlating significantly with salinity. The mixing plots show that the data fall into two different subsets, corresponding respectively to a 'fall' term, with a higher slope of the mixing line (R1, August to December), and a 'spring' term, with a lower slope of the mixing line (R2, January to July). On the other hand, the concentrations of both NO2− and NH4+ in SGD were much lower than those of NO3− and showed no obvious conservative behaviour along the sample salinity range (Fig. 7b, c). DIN in SGD followed the same two-term mixing pattern as NO3− (Fig. 7d). The intercepts (S = 0) for DIN (354 ± 15 and 99 ± 3.5 μM, respectively, during the 'fall' and 'spring' terms) are much lower than the estimated fresh groundwater endmember (2.1 mM, see Section 2.3), indicating that N is not conserved along the entire extent of the groundwater-seawater mixing line.
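The zero-salinity intercepts quoted above are the standard way of estimating the fresh groundwater endmember from a conservative mixing line; a hedged sketch of that regression is shown below, with synthetic concentration-salinity pairs standing in for the measured data.

```python
"""Zero-salinity (fresh endmember) intercept from a nutrient-salinity mixing line (sketch)."""
import numpy as np
from scipy import stats

# Synthetic stand-in data: near-conservative dilution of a ~69 uM fresh endmember.
salinity = np.array([10, 15, 20, 25, 30, 33, 35, 36], dtype=float)
conc_um = 69.0 * (1 - salinity / 36.5) + np.array([2, -1, 1, 0, -2, 1, 0, 1])

fit = stats.linregress(salinity, conc_um)
intercept_se = fit.intercept_stderr          # available in SciPy >= 1.7
print(f"S=0 intercept: {fit.intercept:.1f} ± {intercept_se:.1f} uM, "
      f"r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3g}")
```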
The DRSi concentrations in SGD varied between 0.3 μM and 55.3 μM (Fig. 7f) and were not consistently conservative within the salinity range throughout the year, with the most obvious outliers being the colder months of January and December. When these two months are excluded, the remaining data fall onto a significant linear correlation (df = 336, r2 = 0.462, p = 0), with an intercept at zero salinity of 69 ± 3 μM, statistically indistinguishable from the mean groundwater concentration of 65 ± 7 μM measured from 2010 to 2013 (Section 2.3). This suggests a continental (fresh) groundwater origin for the silicate arriving at the beach aquifer and, since Si diagenesis is not affected by redox reactions, some additional silica mobilization in transit to explain the January and December data. Indeed, the highest DRSi concentrations were measured both in the more saline (S > 36, during January and December) and in the more brackish SGD samples (S < 20). Combined with the convex shapes of the mixing curves in January and December (Fig. 7f), this suggests the beach aquifer could be a net contributor of DRSi to SGD during the colder months. Dissolved P was low throughout the year, with maximum measured SRP concentrations of 8 μM and >40 water samples (>10 % of the sample pool) below minimum detectable concentrations (Fig. 7e). This is consistent with the pE-pH stability data (Fig. 5b).
Fig. 5. Annual variation of the monthly pH of STE outflows and its geochemical impact. (a) Cyclic nature of the annual change of STE outflow pH as a function of salt wedge toe oscillation: note that the December 2006 position results from a contraction of the STE caused by an unusually dry month in historical terms. (b) Pourbaix diagram for the Fe-CO2-H2O system in the STE with monthly pE-pH pairs overlain (CT (total inorganic carbon) = 1.6 × 10−3 M, soluble Fe species = 1 × 10−9 M; solid phases (s) are Fe(OH)3 (amorphous) and Fe(OH)2 (s)). The pH of STE outflow oscillates within tight physicochemical bounds, with the highest pH corresponding to August and the lowest to January.
Seasonal variability in SGD nutrient loading and stoichiometry
Monthly DRSi fluxes driven by SGD (Fig. 8a) varied between a minimum of 6.0 ± 0.4 mmol m−1 d−1 in June and a maximum of 40.6 ± 8.9 mmol m−1 d−1 in March, correlating well with monthly fSGD rates (Fig. 6c; Student's t-test, p < 0.02). This is consistent with earlier data indicating that Si loading is controlled mostly by the fresh groundwater endmember (Fig. 7f). NO3− fluxes were significantly higher during the period of August to November (Fig. 8b). SRP fluxes (Fig. 8c) measured from July to December were on average 4 times higher than those determined between January and June (January-June and July-December means of 0.4 and 1.65 mmol P m−1 d−1, respectively). Overall, the salient features of the annual variability of SGD-driven nutrient fluxes were the difference between the first part of the year (until June) and the second (July onwards) for SRP and DIN; the relatively low SGD-driven nutrient fluxes in July, when the saline component of SGD is highest (with SRP being the exception); and the monthly NO2− SGD flux and the Si:P SGD flux quotient being significantly correlated with the fSGD/sSGD ratio (Student's t-test, p < 0.05 and p < 0.01 respectively). As a result, SGD contributed to an enrichment of N over P from February to May and from August to November (Fig. 8d), and over Si in February, May, and August to November (Fig. 8e), while in July a deficit of N and Si relative to P was observed.
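For orientation, the sketch below shows how shoreline-normalised nutrient fluxes of the kind plotted in Fig. 8 follow from an SGD rate and a nutrient concentration, and how the molar N:P and Si:P quotients are then formed; all input numbers are placeholders for a single illustrative month.

```python
"""Shoreline-normalised SGD nutrient fluxes and their molar ratios (illustrative sketch)."""
def nutrient_flux(sgd_m3_per_m_per_d, conc_umol_per_l):
    # 1 umol/L == 1 mmol/m3, so the flux comes out in mmol per metre of shoreline per day
    return sgd_m3_per_m_per_d * conc_umol_per_l

sgd = 1.2                              # m3 per metre of shoreline per day (assumed)
din, srp, drsi = 40.0, 0.8, 25.0       # umol/L in SGD (assumed)

f_n, f_p, f_si = (nutrient_flux(sgd, c) for c in (din, srp, drsi))
print(f"DIN {f_n:.1f}, SRP {f_p:.2f}, DRSi {f_si:.1f} mmol m-1 d-1")
print(f"N:P = {f_n / f_p:.1f}  (Redfield 16),  Si:P = {f_si / f_p:.1f}  (R-B 15)")
```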
SGD-driven nutrient fluxes and primary production
In 2006, the mean water column Chl a was lowest in July (0.37 mg m−3) and peaked in April (4.1 mg m−3) and again in September (4.2 mg m−3) (Brito et al., 2012). The MPB Chl a, measured over the period 2006-2008 at the sand spit station (station 'Ponte' in the original paper) every fortnight from April to November, ranged from 2 to 25 μg Chl a g−1 (Brito et al., 2009). For 2006, MPB Chl a rose steadily from 8.7 μg g−1 (dry sediment) in April to 18.2 μg g−1 in August, dropped to 17.1 μg g−1 in September and increased again after that to a peak in November (22.3 μg g−1). For the period 2007-2008, MPB Chl a peaked above 15 μg g−1 in April, July and December 2007, and January 2008.
The microphytobenthic production could be modelled as a logistic function of monthly SGD-driven fluxes of DRSi, DIN and SRP (Fig. 9a, b, c) in a statistically significant way (p < 0.0005, p < 0.02 and p < 0.005, respectively, with df = 6). Starting MPB levels (at SGD nutrient flux = 0) were found to fit the growth curve well at 2.65 μg Chl a g−1 for all the nutrients. Carrying capacity (μg Chl a g−1 MPB) was 20.93 for DRSi, 19.46 for DIN, and 18.71 for SRP, fluctuating around the mean peak MPB of ~20 μg Chl a g−1 found in situ and well within the maximum within-survey sample variability of ±5 μg Chl a g−1 reported by Brito et al. (2009). The best-fit growth rate constant k was 1.86 d−1 (±0.24 S.E.) for DRSi, 1.59 d−1 (±0.30 S.E.) for DIN, and 40.6 d−1 (±8 S.E.) for SRP. The proportions between nutrient-specific growth rate constants, taken as the reciprocals (i.e., for N:P, kP/kN), reflected the ideal R-B ratios relatively well: 0.9 ± 0.2 for kN:kSi (N:Si ~1), 22 ± 5 for kP:kSi (Si:P ~15), and 26 ± 7 for kP:kN (N:P ~16), consistent with MPB production being regulated by nutrient resource ratios in SGD and with P being most often the limiting nutrient (Fig. 10c, d).
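A sketch of the logistic fit described above is given below, using SciPy's curve_fit on synthetic flux-Chl a pairs; the fitted parameters correspond to the starting level (B0), carrying capacity (K) and rate constant (k) reported in the text, but the data points themselves are invented for illustration.

```python
"""Logistic fit of MPB Chl a against SGD-driven nutrient flux (illustrative sketch)."""
import numpy as np
from scipy.optimize import curve_fit

def logistic(flux, K, B0, k):
    # Logistic curve in the flux variable: B0 at flux = 0, saturating at K.
    return K / (1 + ((K - B0) / B0) * np.exp(-k * flux))

flux = np.array([0.5, 1, 2, 4, 8, 15, 25, 40], dtype=float)      # mmol m-1 d-1 (assumed)
chl = np.array([3.2, 4.0, 6.1, 10.5, 16.0, 19.0, 20.5, 20.8])    # ug Chl a g-1 (assumed)

popt, pcov = curve_fit(logistic, flux, chl, p0=[20.0, 2.65, 0.5])
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("K", "B0", "k"), popt, perr):
    print(f"{name} = {val:.2f} ± {err:.2f}")
```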
Nutrient stoichiometry in SGD
The frequency with which each individual nutrient in SGD water deviates the most toward scarcity from the idealized R-B ratio (i.e., is the most depleted with regard to N:P:Si = 16:1:15) is plotted as a function of the freshwater portion in the seeping water in Fig. 10, separating the 'spring' from the 'fall' term on the basis of the behaviour of NO3− with salinity (Fig. 7a), as well as the putative location of the salt wedge toe with regard to its equilibrium position in the year (Fig. 3c, d). The 'fall' period corresponds to saline intrusion and expansion of the STE inland and toward the surface, while the 'spring' term corresponds to saltwater retreat during the wetter months of the year and the ensuing contraction of the STE. The best fit for the Dirichlet regression model (Eqs. (7)-(10)) is then overlain on the various discrete histograms. All possible scenarios, i.e., where either N, P, or Si was the most depleted in individual SGD samples, occur. However, the frequency with which each nutrient fell into the 'limited' category varied both with the freshwater portion of the SGD outflow mixture and with whether samples were collected during the first (spring) or the latter (fall) part of the year. Because brackish STE water contains a significant concentration of NO3− (Fig. 7a), samples taken from January to July with >20 % of fresh groundwater always carry an excess of N over the other nutrients (Fig. 10a). This is reinforced during the latter part of the year (August-December), given that the amount of NO3− contained in the fresher groundwater (S ~15) seeping out of the beach face is larger (Fig. 7a). During the latter part of the year, a ratio of >18 % fresh groundwater to total SGD is enough for N to be always above the R-B ratio (Fig. 10b). The frequency of N-depletion in samples decreases rapidly with increasing freshwater fraction in SGD, and this in turn determines the relative impoverishment of the complementary nutrients, P (Fig. 10c, d) and DRSi (Fig. 10e, f), at higher freshwater contents. From January to July, when some SGD samples contain the highest portion of fresh groundwater measured in situ (>30 %), P is always scarce in comparison to N and DRSi (Fig. 10). Conversely, when SGD is composed exclusively of recirculated seawater (fresh groundwater portion = 0 %), N-depletion occurs in ~60 % of the samples, followed by P-depletion (~30 %) and finally by DRSi-depletion (Fig. 10a, c, e). In between these extremes, the frequency with which P-depletion occurs increases with mounting fresh groundwater contribution, rapidly until the makeup of the outflow is ~18 % fresh groundwater, in parallel with the decrease in the frequency of N-depletion and a slight increase in the frequency of DRSi scarcity. Beyond the ~18 % fresh groundwater content breakpoint, N-depletion never occurs, and DRSi-depletion occurs in only 4 % of the samples. From August to December, the frequency with which the compositional range of the samples falls below the R-B ratio is quite like that shown for the previous term with regard to N-scarcity, but very different for P- and Si-depletion (Fig. 10d, f). Water samples with high fresh groundwater fractions frequently had a lower Si:N ratio: Si-depletion (N:Si > 1) occurred in ~60 % of the samples when SGD was composed of >30 % fresh groundwater, while P-depletion (N:P > 16) occurred in ~40 % of the samples, compared to 0 % and 100 % respectively during the earlier part of the year.
However, the SGD fractional composition breakpoint of ~18 % fresh groundwater was still observable with respect to the nutrient stoichiometry of the outflows. From 0 % freshwater content up to this breakpoint, the occurrence of the different scarcity scenarios was quite like that observed during the earlier part of the year, but as fresh groundwater content increased beyond this point, the frequency with which P-depletion occurred decreased while Si-depletion increased, in stark contrast to observations during the spring term. This is also observable in the monthly-integrated flux ratio of Si to P (Fig. 8f). This implies that the STE outflow composition in terms of NO3− (Fig. 7a) is an important driver of the nutrient stoichiometry of SGD, but that the annual variation of the Si:P compositional ratio stems from distinct biogeochemical histories of SGD during the earlier and later parts of the year.
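The classification underlying the Fig. 10 histograms, that is, deciding which of N, P or Si is the most depleted in a given sample relative to N:P:Si = 16:1:15, can be sketched as below; the sample concentrations are illustrative only.

```python
"""Identify the most depleted nutrient relative to the Redfield-Brzezinski ratio (sketch)."""
def most_depleted(din, srp, drsi):
    # Normalise each nutrient (umol/L) by its reference proportion in N:P:Si = 16:1:15;
    # the smallest quotient identifies the element furthest below the R-B composition.
    quotients = {"N": din / 16.0, "P": srp / 1.0, "Si": drsi / 15.0}
    return min(quotients, key=quotients.get)

print(most_depleted(din=120.0, srp=0.5, drsi=30.0))   # NO3-rich fresh SGD -> 'P'
print(most_depleted(din=2.0,   srp=0.6, drsi=10.0))   # recirculated seawater -> 'N'
```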
Discussion
Cause-effect relationships linking changes in benthic primary production to the nutrient composition of SGD and to subterranean estuary dynamics are not well developed. To effectively discuss our results, we must first answer two questions. The foremost is whether our understanding and description of the spatiotemporal dynamics of the STE at the Ria Formosa site is realistic and physically solid; the next is whether scaling up individual measurements of flow and SGD composition, taken from seepage meters covering a limited area, to monthly quantities obscures the underlying trends.
Describing the oscillation of a subterranean estuary
Groundwater system dynamics in barrier islands are very complex. Consequently, the morphology of freshwater lenses and the dynamics of associated transition zones (the subterranean estuary, or STE) are difficult to describe mathematically. The physical problem does not have explicit analytical solutions without considerable simplification (Fetter, 1972, 2001). Our objective is not to provide one solution, but instead to offer a mechanistic framework to understand the link between subterranean estuary dynamics, the composition of STE outflows at the seepage face and their impact on nutrient ratios and benthic primary production. Our SGD measurements clearly showed that there was saltwater as well as freshwater outflow and a well-developed seepage face, implying that the saline groundwater body is not static and ruling out the use of the Ghyben-Herzberg principle (Ghyben, 1888; Herzberg, 1901) to define the shape of the freshwater lens (Hubbert, 1940). To describe oscillations of the STE at the annual scale, we first accept that these occur around a point of equilibrium (a fulcrum for the oscillation), in response to seasonal changes in recharge throughout the year. We then accept that the oscillation of the STE throughout the year can be reasonably described by taking the monthly steady-state solution to the problem of the position of the saltwater wedge toe, based on the water table height in the main aquifer. In support of this approach, we first show that the water table in the Campina de Faro aquifer reacts quickly (<1 month) to precipitation (Fig. 2), that the annual fluctuation of piezometric levels follows the effective precipitation over the system (Fig. 3a), and that these fluctuations are cyclic on an annual basis over multi-annual timescales (Fig. 3b). This allows us to accept that the transition zone between fresh and salt groundwater also fluctuates on a similar time scale, and hence to take the steady-state monthly position of the saltwater wedge, calculated as a function of the hydraulic gradient on land, as an indicator of the location of the subterranean estuary throughout the year. Detailed studies yielding the position of the freshwater-saltwater interface for the case of essentially horizontal flow (Dupuit assumptions) have been carried out, amongst others, by Glover (1959), Bear and Dagan (1964), Strack (1972) and Van Der Veer (1977), with the latter accommodating the simultaneous flow of the two fluids; however, its explicit solution requires an unknown quantity, dependent on the discharge at the shoreline and on the accumulated recharge at the seepage face. Comparisons between all suggested solutions nevertheless yield very little difference in the position of the tip of the saltwater wedge itself (Vacher, 1988a). We therefore adopt the formulation of Glover (1959), following the assessment of Cheng and Ouazar (1999).
Fig. 10. (a, c, e) spring term (January-July) and (b, d, f) fall term (August-December), as discussed in the main text. Statistical information for the six fitted polynomial curves (Eqs. (7)-(10)): R1: R2 = 0.93, P < 0.001; R2: R2 = 0.92, P < 0.001; R3: R2 = 0.85, P < 0.001; R4: R2 = 0.91, P < 0.001; R5: R2 = 0.82, P < 0.001; R6: R2 = 0.90, P < 0.001.
As a further precaution, we express the position of the saltwater wedge as a relative quantity, referenced to the 'equilibrium' position calculated from the mean annual piezometric head, and compare results for four different piezometric series, which yield very similar results (Fig. 3c). Plotting the mean relative position of the saltwater wedge toe against the potential recharge (the excess precipitation) in normalized form (Fig. 3d) yields an annual cycle when 20-year mean excess precipitation records are used, and supports the notion that the relative position depends essentially on recharge (see the deviation in December 2006, caused by the anomalously low precipitation), with the STE extending inland for most of the year. We take these results to mean that our approach, though necessarily approximative, is realistic for the purpose of discussing our data.
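For reference, a commonly quoted form of Glover's (1959) sharp-interface solution, which we adopt above, can be coded as follows. The hydraulic parameters are illustrative assumptions and not calibrated values for the Campina de Faro aquifer; the sketch simply shows how the wedge toe migrates inland as the freshwater outflow (and hence recharge) decreases.

```python
"""Steady-state saltwater wedge position from freshwater outflow (illustrative sketch).

Uses a commonly quoted form of Glover's (1959) solution:
    z(x)^2 = 2*q*x/(G*K) + (q/(G*K))^2,   G = (rho_s - rho_f)/rho_f
The toe is where the interface depth z reaches the aquifer base depth d."""
import math

def wedge_toe_distance(q, K, d, G=0.025):
    """Distance of the wedge toe inland of the shoreline (m).
    q: fresh outflow per metre of shoreline (m2/d); K: hydraulic conductivity (m/d);
    d: aquifer base depth below sea level (m)."""
    return (G * K * d**2) / (2 * q) - q / (2 * G * K)

def interface_depth(x, q, K, G=0.025):
    """Depth of the fresh/salt interface below sea level at distance x inland (m)."""
    return math.sqrt(2 * q * x / (G * K) + (q / (G * K))**2)

# Lower recharge (smaller q) pushes the toe farther inland:
for q in (0.2, 0.1, 0.05):                     # m2/d, illustrative
    print(f"q = {q:.2f} m2/d -> toe at {wedge_toe_distance(q, K=10.0, d=20.0):.0f} m inland")
```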
Scaling of seepage measurements to monthly SGD fluxes
The large spatial and temporal heterogeneity in discharge magnitude and composition of SGD across the beach raises the question of whether scaling up results from the seepage meter array provides representative data for the month without loss or distortion of information. This is an important question: on-site variability is caused by the combination of tidal and inland pressure gradients acting in opposition (Li et al., 1999), making both the rate of discharge and the relative proportions of fSGD to sSGD in the outflowing mixture vary with beach slope and distance from the tide line during a tidal period (Li et al., 2008). This ratio is also spatially variable parallel to the coastline, due to density effects and local variations in matrix permeability. Therefore, seepage salinity and nutrient stoichiometry become spatially and temporally heterogeneous at the surface length scale of the beach. As shown by Welti et al. (2015), this variability may induce the patchiness of microphytobenthos observed at the Ria Formosa site by Brito et al. (2009), as indeed it regulates subsurface biogeochemical zonation (Ibánhez and Rocha, 2016; Waska et al., 2019). Three separate pieces of evidence ensure that our monthly data are realistic and representative of on-site SGD magnitude and composition. Firstly, Michael et al. (2003) studied the differences between deployment of small and large numbers of seepage meters at the 50 m2 scale and found that arranging them in transects drastically reduced the uncertainty in scaling up of fluxes. Ten seepage meters (i.e., 1 per 5 m2) arranged in transects provided the most accurate results. This is the coverage we have for our seepage meter array, which ensures that the most accurate results possible were obtained at the beach length scale to account for spatial heterogeneity. Temporal heterogeneity is accounted for by the long-term deployment of the seepage meters and by the number of samples taken at sub-hourly intervals for never less than two full tidal cycles. The discharge rates are also consistent over time with data obtained from piezometers installed within the beach; data from both methods are compared and discussed in Rocha et al. (2009). Secondly, our seepage meter array produces SGD measurements per unit length of coastline that are consistent with whole-basin groundwater tracer (222Rn) budgets with source attribution by stable isotopes in water, at both seasonal and tidal scales, in 2009 and 2010. The results for total Rn-derived SGD per metre of coastline into the Ria Formosa obtained then (9.6 ± 9.1 m3 m−1 d−1) are directly comparable to the monthly SGD measurements we obtained from seepage meters in 2006, which vary between 0.67 ± 0.1 and 3.54 ± 1.1 m3 m−1 d−1 (see Fig. 6b).
Thirdly, as mentioned before, the average fresh groundwater discharge from the M12 unconfined aquifer to the lagoon, per day and per linear metre of shoreline, estimated (see Section 3.4) during 2006 from monthly piezometric data was 0.126 ± 0.017 m3 m−1 d−1, statistically identical to the 0.115 ± 0.079 m3 m−1 d−1 obtained by direct measurement with our seepage meter array.
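The conversion between seepage-meter fluxes (per square metre of bed) and shoreline-normalised discharge (per metre of coastline) used in these comparisons amounts to integrating across the seepage face; a minimal sketch follows, with the seepage face width as an assumed value rather than the surveyed one.

```python
"""Scale a seepage-meter flux to a shoreline-normalised discharge (illustrative sketch)."""
def per_metre_of_coastline(seepage_l_m2_d, seepage_face_width_m=10.0):
    # L m-2 d-1 integrated across an assumed seepage-face width, converted to m3 m-1 d-1
    return seepage_l_m2_d * seepage_face_width_m / 1000.0

# e.g. a mean fresh seepage of ~11.5 L m-2 d-1 over an assumed 10 m wide seepage face
print(f"{per_metre_of_coastline(11.5):.3f} m3 m-1 d-1")
```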
In addition, we can also assess whether information on the nutrient structure of SGD is obscured when scaling individual concentrations (Fig. 7) up to monthly transport rates (Fig. 8a-c). We compare the information given by the annual change in nutrient ratios in monthly fluxes (Fig. 8d-f) with that provided by a frequency analysis of the relative limitation/excess of N, P and Si relative to Redfield ratios throughout the year, as a function of the freshwater content of individual samples (Fig. 10). We find (Fig. 10a, b) that i) the frequency with which N becomes limited with respect to P is drastically reduced with increasing freshwater content of the samples, which is essentially the message provided by the integrated fluxes (see Fig. 8d for the N:P monthly flux ratio, and Fig. 6b for the fraction of fresh groundwater in total SGD throughout the year); ii) the differences between the spring and fall periods found in the dependence of nutrient concentration on salinity, and particularly of DIN (Fig. 7), are also apparent when looking at potential P limitation (N (or Si):P > 15) and Si limitation (Si:N < 1), at both the individual and the annual scale: the availability of P relative to the other nutrients increases as SGD becomes more saline (Fig. 10c, d), which corresponds to the scaled-up monthly flux annual pattern (Fig. 8d and f); and finally, iii) Si:P ratios change from spring to fall particularly when SGD contains more freshwater (> ~18 %), which is coherent with the finding that Si:P ratios in SGD (Fig. 8f) depend on the freshwater fraction (Fig. 6b), which varies during the year as a function of excess monthly precipitation, as illustrated by the comparison between 2006 and the historical means (Fig. 3a). In combination, the data support the hypothesis whereby nutrient resource ratio availability in SGD is controlled by recharge, and therefore by the annual dynamics of the transitional zone, or subterranean estuary. This conclusion, arising from the analysis of all our data at two different scales, also reinforces the coherence of our scaling-up method and adds confidence to the ensuing interpretations.
Groundwater discharge composition and dynamics
Our results show that the highest rates of saline groundwater discharge into the Ria Formosa lagoon occur during the driest months of the year. Fresh groundwater discharge magnitudes are higher during the wetter months of the year (January-March, October-November), but freshwater constitutes at most 20 % of the total amount of SGD seeping out into the lagoon (Fig. 6). In the broad sense, the fresh groundwater content of SGD co-varies with the annual continental recharge cycle and the total SGD flux follows in counter-cycle to it. The same pattern has been observed by Kelly and Moran (2002), who found that SGD into the Petaquamscutt estuary (Rhode Island, USA) reached seasonal highs in summer and lows in winter, by Moore et al. (2006) in the Okatee estuary (South Carolina, USA), who found SGD rates and associated nutrient fluxes in summer to be higher by a factor of 3-4 than those in winter, and by Charette (2007), who describes an increase of saline SGD during dry periods in the Pamet River estuary (Massachusetts). Because SGD is regulated by the hydraulic gradient at the land-ocean interface, this implies that the piezometric head of saline groundwater is positive relative to mean sea level during the drier periods.
Our site is located on the lagoon side of the barrier island strip complex, but during months of positive net recharge (Fig. 3a) the freshwater component of SGD is isotopically linked to the continental aquifer system, which implies that the freshwater lens on the island is connected to the continental aquifer system, either through the Holocene permeable continuum underlying the marsh facies within the lagoon or via a connection to the underlying Miocene unit (Engelen and van Beers, 1986). As shown theoretically by Vacher (1988a) and discussed with case studies elsewhere by Vacher (1988b), Urish and Ozbilgin (1989), Nielsen (1999) and Masterson et al. (2014), the higher saltwater head and greater upper beach storage on the side facing the ocean will cause an asymmetry in the shape of the freshwater lens within the barrier island. The annual dynamics of freshwater lens morphology in these systems is largely controlled by recharge (e.g., Comte et al., 2010). Consequently, two effects of the annual recharge dynamics (Fig. 3) over the Ria Formosa on the morphology of the subterranean estuary (the transition zone) and associated freshwater lens are expected. At the barrier island scale (e.g., Gulley et al., 2016), the thickness (vertical extension) of the transition zone, or STE, is expected to widen drastically during drought periods, roughly in a 2:1 proportion to the relative reduction in effective precipitation (White and Falkland, 2010). Periods of evapoconcentration (net negative potential recharge, e.g., April/March to September/October in the Ria Formosa, viz. Fig. 3a) will ultimately lead to the disappearance of the 'pure' freshwater lens and the expansion of the transition zone (the STE) into space previously occupied by freshwater. This implies that the hydraulic gradient established across the barrier island from the ocean to the lagoon will increase salty SGD to the lagoon. It is interesting to note in this regard that the potential recharge, in negative figures (Fig. 3a), which can be seen as moisture demand from soil, increases from ~−10 to ~−145 mm from April to July, with 45 % of this excursion taking place from June to July. These changes would imply an expansion of the STE by a factor of ~2 between April and June (White and Falkland, 2010) and roughly the same again between June and July, i.e., in a single month. The hydraulic gradient would hence increase during the whole period by a factor of ~3-4. This increase would result in SGD intensifying by the same proportion between April and July, which is in fact what we observe: SGD rises from 1.15 ± 0.12 to 3.54 ± 1.1 m3 m−1 d−1, i.e., by a factor of 3 ± 1, between April and July. Apart from the dampening of short-term variability that necessarily comes from the monthly analysis we conduct, the fact that this uptick in SGD is more evident between June and July might be explained by additional factors that increase demand on the groundwater locally, including tourism. Visitors to the Faro region number ~4.6 million a year, ~45 % of whom stay between late June and early September, so the peak season effectively starts in July, placing immediate demand on the shallow groundwater of the barrier islands to satisfy the local hospitality industry's needs for surface cleaning, dishwashing and waste disposal.
July is also, on average, the driest month of the year (57 % mean humidity), the month with the longest daily period of sunshine (average 12 h daily), and the month that can see the most extreme daytime temperatures (up to 45 °C). All these factors contribute to an upconing of the salty groundwater lens and hence to the increase in SGD observed. In addition, at the local beach scale, hypersaline beach porewater will sink through the upper beach (Geng et al., 2016), accelerating forced convective circulation (Stringer et al., 2010) and thus enlarging the upper saline plume, or USP (Li et al., 2008; Robinson et al., 2007). The combination of these effects of annual recharge dynamics provides a mechanistic explanation for our observations that may be conceptually illustrated as in Fig. 11.
Briefly, during the 'wet' season, the saltwater wedge toe moves seaward, the subterranean estuary contracts, net recharge contributes to the formation and expansion of a freshwater lens under the barrier island, the positive ocean-lagoon hydraulic gradient promotes freshwater SGD into the lagoon, porewater becomes less saline at the higher beach profile, and the USP contracts with the expansion of the freshwater channel (Fig. 11a). Conversely, during the 'dry' season (March/April to September/October), the saltwater wedge toe moves inland, the soil moisture deficit resulting from evaporation leads to the contraction and disappearance of the freshwater lens under the barrier island, the subterranean estuary expands, the positive ocean-lagoon hydraulic gradient drives an increase in salty SGD into the lagoon, porewater becomes more saline at the higher beach profile, enhancing density-driven circulation in the beach aquifer, and the USP expands with the retraction of freshwater (Fig. 11b). The biogeochemistry of the STE responds as expected (Magaritz and Luzier, 1985) to the perturbations caused by the annual recharge cycle. The pH of STE outflows increases during the dry season and decreases during the wet season, given the major influence of seawater on the dissolved inorganic carbon and alkalinity balance of coastal aquifers (Cai et al., 2003; Mercado, 1985). This behaviour imprints a seasonal variability on STE biogeochemistry and SGD composition that maps onto groundwater recharge dynamics (i.e., the 'hydrological year') rather than onto insolation and temperature (i.e., the Julian year). This realization in turn implies that the ecology of similar coastal systems, connected to nearby aquifers, might be more responsive to groundwater level variance, as regulated by both natural and anthropogenic drivers, than previously thought.
Fig. 11. Conceptual diagram of the oscillation of the freshwater lens and transition zone (the STE) under the barrier islands of the Ria Formosa lagoon in response to effective precipitation. A: 'wet' season, with the saltwater wedge toe moving seaward in response to (positive) recharge, contraction of the subterranean estuary, expansion of the freshwater lens under the barrier island, enhanced freshwater SGD into the lagoon, fresher porewater at the higher beach profile and contraction of the USP. B: 'dry' season, with the saltwater wedge toe moving inland, contraction and eventual disappearance of the freshwater lens, expansion of the subterranean estuary, increased salty SGD into the lagoon, more saline porewater at the higher beach profile and expansion of the USP. Changes in the composition of the STE outflows (SGD) are highlighted by boxes with arrows; arrows pointing upward describe an increase in fluxes.
Coupling between STE oscillation and SGD-driven nutrient fluxes
On an annual timescale, fluxes of nitrate and phosphorus to the Ria Formosa lagoon via SGD are highest in summer and fall (July-December) and generally much lower in winter and spring (January-June). The magnitude of the fluxes and their composition (Fig. 8a-c) can be explained by the annual transgressive oscillation of the subterranean estuary around its point of equilibrium (Fig. 3c, d) and its drivers (Figs. 2, 3a), rather than by production-respiration processes, which would impact the oxidation-reduction potential of SGD, or by the magnitude of the freshwater component of SGD; neither of the latter explanations is supported by our observations (Figs. 5b, 6). As the saltwater wedge moves inland (Figs. 3c, 5a) during periods of net negative recharge (Fig. 3a), the pH of SGD increases (Fig. 4c), and SGD-driven fluxes of N and P increase dramatically following the maximum pH attained in July (Fig. 8b, c). This observation is coherent with our conceptual understanding of the dynamics of freshwater and brackish lenses under barrier islands in response to recharge (Fig. 11). Because the expansion of the transition zone beneath the island during periods of potential negative recharge leads to the salinization of aquifer solids, and the disappearance of a pure freshwater lens follows periods of soil moisture deficit, the pH of groundwater will increase on par with salinity given the enrichment of saltwater in bicarbonates (Appelo, 1994), while saline SGD will increase, driven by the saline groundwater table gradient. This has clear implications for SGD-driven fluxes of P and N to surface waters and, eventually, also of DRSi, since the dissolution of biogenic silica in these environments is expected to be regulated by salinity and pH (Loucaides et al., 2008).
The mechanics of P flux to coastal waters is usually described in association with the dissolution of Fe-oxides, and eventually the conversion of these to Fe sulphides in anoxic environments (Kemp et al., 2005; Slomp and Van Cappellen, 2004). However, in oxic environments like ours, benthic nutrient release is less influenced by redox state (Fig. 5b), which has been shown to be controlled essentially by O2 availability. Spiteri et al. (2006) also showed that the pH gradient (and not the EH) regulated the speciation of iron oxides and controlled phosphorus sorption dynamics in the STE of Waquoit Bay (Massachusetts, USA). Once pH reaches peak annual values (>8.5) in July-September (Figs. 4c, 5a), P release from the Ria Formosa STE is clearly enhanced (Fig. 8c). The elevation of porewater pH, especially when high pH is attained (>8), enhances the mobility of phosphorus by breaking surface Fe-P bonds (Gao et al., 2012). This effect has been shown to support P demand in lakes (Xie et al., 2003) as well as in tidal regions (Andersen, 1975; Seitzinger, 1991), and is also shown here to be important in supporting benthic primary production (Fig. 9c).
Because exchangeable NH4+ is weakly bound to negatively charged surface adsorption sites in natural sediments, an increase in the alkalinity and ionic strength of STE fluids with salinization of the coastal aquifer will also necessarily drive changes in the mobility and speciation of nitrogen, since cation exchange reactions can promote NH4+ mobilization (Rosenfeld, 1979) and the ratio between NH3 and NH4+ increases by ~one order of magnitude for each pH unit increase (Emerson et al., 1975). Hence elevated pH further accelerates the desorption of exchangeable NH4+ pools by promoting the formation of NH3, which reduces NH4+ concentrations both in porewater and on mineral surfaces and further enhances desorption. Under oxic conditions in situ, an increased availability and mobility of NH3, which is the primary substrate for bacterial nitrification (Suzuki et al., 1974), combined with increased salinity and pH driven by salinization, would stimulate nitrification (Isnansetyo et al., 2014; Jones and Hood, 1980; Kemp and Dodds, 2002). Potential enhancement of nitrification in the coastal aquifer, driven by the changing chemistry brought about by salinization, is consistent with the increase in NO3− fluxes from the STE verified from July onwards (Fig. 8b) and explains the apparent bimodal dependence of DIN concentrations on the salinity of SGD throughout the year (Fig. 7a, d). The results therefore support the hypothesis whereby an expansion inland of the STE brought about by net negative recharge will result in an increase of SGD-borne transport of P and N into the lagoon, because of the stimulating effect of pH and salinity increases on nitrification and phosphorus desorption.
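The pH sensitivity of the NH3/NH4+ balance invoked above can be illustrated with the standard acid-base speciation expression; the sketch below assumes a pKa of about 9.25 (25 °C, fresh water), whereas the true value shifts with temperature and ionic strength.

```python
"""Fraction of total ammoniacal nitrogen present as NH3 as a function of pH (sketch)."""
def nh3_fraction(ph, pka=9.25):
    # Henderson-Hasselbalch speciation: NH3 / (NH3 + NH4+); pKa assumed for 25 degC fresh water.
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for ph in (7.0, 8.0, 8.5, 9.0):
    print(f"pH {ph}: NH3 fraction = {nh3_fraction(ph):.4f}")
```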
Somewhat in contrast, dissolved reactive silicate (DRSi) fluxes are significantly correlated with the fraction of freshwater in SGD. This merits more research, but it indicates that an important part of the reactive silicate reaching the Ria Formosa lagoon has a continental, rather than marine, origin. This is not a new idea: Conley (2002) proposed that the fixation of silica in the form of phytoliths by land plants could be comparable to that wrought by marine diatoms. Biogenic silica produced by land plants and freshwater diatoms and mobilized by interactions with seawater was shown to be potentially underrepresented in global silicate balances (Loucaides et al., 2008, 2012), and this topic has received increased attention from the SGD research community (Oehler et al., 2019) following the seminal paper of Kim et al. (2005), which brought to light the potential role of saline SGD (i.e., seawater recirculated through coastal aquifers) in the global Si biogeochemical cycle.
SGD-driven nutrient fluxes and benthic primary production
The evidence here supports the hypothesis whereby the seasonality of SGD in terms of magnitude and composition (Fig. 8) controls the annual dynamics of primary production in the lagoon (Fig. 9). Microphytobenthos production would be the overwhelmingly dominant form of primary production in the system, assimilating most of the sediment-water nutrient fluxes (Fig. 9) by way of its prime location at the sediment-water interface; indeed, MPB was estimated to represent up to 99 % of the total Chl a in the system by Brito et al. (2011). Our hypothesis is also consistent with the absence of correlations between water column Chl a levels and nutrient concentrations from 1991 to 2010 (Brito et al., 2012), and between MPB Chl a and tidal range, wind speed, water temperature, salinity or nutrient concentrations (Brito et al., 2009), and, moreover, is fully consistent with the cyclic components explaining the periodicity of MPB in the lagoon derived from Fourier analysis by Brito et al. (2009). That study showed that 91 % of the variance of MPB in the Ria Formosa can be explained by three components: within-day variance, corresponding to spatial patchiness (61 %), variance with a period of 14-18 days (25 %), and finally a cyclic component with a periodicity of 3 yr−1. These periodicities are entirely coherent with the annual variability of SGD magnitude and composition and its drivers. Patchiness of benthic Chl a is an inherent result of the within-day temporal variance of SGD cross- and along-shore, as explained before. The periodicity of 14-18 days is coherent with the oscillation of the tidal pressure gradient acting on the beach aquifer over a neap-spring cycle; acting in opposition to the inland hydraulic head, this induces regular fluctuations in the magnitude and composition of SGD at the sediment-water interface, affecting the distribution and production of MPB on a fortnightly timescale. The 3 yr−1 cyclicity is clearly explained by the seasonality of groundwater recharge (Fig. 3a) and its effects on the oscillation of the STE (Fig. 3d), which drives pH in groundwater (Figs. 4, 5a) and SGD composition in terms of its salinity (Fig. 6). The interaction between these drivers effectively divides the Julian year into three distinct periods: winter-spring (January-March) and fall-winter (September-November), when both fSGD rates and the fSGD/sSGD ratio are relatively high, intercalated by the spring-summer months (April to August), when these were at their annual lows (Fig. 6), with considerable effects on the variability of nutrient structure in SGD (Fig. 8), which evidently conditions MPB (Fig. 9). Given that it is the recharge cycle on land that drives the oscillation of the STE, the production cycles within the lagoon should be analysed against the backdrop of the hydrological year (1st October to 30th September) rather than the Julian year (1st January to 31st December), as usually done by ecologists.
This phase difference between the perceived drivers (seasonal temperature and insolation) and the actual driver of primary production (effective precipitation) in the Ria Formosa might have contributed to the weak understanding to date of the main drivers of productivity in a lagoon that shows some uncommon traits, including nitrogen limitation of phytoplankton growth in the water column during summer in spite of luxury consumption of silicate by the local species assemblage (Domingues et al., 2015), which is consistent with the N:P and Si:P ratios of <16 and <15, respectively, found in SGD reaching the lagoon during this period (Fig. 8d, f). Groundwater recharge dynamics, leading to a wet season with the STE moving seaward over the period October/November to March/April and a dry season with expansion of the STE inland from March/April to October/November (Fig. 11), may therefore fully explain locally expressed symptoms of eutrophication that have to date confounded assessments of the water quality status of the lagoon based on the traditional insolation/production relationships (Newton et al., 2003; Barbosa, 2010). These include the occasional occurrence of nuisance macroalgae blooms, mainly in winter (Sprung et al., 2001), the low physiological status of marsh plants (Padinha et al., 2000), localized decreases in oxygen saturation levels (Mudge et al., 2007, 2008) and episodic water column nutrient enrichment (Cabaço et al., 2008; Newton and Mudge, 2005).
Conclusions
1. The biogeochemistry and nutrient stoichiometry of SGD in the Ria Formosa lagoon are controlled by the continental hydrological cycle or, more precisely, by the oscillation of the groundwater table in the coastal aquifer and its effects on the dynamics of the STE. This makes the dynamics of primary production in the lagoon, and consequently its ecology, dependent on the hydrological year and not on insolation. This cause-effect relationship will prevail in other systems subject to the same drivers elsewhere in the world, and implies a significantly more direct and short-term impact of climate and environmental change on ecosystem function as precipitation and groundwater recharge patterns are disrupted by ongoing warming, as well as a more direct dependence of the stability of coastal lagoon ecosystems on groundwater mining, which tends to scale linearly with warming and rising demand on coastal ecosystem services.
2. The dynamics of the groundwater freshwater lens and transition zone are mapped onto the biogeochemistry of STE outflows. The main physicochemical driver is pH, which increases in SGD with salinization and expansion of the STE into aquifer areas previously occupied by the freshwater lens during 'dry' periods, and decreases as the saltwater wedge retreats in response to increased groundwater recharge during 'wet' periods. This oscillation of the STE is likely to occur in similar systems throughout the world. The broader implications include stronger links between coastal ecosystem function and precipitation dynamics on the one hand and groundwater resource management on the other, making the issue of truly holistic management of the coastal zone, considering the subterranean connection in the land-water continuum, more pressing.
3. Nitrogen and phosphorus respond differently from silicate when the STE oscillates, with Si availability linked more tightly to the freshness of SGD and the saltwater wedge retreat stage, while N and P surge primarily as a function of saline intrusion, which promotes nitrification and P desorption. This suggests a mechanism by which biogenic silicates originating in land plants can effectively contribute to the oceanic silicon cycle via the link provided by oscillating STEs. The combined effect of annual STE dynamics and coastal aquifer biogeochemistry results in large increases of the N:Si ratio of SGD during the saline intrusion phase, while the drier months of the year could see N limitation of primary production.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2022,
"sha1": "b52602256f7dd5d4e252561b4b690e49056714e3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.scitotenv.2022.157522",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ffd3c2e0e45c51c7cfaa1a4ade89b2d1f92c9a8b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Sleep duration and incidence of lung cancer in ageing men
Background: Previous studies have suggested an association between sleep duration and cancer. However, information on sleep duration with regard to the risk of lung cancer is scanty.
Methods: The analysed data comprise a prospective population-based cohort of 2586 men (aged 42-60 years) from Eastern Finland. The baseline survey and clinical examinations took place in 1984-1989, and diagnosed lung cancers were obtained until the end of 2011 through linkage with the Finnish Cancer Registry. Self-reported sleep was categorized as ≤6.5 h, 7-7.5 h, and ≥8 h. Subjects with a prior history of cancer or psychotropic medication (hypnotics or sedatives) were excluded from the analyses. Cox proportional hazards models with adjustments for possible confounders were used to examine the association.
Results: A significant association between sleep duration and increased lung cancer risk was observed after adjustments for age, examination years, cumulative smoking history, family cancer history and Human Population Laboratory Depression scale scores (HR 2.12, 95% CI 1.17-3.85 for ≤6.5 h sleep, and HR 1.88, 95% CI 1.09-3.22 for ≥8 h sleep). Associations were even stronger among current smokers (HR 2.23, 95% CI 1.14-4.34 for ≤6.5 h sleep, and HR 2.09, 95% CI 1.14-3.81 for ≥8 h sleep). After further adjustments for alcohol consumption, physical activity, body mass index, marital status, education years, night work, employment status, asthma and chronic bronchitis, the association remained significant both in the whole study population and among smokers. When cumulative smoking history was replaced by current smoking in the adjustments, the increased risk was limited to those who slept <6.5 h.
Conclusions: Sleep duration of less than 7-7.5 hours or more than 7-7.5 hours is associated with increased lung cancer risk. The physiological factors underlying the association are complex, and they may relate to melatonin excretion patterns, low-grade inflammation in the cancer development process or disruptions in circadian rhythmicity.
Background
Lung cancer is estimated to account for 8.3% of men's cancer cases globally [1]. Among all types of cancer worldwide, lung cancer incidence is the highest [2], and in Finland it comes second [3] after prostate cancer. Previous studies [4,5] have identified numerous factors that increase the risk of lung cancer, such as smoking history, previous lung disease (bronchitis, asthma, pneumonia, chronic obstructive pulmonary disease, cystic fibrosis, or obstructive sleep apnoea), environmental exposures, and genetic predisposition.
In addition to established risk factors, it has been suggested that sleep duration may be associated with cancer development. Both short and long sleep durations are proposed to be determinants of increased cancer risk [6]. One possible factor underlying the association is the hormone melatonin, which represents the most stable and reliable biomarker of the central circadian pacemaker [7,8] and may play a role in lung cancer tumor growth [9,10]. This suggestion is supported by observations of severe disruptions in circadian rhythms among lung cancer patients [11-15]. These circadian rhythm disruptions consist of a loss of rhythmicity in neuroendocrine and immune parameters [11], as well as disturbed daily sleep-activity cycles [12-15]. Our focus is on altered sleep duration, which can be one of the symptoms of a circadian rhythm sleep disorder [7].
Most previous studies have proposed that nightly sleep of less than 6-7 hours increases the risk of cancer in general [16], and specifically of prostate cancer [17], breast cancer [18-20] and colorectal adenoma [21]. One recently conducted prospective follow-up study found a U-shaped association between sleep duration and colorectal cancer incidence in postmenopausal women [22]. Furthermore, studies of circadian rhythm sleep disorders [8,23] have shown a relation between sleep and increased cancer risk.
Our focus is on the relationship between lung cancer and altered sleep duration. To the best of our knowledge, there are no previous prospective cohort studies on this topic. To assess the possible association between sleeping hours and increased lung cancer risk, we conducted a prospective cohort study among 2586 ageing men from Eastern Finland.
Study population
The participants of the prospective cohort Kuopio Ischemic Heart Disease Study (KIHD) were a randomly selected sample from the general population in Eastern Finland [24]. Baseline examinations during 1984-1989 were conducted in 2682 men (82.9% of those invited) aged 42-60 years, living in Kuopio or the surrounding rural area. Men with a history of cancer (n = 51, 1.9%) or using hypnotics or sedatives (n = 45, 1.7%) at baseline were excluded, leaving a total of 2586 respondents. Participants provided written informed consent after a full explanation of the study, and the Research Ethics Committee of Kuopio University approved the study protocol.
Outcome
Diagnosed lung cancers (n = 81, 3.1%) occurring from two years after baseline until the end of 2011 were included. To rule out reverse causation, we restricted the cases to those diagnosed at least two years after baseline. Median follow-up time was 23 years (25th-75th percentiles 18-25 years). Lung cancer diagnoses were ascertained through individual social security number linkage with the Finnish Cancer Registry. Diagnoses were classified according to the International Classification of Diseases (ICD-8, -9, and -10).
Baseline measures
Sleep
Self-administered questionnaires were recorded by participants and checked by an interviewer. Sleep duration was asked as follows: 'How many hours do you usually sleep at night?' Response alternatives were: <6 h, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, and >10 h. Crude lung cancer incidence rates were lowest at 7 h (1.21 cases/1 000 person-years) and 7.5 h (0.90 cases/1 000 person-years) of sleep, and thus sleep duration was divided into three categories: ≤6.5 h, 7-7.5 h, and ≥8 h. A comparable reference sleep duration category has been used in previous cancer and sleep studies [25]. Other sleep questions were: 'Do you have nightly breathing disruptions?' (yes/no), and 'How often do you have difficulties falling asleep, or staying asleep?' (never or seldom/sometimes/frequently).
Depressive symptoms
Depressive symptoms were assessed with the 18-item Human Population Laboratory Depression Scale (HPL) [26]. Scale items included mood disturbance, negative self-concept, energy loss, poor appetite, concentration difficulty, and psychomotor agitation. The scale was developed especially for screening general population samples, and it conceptually resembles other brief symptom checklists such as the Centre for Epidemiological Studies Depression Scale (CES-D) [27,28]. The HPL Depression score is generated by assigning one point for each true or false answer that is indicative of depression (range 0-18). To avoid collinearity, the insomnia item was excluded from the scale.
Health and sociodemographic background
Participants completed questionnaires concerning medication (hypnotics, sedatives and antidepressants) and the history of physician-diagnosed illnesses (cancers, chronic bronchitis, asthma, and family cancer history). To assess physical activity, the 12-Month Physical Activity questionnaire [29] was applied. The checklist included the most common physical activities (walking, jogging, skiing, bicycling, swimming, games) of Finnish middle-aged men. For each activity performed, the subjects were asked to record the frequency, average duration and intensity. The energy expenditure from physical activity was expressed as kcal/day. Body mass index (BMI) was computed as the ratio of weight (kilograms) to the square of height (meters). Current smoking behavior was assessed with three questions: 1) Have you ever smoked? (yes/no); 2) Have you smoked daily during the last year? (yes/no); and 3) When did you last smoke? (less than a month ago defined a current smoker). Cumulative smoking history (pack-years) was estimated as the product of years smoked and the number of tobacco products smoked daily at the time of examination. Alcohol consumption (g/wk) was assessed with a structured quantity-frequency method using the Nordic Alcohol Consumption Inventory for drinking behavior over the previous 12 months [30]. Respondents were asked for total years of education, marital status (married or living with spouse vs. living alone), working time (day shift vs. night shift), and employment status (employed vs. unemployed or retired).
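A minimal sketch of the two derived covariates defined above (cumulative smoking history and BMI) is given below; the input numbers are invented for illustration.

```python
"""Derived baseline covariates: cumulative smoking history and BMI (illustrative sketch)."""
def cumulative_smoking(products_per_day, years_smoked):
    # As defined above: product of years smoked and tobacco products smoked daily.
    # (A conventional 'pack-year' would first divide the daily count by 20.)
    return products_per_day * years_smoked

def bmi(weight_kg, height_m):
    # Body mass index: weight (kg) divided by the square of height (m).
    return weight_kg / height_m ** 2

print(cumulative_smoking(15, 30))      # 450
print(round(bmi(82.0, 1.76), 1))       # 26.5
```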
Statistics
According to variable type and distribution, baseline variables were displayed by sleep category with Kruskal-Wallis or χ2 test p-values. Moreover, a correlation matrix for the continuous variables was computed. Kaplan-Meier survival curves (Mantel-Cox log-rank test) differed significantly (p-value 0.04) between sleep duration categories, allowing application of the Cox proportional hazards model. Covariates were selected based on factors affecting sleep [31] and lung cancer development [4,5]. To compute hazard ratios (HR) and confidence intervals (95% CI), we first built Model a, adjusted for age and examination year. Model a was further adjusted for cumulative smoking history (pack-years) (Model b), and for family cancer history and HPL scale scores (Model c). Model c was further adjusted for alcohol consumption, physical activity and BMI (Model d), for education years, marital status, working time and employment status (Model e), and for asthma and chronic bronchitis (Model f). Stratified analyses for the whole study population and for smokers were performed for the following reasons: 1) smoking is an established risk factor for lung cancer [32], and 2) we observed substantially more new lung cancer cases among smokers within the ≤6.5 h, 7-7.5 h and ≥8 h sleep groups (18, 18, and 32 cases, respectively) than among non-smokers (3, 6, and 4 cases, respectively). Analyses were conducted with SPSS software (IBM Company, SPSS Statistics version 19.0, United States). We carried out proportional hazards analyses with adjustments for possible confounders. As a result, an approximately twofold risk for lung cancer was observed in the ≤6.5 h and ≥8 h sleep groups, both in the whole study population and among smokers (Table 3). To assess the effect of current smoking, we performed a Cox proportional hazards analysis adjusted for age, examination year and current smoking in the whole study population. The association between ≥8 h sleep and lung cancer lost significance (HR 1.54, 95% CI 0.92-2.58), but remained in the ≤6.5 h sleep group (HR 1.82, 95% CI 1.01-3.28). We further examined the other sleep variables. Nightly breathing disruptions and frequent insomnia were not associated with the incidence of lung cancer (age-adjusted HR 1.36, 95% CI 0.75-2.47 and HR 1.16, 95% CI 0.72-1.89, respectively). These conditions accumulated in men sleeping 6.5 h or less (Table 1).
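The original analyses were run in SPSS; purely as an illustration of the nested adjustment strategy (the data frame and column names below are hypothetical, and the lifelines Python package is assumed), a Cox proportional hazards fit could be sketched as follows.

```python
import pandas as pd
from lifelines import CoxPHFitter

# df is assumed to hold one row per participant with follow-up time (years),
# the lung cancer event indicator, dummies for the sleep categories
# (7-7.5 h as reference), and the candidate confounders.
adjustment_sets = {
    "model_a": ["age", "exam_year"],
    "model_b": ["age", "exam_year", "pack_years"],
    "model_c": ["age", "exam_year", "pack_years", "family_cancer", "hpl_score"],
}

def fit_cox(df, covariates):
    cols = ["followup_years", "lung_cancer", "sleep_le_6_5h", "sleep_ge_8h"] + covariates
    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="followup_years", event_col="lung_cancer")
    return cph

# Example usage (requires a populated df):
# for name, covs in adjustment_sets.items():
#     fit_cox(df, covs).print_summary()  # hazard ratios and 95% CIs per model
```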
Summary of main findings
Sleep duration of less than 7-7.5 hours or more than 7-7.5 hours is associated with increased lung cancer risk in ageing men, irrespective of age, cumulative smoking history, family cancer history, night work, health behavior, sociodemographic characteristics and previous inflammatory lung diseases. However, adjusting for current smoking instead of cumulative smoking history limited the increased risk to men with nightly sleep of 6.5 h or less.
Comparison with previous literature
In this prospective population-based study we observed an association between sleep duration and a moderately increased lung cancer risk. Comparable associations have been found for colorectal cancer risk [22] and all-cause mortality [31]. A recently conducted meta-analysis of sleep duration and breast, prostate, endometrial, thyroid, ovarian and colorectal cancer risk [25] suggests that sleep duration and cancer are not associated. Nevertheless, in the subgroup analyses (3 studies included) the researchers found a higher colorectal cancer risk in the long-sleep group. However, the number of studies on sleep duration and cancer risk is small, as is the number of studies on different cancer types.
Smoking is a well-known risk factor for lung cancer [32], which was clearly demonstrated in our study as well by the substantially higher incidence of lung cancer among smokers. Smoking is also connected with depression [33] and sleep disorders [34]. Regular smoking impairs nightly sleep structure through the biological effects of nicotine [35], which was also observed in our study.
In addition to smoking, diet and body weight affect both sleep and cancer. From the sleep perspective, an ideal body mass index goes hand in hand with sleep of approximately 7 hours [36], whereas cancer can induce cachexia, including loss of appetite, weight loss and hypermetabolism [37].
Overall, a number of possible pathways underlying the association between sleep duration and cancer have been proposed. They relate, among other things, to clock-gene deregulation-induced tumorigenesis and cancer progression [38], immunosuppression due to sleep deprivation or restriction [39], and altered melatonin secretion patterns, such as timing [40,41], amount [42,43], and secretion duration [44]. Melatonin has oncostatic properties in tumors, including antioxidant effects, modulation of the cell cycle and apoptosis, inhibition of telomerase activity and metastasis, stimulation of cell differentiation, and prevention of chronodisruption (CD) [45].
Lung cancer patients frequently have CD [13,14], with severe alterations of neuroendocrine and immunological factors [11]. Notably, shift work involving CD has been classified as a probable (group 2A) carcinogen by the International Agency for Research on Cancer [46]. Shift work can lead to CD, including physiological, endocrinological, and sleep-wake cycle alterations, which may increase the risk for breast, endometrial, prostate and colon cancer [8,23].
From the sleep perspective, circadian rhythm sleep disorders (CRSD) include a variety of conditions such as time zone change, shift work sleep disorder, irregular sleep-wake rhythm, free-running disorder, delayed sleep phase disorder, and advanced sleep phase disorder [7,47]. CRSDs relate to both the timing and the duration of sleep; in other words, 'The essential feature of CRSDs is a persistent or recurrent pattern of sleep disturbance primarily due to alterations in the circadian timekeeping system or a misalignment between the endogenous circadian rhythm and exogenous factors that affect the timing or duration of sleep' [7]. We were interested in altered sleep duration, which can be one symptom of a circadian rhythm sleep disorder [7].
Inflammatory processes are one etiological factor in lung cancer development [48]. Sleep and immunity have a complex relationship, in which poor sleep may suppress immunity [39] and, in turn, chronic low-grade inflammation may induce sleepiness, fatigue and reduced sleep quality [49].
Strengths and limitations
Our study comprised a regionally representative sample of ageing men with a high participation rate. Follow-up information on lung cancer diagnoses was comprehensive. All cancers diagnosed in Finland since 1953 have been registered in the Finnish Cancer Registry, whose coverage is virtually complete, with no loss to follow-up [50]. We were able to measure various covariates, such as body mass index and depressive symptoms, which affect sleep and/or lung cancer risk. Excluding those with a cancer diagnosis at baseline or within the following two years also improved the assessment of the association, because exposures were measured before disease onset. To avoid confounding, users of hypnotics and sedatives were excluded at baseline.
Nevertheless, the following limitations need to be considered when interpreting the results. (i) Our observations cannot be generalized to women or younger men. (ii) We were not able to measure all known lung cancer risk factors, such as environmental exposures, or changes in sleep duration and health behavior during the follow-up time. (iii) Self-reported sleeping hours may lead to misclassification, because they tend to be greater than objectively measured hours [51]. Furthermore, sleep was measured at a single time point. (iv) The number of new lung cancer cases during follow-up was small.
(v) We were not able to assess the effect of melatonin intake. However, the use of melatonin was low in Finland during 1997-2007 [52].
Conclusions
Sleep duration of less than 7-7.5 hours or more than 7-7.5 hours is associated with increased lung cancer risk irrespective of age, health behavior, previous inflammatory lung diseases and sociodemographic status. However, adjusting for current smoking instead of cumulative smoking history limited the increased risk to men with nightly sleep of 6.5 h or less. The physiological factors underlying the association are complex and may relate to melatonin secretion patterns, low-grade inflammation in the cancer development process, or disruptions in circadian rhythmicity. | 2017-06-21T21:29:40.359Z | 2014-03-31T00:00:00.000 | {
"year": 2014,
"sha1": "05255969e5cf8dace292e5c7ba45567591fded2e",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-14-295",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94bdd421d9c14a337dde527288b69f2b675c1707",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251253538 | pes2o/s2orc | v3-fos-license | Sociodemographic disparities and contextual factors in obesity: updated evidence from a National Survey of Risk Factors for Chronic Diseases
Objective: To assess the association of sociodemographic and environmental factors with the obesity occurrence in Argentina from a sex- and age-comparative perspective and a multilevel approach. Design: Cross-sectional study based on secondary data from the National Survey of Chronic Diseases Risk Factors (CDRF) 2018, Argentina. Two-level logistic regression models stratified by sex and age were used. Setting: The nationwide probabilistic sample of the CDRF survey and twenty-four geographical units. Participants: 16 410 adult people, living in Argentine towns of at least 5000 people, nested into 24 geographical units. Sex and age groups were defined as young (aged 18–44 years), middle-aged (45–64 years) and older (65 years and older) men and women. Results: Single men (all age groups) and divorced/widowed men (aged 45 years or older) had a lower obesity risk compared to married ones. In the middle-aged group, men with higher education showed a lower risk than men with incomplete primary education. In young women, a marked social gradient by educational level was observed. A low-income level coupled with highly urbanised contexts represents an unfavourable scenario for young and middle-aged women. Having a multi-person household was a risk factor for obesity (OR = 1·26, P = 0·038) in middle-aged women. Contextual factors linked to the availability of socially constructed recreational resources and green spaces were associated with obesity among young adults. Conclusions: Socio-environmental determinants of obesity seem to operate differently according to sex and age in Argentina. This entails the need to address the obesity epidemic considering gender inequalities and the socio-environmental context at each stage of life.
In particular, the process of nutritional transition implies remarkable shifts in physical activity and in the diets of populations, with a rapid increase in the prevalence of overweight and obesity. These nutritional changes occur in the context of broader societal drivers (such as sociodemographic changes linked to urbanicity and rapid economic development), which could mediate differential exposure to the causes of all forms of malnutrition (9). The Social Determinants of Health approach (10) explains that attitudes, beliefs or behaviours represent only the most downstream determinants in the causal pathways influencing health, which are shaped by more upstream determinants; overall, they reflect the economic and social resources and opportunities for improving health (11). Specifically, it has been explained that the upstream determinants that influence obesogenic behaviours may simultaneously manifest in the form of tangible characteristics of the built or natural environments (which determine what is available), as well as less tangible features of our economic, political and sociocultural environments (12). Based on this framework, we assumed that upstream social factors underlie obesity-related behaviours. Additionally, from a multilevel perspective (13), we recognise that certain contextual characteristics of the built and natural environments (defined at a higher level) could operate together with certain individual social features (related to structural mechanisms of social stratification such as income, education, occupation and sex) (10) as socio-environmental factors related to obesity occurrence (defined at a lower level). Especially in developing countries, there is a lack of understanding about the simultaneous individual- and contextual-level factors that drive regional and other subnational disparities (e.g. by sex and age group) in obesity burden.
Overall, estimates indicate that the age patterns of obesity differ between men and women (14). Although the ways in which the nutrition transition affects the diets of different age groups are not well described in the scientific literature, it is recognised that several dimensions of social disadvantage exert differential effects across the life course, impacting on food security and, in turn, on body composition or nutritional status (15). It has been suggested, for example, that the capacity to resist adverse societal influences could play a role in the result of malnutrition, including overweight (9), which could be important among more socially vulnerable population groups (possibly elderly or young women). However, the evidence on the mechanisms explaining age- and sex-related social disparities in obesity is still unclear. In the Latin American and Caribbean region, the differentials by age group deserve special attention given the ongoing and accelerated process of demographic ageing (16).
In Argentina, updated and further contextualised evidence is needed to identify the socio-environmental determinants of obesity in specific population groups at a national scale. Most of the national population-level studies on adult obesity cover up to the year 2013 (17,18); other works on lifestyles and sociodemographic factors related to obesity in this country, though interesting, do not have national representativeness (18-21). Interestingly, some studies have identified diverse socio-environmental patterns and demographic factors accounting for the sex-specific geographical pattern of obesity-related chronic diseases in this country (22,23). Thus, areas with a higher mortality risk of CVD in men have been directly associated with the smallest urban scale coupled with a higher level of poverty, in contrast to the effect observed for women (22). Besides, specific geographical patterns with disadvantageous socio-environmental features were identified in Argentina accounting for the differential burden of cancer mortality between sexes in this country (23).
The National Survey of Chronic Diseases Risk Factors (CDRF) has been conducted every 4-5 years since 2005 in Argentina from a probabilistic population-based sample. This survey includes sociodemographic and health information from an urban target population aged 18 years or older living in Argentine towns with at least 5000 people. For the first time, the 2018 edition of this survey included anthropometric measurements (not self-reported data, as previously); the present study analyses the obesity scenario using this representative country dataset. To our knowledge, ours is the first research work that examines up-to-date information on obesity in this country to explain social disparities and contextual factors underlying obesity distribution among different population groups by sex and age. In this population-based study, we assessed the association of individual-level social characteristics and environmental factors, simultaneously, with the obesity occurrence, using a multilevel modelling strategy over the latest available CDRF survey, 2018.
Study design and data sources
This study is based on secondary data collected from the CDRF survey carried out in 2018 by the National Health Ministry of Argentina and the National Institute of Statistics and Census (known by its acronym in Spanish, INDEC). The CDRF is a nationally representative face-to-face survey conducted by trained interviewers, based on a rigorous probabilistic sampling design. The 2018 survey included anthropometric measurements of height and weight taken by trained health personnel. The instruments (portable electronic weighing scale and portable height measuring board) and the techniques used for the height and weight measurements follow the STEPS protocol of the WHO (24), endorsed by the Ministry of Health and Social Development of the Nation, Argentina.
The 2018 CDRF databases consist of a population-based sample of 29 224 persons aged 18 years and older living in Argentine towns of at least 5000 people. The sampling design of the CDRF was probabilistic and multistage. At the first stage, sampling selection was based on the Master Urban Sample of Dwellings of the Argentine Republic (MUDAR). The MUDAR has a complex sample design and is used by the INDEC as a framework for the selection of private dwellings for all its national surveys. In the CDRF, sampling units were selected from the MUDAR list by means of a stratified probabilistic design (by sociodemographic variables) and systematic sampling (proportional to the total number of occupied dwellings). For the definitive sample of dwellings in the CDRF, a systematic selection of segments of five contiguous dwellings (within the MUDAR list and cartography) was applied to obtain the final sample of dwellings. At the time of the survey, the interviewer selected with equal probability a person aged 18 years or older, assisted by a random algorithm. The final size of the sample was 29 224 individuals for the application of the first part of the questionnaire (self-reported data), covering all jurisdictions in the country. For the second step, which included anthropometric measurements, a probabilistic subsample of 75 % of the previously selected dwellings was taken (n 16 577 individuals). More methodological details of the CDRF survey have been published in official reports of the National Health Ministry (25).
In our study, a subset of 16 410 persons with anthropometric measurements was extracted, after excluding 167 observations with incomplete data. For multilevel analyses, the individual-level dataset (n 16 410) was nested into the area-level information about environmental characteristics (n 24 geographical units, corresponding to administrative divisions). Two area-level variables were selected: the index of socially constructed recreational resources (SCRR) and the index of green spaces. The SCRR index is a score (calculated at provincial scale) that considers the availability of certain environmental conditions and cultural attractions such as urban aesthetics, urban heritage sites, cultural centres, shopping malls, sports centres, among others. The green space index (score) represents the availability of green spaces measured through land coverage by open green spaces or natural areas. Both indexes were part of the latest available Quality of Life Index database (26) , which incorporates data for Argentina at the county level from several sources (official reports, field studies and satellite imagery) into a Geographic Information System. This dataset and its methodological aspects were published elsewhere (23,26) .
Statistical analyses: multilevel modelling
Due to the hierarchical structure (spatial clustering) of the data (16 410 subjects nested into 24 geographical units), two-level logistic regression models were used to estimate the association between selected individual- and area-level covariables and obesity occurrence (dichotomous outcome, yes/no). Three age groups were defined as young (aged 18-44 years), middle-aged (aged 45-64 years) and older (65 years and older), considering the criteria of the INDEC and the MeSH descriptor of middle-aged provided by the US National Library of Medicine (27). Thus, the sex and age groups defined were set up as strata in the multilevel model adjustments. Obesity was defined as having a BMI ≥ 30 (yes/no) following the WHO criterion (1). BMI was calculated using measured anthropometric data.
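As an illustrative sketch of how the outcome and the age strata described above could be derived from measured data (pandas assumed; the column names and values are hypothetical, not taken from the CDRF microdata):

```python
import pandas as pd

# Hypothetical rows with measured weight and height.
df = pd.DataFrame({
    "weight_kg": [62.0, 95.5, 88.0],
    "height_m":  [1.58, 1.70, 1.82],
    "age":       [23, 52, 70],
    "sex":       ["F", "M", "M"],
})

df["bmi"] = df["weight_kg"] / df["height_m"] ** 2
df["obesity"] = (df["bmi"] >= 30).astype(int)            # WHO criterion, BMI >= 30
df["age_group"] = pd.cut(df["age"], bins=[17, 44, 64, 120],
                         labels=["young", "middle-aged", "older"])
print(df[["sex", "bmi", "obesity", "age_group"]])
```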
The analyses were performed in sequential steps, from a variance component model (multilevel 'empty' model, without covariates) to an adjusted two-level model that included all the individual- and area-level variables. Model selection was based on the Akaike information criterion as well as the interpretability and comparability of results among the different population groups. Several individual-level variables that inform about sociodemographic characteristics of the participants were considered in the models: marital status (married; divorced/widowed; single), household type (one-person household; a couple without children; multi-person household including a couple with children or other persons at home), education (highest level of education attained: incomplete primary education or lower; primary education; high school; higher education), income level (higher, intermediate or lower if self-reported incomes are at the highest, second-to-fourth or first quintile of the income distribution, respectively) and geographic location of residence (city of residence classified by population size as: small cities of 5000-149 999 people, or big/middle-sized cities of 150 000 or more people). Since previous findings have indicated a joint effect of poverty and urbanisation on health statistics in Argentina (16), additive interaction terms between income and geographic location were also considered. Finally, lifestyle-related variables such as physical activity (intense/intermediate/low), tobacco consumption (non-smoker/former smoker/smoker), and fruit and vegetable consumption (above or below the five portions/day recommendation by the Food Guide for the Argentine Population) (28) were included as adjustment variables. These data were gathered by a structured questionnaire administered by trained personnel. This instrument included questions about tobacco consumption (current consumption frequency and previous consumption of at least 100 cigarettes, among others) and the usual frequency of fruit or vegetable intake (times/week and daily portions by self-report), and a section with the International Physical Activity Questionnaire (IPAQ) for physical activity assessment (29).
The area-level variables included in the model were the aforementioned contextual indexes (continuous variables at the provincial scale) of SCRR and of green spaces; both were calculated as the population-size-weighted average of the indexes reported at the county level (26). The highest values of these indexes reflect the greatest availability of SCRR or green spaces.
The linear predictor of the general mixed logistic model was

logit[Pr(y_ij = 1)] = β_0 + β_1 x_1ij + ... + β_p x_pij + γ_1 w_1j + ... + γ_l w_lj + ξ_j

where y_ij is the response variable (obesity yes/no), x_1ij to x_pij and w_1j to w_lj are explanatory variables with fixed effects (linear coefficients), corresponding to the individual- or area-level covariates, respectively. The term ξ_j is a random intercept representing the clustering variance structure and, therefore, accounting for the geographical variability in the estimation process. All models were adjusted for lifestyle-related variables. Thus, female-only and male-only models were fitted and the estimated association measures (OR) plotted. Then, six models were constructed for each sex and age group combination (young, middle-aged, and older men and women). All analyses were performed using Stata v14.
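A minimal numpy sketch of this linear predictor follows (the coefficient values, array shapes and intercept are illustrative only, not estimates from the survey):

```python
import numpy as np

def obesity_probability(x_ind, beta, w_area, gamma, xi_j, beta0=0.0):
    """Two-level logistic model: fixed effects for individual (x) and
    area-level (w) covariates plus a random intercept xi_j for the
    geographical unit of residence."""
    eta = beta0 + x_ind @ beta + w_area @ gamma + xi_j   # linear predictor
    return 1.0 / (1.0 + np.exp(-eta))                    # inverse-logit link

# Illustrative call: three individual covariates, two area-level indexes.
p = obesity_probability(x_ind=np.array([1.0, 0.0, 2.0]),
                        beta=np.array([0.3, -0.2, 0.1]),
                        w_area=np.array([0.5, 0.8]),
                        gamma=np.array([-0.15, -0.10]),
                        xi_j=0.05)
print(round(float(p), 3))
```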
Results
This study examined obesity occurrence among 16 410 adult people (58 % women and 42 % men) living in Argentina, using data from the CDRF survey. In 2018, over half (51·0 %) of the participants were young adults, and about 30·1 % and 18·9 % were middle-aged and older adults, respectively. Table 1 presents the individual-level characteristics of the study participants, for the total sample and stratified according to age group and sex. As shown, about 46·8 % of the participants reported a low level of physical activity, reaching values of 54·7 % and 61·8 % in older men and women, respectively. The percentage of people with low fruit and vegetable intake was around 90 % in all age groups and both sexes. The percentage of smokers (21·8 % in the whole sample) was higher in men than in women, especially in those under 65 years of age. In older adults, the percentage of former smokers was more than double in males (45·4 %) compared with the female group (19·4 %). Most participants were married (49·5 %) and lived in a multi-person household (61·2 %). However, the distribution of subjects by marital status or household type shows differences between sexes. These differences were stronger in the elderly and were particularly related to the categories married or divorced (marital status variable), and one-person household (household-type variable). About 36 % of the total sample had completed primary education (with higher prevalence in men than women, especially in the 18-44 years age group), while 17·3 % reported having higher education. In all age groups, the percentage of people with higher education was higher in women than in men. Over half (59·9 %) of the participants had an intermediate income level. Particularly at higher-income levels, men always show higher values (%) compared to women, with the difference being more noticeable in the younger groups. Regarding geographical location, approximately 55-60 % of participants from all age groups and both sexes lived in big/middle-sized cities (Table 1).
Participants (in the whole sample, regardless of sex) were homogeneously distributed across the categories of normal weight (30·6 %), pre-obesity (34·6 %) and obesity (33·3 %), with just 1·5 % of people being underweight (Table 1). Underweight was more prevalent in the younger groups, and especially among young women (2·9 %). Over 65 % of the total participants (71·4 % in men and 65·2 % in women) had a BMI of 25 or more (pre-obesity or obesity) (Table 1). Figure 1 presents the weight status distribution by sex and age group. As this figure shows, the most frequent weight status category was pre-obesity (BMI 25-29·9) in men (all age groups) and obesity (BMI ≥ 30) in women (middle-aged and older age groups). In the whole sample, obesity prevalence was 33·3 %. In both men and women, the highest values were concentrated in the middle-aged groups (Table 1). However, there were differences in obesity prevalence between sexes, mainly among older adults (41·9 % v. 36·9 % in women and men aged 65 years or more, respectively) (Table 1). Figure 2 depicts the adjusted OR of obesity estimated by the female- or male-only multilevel models (for all ages together). Middle-aged adults were 1·7 to 2 times more at risk of having obesity than young men or women. The elderly category also showed a similar OR (1·8) in women. Other individual-level characteristics such as living as a couple or in a multi-person household, and living in a small city with a lower-income level, showed opposite tendencies in women (direct association with obesity) and men (inverse association). A lower obesity risk was found for those who were divorced/widowed or single (v. married). In both sexes, there was a social gradient of the education level (people who are more advantaged in terms of education had lower obesity risk than those who are less advantaged); however, the risk reduction at a higher educational level was stronger in women than in men. Furthermore, higher SCRR and green spaces indexes were inversely associated with obesity in both groups (Fig. 2).
Although Figure 2 shows that differences between sexes, without discrimination by age group, are slight, interesting results emerged when analyses were performed by sex and age groups. Crude and adjusted OR estimates by multilevel models stratified by age group among men and women separately are displayed in Tables 2 and 3. After controlling for lifestyle covariates, it was found that compared to married men, single (in all age groups) and divorced/widowed men (in the older and middle-aged groups) had lower risk of obesity (Table 2). In the female-only model (Table 3), the association between obesity occurrence and being single (lower risk compared to married) was significant among young and middle-aged women. There was a direct association between obesity and living in a multi-person household (v. one-person household) in middle-aged women (OR = 1·26, p = 0·042). Besides, a social gradient by education level was accentuated in the young women group (Table 3). In particular, higher education showed a significant inverse association with obesity in all the age groups for women, as well as in middle-aged men (Tables 2 and 3). The association of income level, coupled to geographical location (interaction term), with obesity was significant only for women (Table 3). Specifically, we observed that the income level factor is not independent of geographical location among young and middle-aged women; in these population groups, a lower-income level coupled with a highly urbanised context seems to be an unfavourable scenario related to the obesity outcome. An inverse association with obesity was observed for SCRR and green space indexes (lower risk as score increases), which was significant in young groups of both sexes (Tables 2 and 3).
The obesity prevalence estimated from measured data was about 7 percentage points higher than that based on self-reported BMI (30). The high levels of obesity estimated for Argentina in 2018 are consistent with the growing burden of obesity reported in the Latin American region (3,14). The changes in the nutritional profile of Latin American populations in recent decades have largely been attributed to the process of the nutrition transition (3), which occurred in parallel with several socio-economic and demographic shifts (e.g. changes linked to the urbanisation and globalisation processes) in most regions of the world (8). Interestingly, a recent study in Argentina reports that sociodemographic factors (such as urbanisation and poverty level) play a major role in shaping diverse nutritional profiles across the territory, which configure a complex and heterogeneous socio-nutritional scenario (17).
Overall, existing studies about the association between marital status and overweight indicate that the former appears to influence obesity more strongly among men than women (31,32) . The lower obesity risk in single men has been reported by other studies (19,33,34) . This could be related to the lifestyle of married men that may lead to a more stable eating pattern, compared to unmarried people (31) . Besides, a larger body size is likely to be valued as a sign of physical dominance and prowess for men (35) , if we consider the expected social roles for men entering marriage in some social groups. It is important to note that the lower obesity risk observed in single men in our study was also present among single females, particularly in the younger groups. In this case, this finding may reflect certain body weight norms and expectations in our society linked to the female beauty ideal, more solid at certain stages of life. An exaggerated thin body ideal has been recognised as a distinctive characteristic of the Argentine population (36) , especially among women.
We also found a significantly lower risk for the divorced/ widowed category in the middle-aged and elderly men groups (v. married). This could be explained, in part, by the fact that marital disruption (i.e. being widowed or divorced) has been associated with poor physical health outcomes (32) , including weight loss linked to psychosocial frailty or high-risk alcohol or tobacco consumption. Additionally, it has been proposed that the marital role provides support and resources which may influence eating and physical activity habits (31) . Thus, the lower obesity risk observed for divorced/widowed men older than 44 years of age (v. married ones) may indirectly reflect their potential lack of social support as observed, for example, in eating habits. The importance of social support for health is well recognised (37) , as well as the role of women as the ones mostly responsible for food preparation in families (35) . In Argentina, a national study on older adults highlights that women have a high physical and psychological burden associated with such activities as the care of elderly or family members (38) .
Education is a recognised individual-level socioeconomic factor related to obesity risk. Overall, there is a general agreement that the relationship between education and obesity is often more consistent among women (34,39,40) . Additionally, we found a stronger educational gradient in the obesity burden among the younger group of Argentine women. This finding is in line with previous studies carried out in this country in 2005, which reported a lower obesity prevalence with better education level among women aged 20-49 years (41) . Also in other Latin American countries, lower obesity prevalence was observed in women with higher education among this age group (42,43) . Particularly from a longitudinal study, a reversal of the inverse association between education level and obesity risk during ageing was observed, especially in women. These authors propose that women with higher education put more effort than men into controlling their body weight, to fit themselves in the labour market and to reach a high social position; then, approaching old age, women try to get rid of the social pressures towards thinness (44) . A sociocultural pressure on females to achieve the desired body image, especially among women of high socio-economic status suggested by other authors (45) , can explain, in part, our findings in Argentina.
Since education has a role as part of the complex phenomenon of socio-economic stratification, another aspect to consider is that people with higher education may have greater job opportunities and, therefore, better access to physical activity facilities and healthy eating. Furthermore, education can be interpreted as a proxy for 'health literacy' (39,40), which could improve an individual's capacity to adequately address health-related issues, including overweight (40,46). These could be underlying mechanisms that explain, in part, the relationship observed between obesity and higher education, especially in middle-aged men and elderly women.
It has been highlighted that socio-economic inequalities within cities in developing countries are high and affect the social distribution of health outcomes (47) . In Argentina, previous evidence indicates that the socio-economic patterning of chronic disease risk factors, including obesity, was modified by urbanicity (48) , and that poverty and urban scale are associated contextual variables influencing the distribution of non-communicable diseases mortality (22) . In our study, a higher risk of obesity was observed among women under 65 years of age living in big or middle-sized cities with a low family income level. Income generally reflects the availability of economic and material resources and, thus, it influences dietary quality (49) . These results may reflect poor access to a healthy diet and low opportunities to adopt healthy behaviours, in women with a low-income level, especially if they live in urbanised areas. Interestingly, Oliveira et al. (50) indicated that there are several aspects (physical, economic, political and sociocultural) of the environmental factors that would independently affect men and women, and that obesogenic influence of the environment may differ in small and medium-sized cities, when compared to large cities.
According to the WHO, urban poverty and unhealthy living conditions themselves are social determinants of health that can disproportionately affect certain vulnerable subgroups such as women (51). Moreover, considering their potential family caregiver role (38), especially under unfavourable economic conditions, Argentine women may reduce the time devoted to the care of their own health (impacting on their weight status). This could also explain, in part, the higher obesity risk we observed in middle-aged women living in a multi-person household.
Considering environmental factors, there is suggestive evidence about the relationship between green spaces and weight status or obesity-related health indicators. Several studies found evidence that these relationships varied by factors such as age and socio-economic status, although the mechanisms through which green spaces may influence health are not completely understood (52) . Evidence suggests that there is a link between green spaces and obesity, as the former would offer enhanced opportunities for physical activity (52,53) , even within urbanised contexts (54) . However, the findings on this matter are not conclusive, especially in developing countries. In Argentina, agriculture and tourism are motors of regional economic development; agricultural and touristic activities usually take place in locations where natural resources are highly available. Thus, we additionally argue that the environmental indicator used here may represent proxy variables of socio-economic conditions at the macro-level.
From a gendered perspective, MacBride-Stewart et al. (55) identify key dimensions to consider in the study of the interconnections between health and nature, including accessibility, availability and usability of green spaces, as well as the boundaries (symbolic/material) that construct differential relationships between natural spaces, sex and health (55). Consistently, another work (56) also indicates that the perception and use of green spaces, as well as green space attributes, can explain the different associations with BMI that the authors observed among age- and sex-specific adult groups. Given our finding of an association between area-level variables and obesity occurrence, those variables may also be relevant aspects to explain age group differences in Argentina.
Particularly, the role of the contextual conditions summarised in the SCRR index (such as urban aesthetics/urban heritage, cultural amenities, shopping malls and sports centres) is underexplored in obesity research. However, the notion of SCRR could be closely related to the idea of built environments, defined as the human-modified space in which people conduct their daily lives; their influence on obesity has been more extensively studied (57). In general, there is consensus that the built environment plays a key role as a barrier or enabler to physical activity and as a mediator in access to healthy food (54,58). Interestingly, an exhaustive review from the perspective of developing countries (50) concludes that health-promoting built environments can have a profound influence not only on the population levels of physical activity but also on its wellbeing and equity, both socio-economic and age-sensitive. Thus, our finding of a lower obesity risk associated with contexts with greater SCRR in young people could be related to a mechanism mediated by the availability, perception and use of sports centres or other amenities that were part of the SCRR index. In a Brazilian study on obesity, the authors reinforce the idea that a favourable decrease in 'obesogenic' traits in the urban environment is possible when physical structures are planned to facilitate physical activity (50). Alternatively, we propose that the distribution of the SCRR variable could depict a sort of 'regionalisation' (geographical division) underlying socio-economic and cultural characteristics of the populations, or could represent different patterns of land use. Consistent evidence shows that a better mix of land use (residential, commercial, institutional, industrial, recreational and agricultural) is generally associated with a lower obesity burden (12), although there is a general lack of insight into the pathway by which land use mix impacts obesity. Further research would be useful in this regard to better understand the observed age differences.
A major strength of this research is its updated source of data and large sample size based on a rigorous probabilistic sampling design, which ensures its national representativeness. Indeed, the matching between the age group distribution in our sample and in the national population projections for 2018 by the INDEC is notable. Besides, as far as we know, this is the first study that uses measured anthropometric data to identify social inequalities in obesity distribution from the most recent CDRF survey. Moreover, it provides an analytical example of how the multilevel epidemiological framework can be used in this field of study. However, there are limitations to consider. First, our study was based on an urban setting and, thus, rural populations were not represented. However, about 91 % of the Argentine population lives in urban areas (22). Second, the reliability of income measures may be debatable in developing countries (59), although several questions regarding household income were designed to minimise response bias in the CDRF. Also, the use of BMI as a measure of obesity could be discussed, since this index should preferably be accompanied by other body composition measures in some specific population groups, such as athletes and the elderly. However, the WHO recognises that BMI provides the most useful population-level measure of overweight and obesity (1). Finally, considering that exposure measurement error may occur in the characterisation of green spaces in epidemiological studies (53), and that other unmeasured macro-level factors may affect conclusions, the interpretation of macro-contextual association measures should be conservative.
To conclude, our findings show high levels of obesity in Argentina in 2018 and an unequal distribution of their socio-environmental determinants. Specific social and environmental factors seem to operate differently according to sex and age groups in this country. Thus, comprehensive interventions against the obesity epidemic addressing gender inequalities and socio-environmental disadvantages at each stage of life are needed. Our results also showed a favourable relationship (inverse association) between obesity risk and contextual characteristics (such as availability of green spaces) among the younger groups. This target population could be considered in the design of interventions aimed at achieving healthy environments. Finally, further interdisciplinary research on sociocultural pathways linked to obesity within urban contexts in developing countries is needed. | 2021-12-21T06:22:51.681Z | 2021-12-20T00:00:00.000 | {
"year": 2021,
"sha1": "b6d40862be75d80323a13961d451e6e59166636e",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/E263D1F4BEFF46C8260CB59A4385B25A/S1368980021004924a.pdf/div-class-title-sociodemographic-disparities-and-contextual-factors-in-obesity-updated-evidence-from-a-national-survey-of-risk-factors-for-chronic-diseases-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "7eb35ae8052473469fe7069d0277ae30dd30878e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261717181 | pes2o/s2orc | v3-fos-license | Medico legal aspects of Medical Malpractice living cases in Cairo and Giza governorates from 2014-2015
Introduction: Medical malpractice is all the behaviors and attitudes that result from unsuitable interaction of health professionals functioning within different disciplines, which might also affect the percentage of mortality within health authorities or specific societies. Aim of the study: The objectives of the present study were to evaluate the medico-legal aspects of medical malpractice living cases in Cairo and Giza governorates. Also, the study aimed to understand the most frequent underlying causes of medical errors and how they happened. Besides, it aimed to determine the outcome of medical malpractice living cases, i.e., whether these errors led to permanent infirmity or were completely cured without disabilities. Subjects and Methods: The current retrospective descriptive observational study recruited the medical malpractice living cases in Cairo and Giza governorates. We used all the available records and reports of the Forensic Medical Authority from the 1st of Jan 2014 to the 1st of Jan 2015. That included 99 subjects with alleged medical malpractice living cases. The inclusion and exclusion criteria were determined according to the results of forensic examinations. Results: The findings showed a difference in the age of alleged medical malpractice living cases. Participants aged less than a year represented 4% of the living cases, while 14.1% had ages ranging from 1-20 years. Cases aged 20-40 and 40-60 years represented 48.5% and 20.2%, respectively, while 13.1% were older cases (>60 years old). Most of the cases were females. The most affected age group was 20-40 years old, while the least affected group was less than a year old. Regarding the medical specialties, general surgery had the most alleged living cases. Conclusion: Tightening control over the various health institutions, to ensure that any case of medical negligence is monitored, is required. Also, the penalties must be increased for all health practitioners accused of cases of willful medical negligence.
Introduction
Providing a medical service is a complex process dependent on the successful interaction of health professionals functioning within different disciplines [1]. The specter of medical error has long been present but has become more evident in recent years.
During the past decade, healthcare quality and patient safety have emerged as high-priority targets for improvement [2].
Negligence in the medical profession may lead to mild or severe injuries, which sometimes might cause death [3].
Injuries caused by medical interventions received widespread interest, worldwide. That probably is due to damages that may occur as direct damages, such as lost earnings, medical expenses, and future medical and rehabilitation costs, or indirect damages, such as pain and emotional distress, and many others [4].
Subjects
The current retrospective descriptive observational study targeted the alleged medical malpractice living cases from the available records and reports of the Forensic Medical Authority in Cairo and Giza governorates. It covered the period from Jan 2014 to Jan 2015 and included 99 living cases with alleged medical malpractice.
Permission to implement the study was obtained by an official letter introduced to the Head and the chief medical officer in the department of forensic medicine, the Forensic Medical Authority, Cairo governorate.
Methods
The data was obtained from all living cases suffered from alleged medical malpractice. The data were recorded and reported in the Forensic Medical Authority, Cairo governorate, in 2014-2015.
The obtained data were screened for the study variables. Data were computerized using Origin 5 v. 15 (https://www.originlab.com/origin). All information was verified for completeness and logical consistency and translated into English to facilitate data manipulation. Simple statistics such as frequency, percentage, arithmetic mean, and standard deviation were used.
Statistical analysis
Analysis of data was performed using SPSS v. 25 (Statistical Package for the Social Sciences) for Windows. Qualitative variables were described as numbers (No.) and percentages (%). Categorical data were compared using the Chi-square test to assess the statistical difference between groups. The significance of the results was assessed at a P-value < 0.05.
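For illustration only (the 2x2 counts below are invented and do not come from the study), a Chi-square test of independence can be reproduced with scipy as follows.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = sex (male/female),
# columns = permanent infirmity (yes/no); counts are made up for the example.
table = [[20, 24],
         [37, 18]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")  # significant if p < 0.05
```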
Results
The current retrospective study was conducted on data files from the Department of Forensic Medicine, Ministry of Justice in Cairo, to study medical malpractice living cases in Cairo and Giza governorates from 2014 to 2015. The total number of studied living cases was 99.
Discussion
The current retrospective study aimed to determine the medical malpractice living cases in Cairo and Giza governorates during 2014-2015.
The results, in which the majority of living cases were in the middle age groups (2nd-4th decade: 48.5%; 4th-6th decade: 20.2%), agree with a retrospective study published in 2014 from the Cairo and Giza regions [6], which noted that the majority of patients were younger than 45 years, while neonates accounted for 14.5% of all cases. Most of the cases were in the middle age groups because fertile females increase their number, followed by the 6th-decade age group, which is more vulnerable to medical complications.
Of the alleged living cases, 44.40% were males and 55.60% were females. Most of the cases were females because obstetric cases increase their number. These results agree with a study from Tanta University [4], which reported that females represented 64.7% of cases and males 35.29%. The findings disagreed with another study from Japan [7], which demonstrated that 51.5% were males and 48.5% were females.
Regarding marital status, married cases represented 59.6% and single cases 40.4%. That might be due to the increased litigation involving pregnant women and obstetric cases. Regarding the authorized prosecution variations of the studied living cases, Cairo had the most frequent cases (51.5%), followed by Giza (47.5%). That coincides with a paper published in the Egyptian Journal of Forensic Sciences [8], which reported that the Cairo governorate had the most frequent cases, followed by Giza.
Regarding the place of reported medical errors, private hospitals had the most cases (72.73%), followed by public hospitals (17.17%). That might suggest that private hospitals need more restrictions and monitoring.
These results disagreed with a descriptive study from Turkey [9], which noted that state hospitals ranked first (42.4%), followed by social security hospitals (18.5%) and private hospitals (12.0%). Also, the results disagreed with another study from Turkey [10], which noted that most cases were reported in state hospitals, followed by private hospitals and university hospitals.
Regarding the qualifications of the accused physicians in living cases, consultants had the most accusations in the study (39.4%), followed by specialists (30.3%). Some cases accused the whole medical department with all its members, and these represented 17.2% of all cases. Resident doctors represented 11.1% of cases. Most accusations involved consultants because most patients, in general, seek consultant doctors because of their experience.
These results coincide with the study from Tanta University [4], which reported that 47.1% of defendant physicians were consultants, 35.3% were specialists, in 11.8% the whole medical team was accused, and 5.9% were residents.
Regarding the different medical specialties of the studied living cases, the general surgery specialty had the most medical errors at 24.2%, followed by ophthalmology at 16.2%, obstetrics/gynecology at 16.2%, and orthopedics at 15.2%. These results disagree with a study in Saudi Arabia, which noted that obstetrics had the most cases at 27%, followed by general surgery and subspecialties at 17% each, internal medicine at 13%, and pediatrics at 10% of the claims [11].
Regarding the types of medical errors in living cases, negligence represented the most accusations (34.78%), followed by complications within surgical therapy (30.43%), then wrong therapy (17.39%), and mistakes in care and medication errors (8.7% each). That means that doctors need to follow specific protocols and receive more training on medical errors and how to avoid them.
These results agree with a study from Germany [6], which reported that most accusations were due to negligence (48.5%) followed by complications within surgical therapy (33.1%), wrong therapy (17.2%), medication error, adverse event due to drug therapy (12.5%) and mistakes in care (7.2%).
Also, these results agree with the study from Wuhan, China [12], which reported that most accusations were due to negligence (50.5%), followed by complications within surgical therapy (18.6%). These results disagree with the paper published in the Egyptian Journal of Forensic Sciences [8], which reported that improper performance of surgical procedures was the most frequent cause (32.3%), followed by inadequate postoperative follow-up (20%).
Regarding the incidence of permanent infirmity in the studied living cases, 57.6% of them had different permanent infirmities and 42.4% had no permanent infirmities, which means that most of the cases had serious complications. These results disagree with the retrospective study from Germany [13], which noted that most of the cases had no permanent infirmity, and also disagree with a paper published in 2003 [14], which noted that 39% of cases had permanent infirmity and 61% had no permanent infirmity.
Regarding permanent infirmities of different specialties in the studied living cases, neurological and ophthalmological infirmities were the most types (26.32%) for each. Both specialties have the highest percentages of permanent infirmities among living studied cases because they are very sensitive specialties that work on very sensitive organs and systems in which any medical mistake may lead to permanent infirmities more than other specialties.
Regarding the relation between the age of living cases and results of forensic examination, the most positive cases were in the age group from 1 year to 2nd decade (35.7%). These results disagree with the study from Wuhan, China [12], which demonstrated that the most positive cases were in the age group 3rd to 5th decade.
Regarding the relationship between different specialties and the results of forensic examinations, all cases of the Emergency specialty were positive (100%), followed by ENT (66.7%) and radiology (50%). These results disagree with the retrospective study from Germany [13], which noted that psychiatry had the most positive cases and that the specialty with the most negative cases was pediatrics (11.5%), and also disagree with the study from Wuhan, China [12], which noted that general practice had the most positive cases, followed by surgery and internal medicine.
Regarding the relation between the incidence of permanent infirmity and the results of forensic examinations, 33.3% of the studied living cases that ended with permanent infirmity proved to be positive, while 9.5% of the cases that did not end with permanent infirmity were positive. These results coincide with the study from Tanta University [4], which reported that 5.88% of cases with permanent infirmity were positive.
Conclusion
A significant number of the studied living cases are females. The most affected age group is the 2nd to 4th decade while the least affected age group is less than 1 year. Private hospitals had the most medical malpractice living cases. Regarding the medical specialties, general surgery had the most alleged living cases and most of the accused doctors were consultants. Tightening control over the various health institutions, to ensure that any case of medical negligence is monitored, is required. Also, the penalties must be increased for all health practitioners accused of cases of willful medical negligence. | 2023-01-25T16:07:40.531Z | 2023-01-22T00:00:00.000 | {
"year": 2023,
"sha1": "eddea94003747d8152810648b563a58ff4edb346",
"oa_license": "CCBY",
"oa_url": "https://fumj.journals.ekb.eg/article_281413_7bec32cdc38c10c3a4de09f24a4224e8.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a6de030eafd458cc6823600a7cb35e6401775cc9",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"extfieldsofstudy": []
} |
238760172 | pes2o/s2orc | v3-fos-license | Numerical Simulations of the Internal Ballistics of Paraffin–Oxygen Hybrid Rockets at Different Scales
Hybrid rockets are considered a promising future propulsion alternative to solid or liquid rockets for specific applications. In order to raise their technology readiness level, it is important to perform predictive numerical simulations of their internal ballistics. The objective of this work is to describe and validate a numerical approach based on Reynolds-averaged Navier–Stokes simulations with sub-models for fluid–surface interaction, radiation, chemistry, and turbulence. Particular attention is given to scale effects by considering two different paraffin–oxygen hybrid rocket engines and a simplified grain evolution approach from the initial to the final port diameter. Moreover, a mild sensitivity of the computed regression rate to paraffin's melting temperature, surface radiation emissivity, and Schmidt numbers is observed. Results highlight the increasing importance of radiation effects at larger scales and pressures. A numerical rebuilding of regression rate and pressure is obtained with simulations at the time-space-averaged port diameter, producing a reasonable agreement with the available experimental data, but a noticeable improvement is obtained by considering the grain evolution in time.
Introduction
Research in hybrid rocket engines is gaining momentum in recent years owing to experimental and numerical advances, and to an increasing number of test flights [1][2][3][4][5][6]. Hybrid rockets, which are propulsion devices burning a solid fuel and a gaseous or liquid oxidizer, are considered a promising alternative to liquid or solid propulsion for applications, including sounding rockets, space engines, auxiliary power units, and boosters [7,8]. They can be preferred to solid rockets owing to their higher specific impulse, increased safety, and throttling capabilities, and to liquid rockets because they are cheaper and simpler, and with higher average propellant density.
Paraffin-based hybrids are currently under investigation by the community due to their higher regression rate with respect to conventional fuels. However, there is still a relatively incomplete understanding of the relevant physical phenomena, including fluid–surface interaction, radiation, combustion, and turbulence, which occur simultaneously inside paraffin-based hybrid rockets. In fact, numerical simulations of the internal ballistics of paraffin-based hybrid rocket engines are becoming increasingly important to raise their technology readiness level. The most used numerical approach in the literature is based on Reynolds-averaged Navier-Stokes (RANS) simulations employing different sub-models [6], which either rely on a prescribed fuel mass flow rate [9][10][11][12][13] or employ a parametric gas–surface interaction model [14,15]. Radiation effects on the regression rate, which can be important in hybrid rocket engine (HRE) operating conditions [16][17][18][19], are also usually neglected when dealing with paraffin-based engines.
The objective of this work is to provide an additional step toward obtaining a predictive tool for the internal ballistics of paraffin-based HREs. To this end, a sensitivity study is performed on various aspects of the modeling strategy recently adopted and validated against experimental data of a lab-scale engine [4]. In particular, the effects of a change in paraffin's melting temperature, surface radiation emissivity, and laminar and turbulent Schmidt numbers are evaluated. Moreover, results on a different test case considering a larger scale are provided, highlighting scale effects on the internal ballistics. Finally, simulations at different diameters are analyzed in order to characterize convection and radiation heat flux changes obtained from a change in scale.
The manuscript starts with the description of the theoretical and numerical model in Section 2, followed by the illustration of the engine configuration and firing tests chosen for validation in Section 3. Then, results and discussion are shown in Section 4, and conclusions are summarized in Section 5.
Theoretical and Numerical Model
Numerical simulations were performed by solving the RANS equations for axisymmetric, turbulent, compressible, and reacting flows [20]. The specific isobaric heat and enthalpy dependencies on temperature were expressed, for all species apart from paraffin, with the seventh-order polynomials taken from the Chemical Equilibrium with Applications (CEA) database [21], which also provides the species standard heat of formation. The fourth-order polynomials of temperature for transport properties reported in [21] were employed for all species apart from paraffin, and Wilke's rule [20] was used to obtain mixture molecular transport properties. A constant Schmidt number equal to 0.7 imposed the same molecular diffusivity for all species. The standard Spalart-Allmaras model [22] was used to evaluate the turbulent viscosity, with turbulent Schmidt and Prandtl numbers equal to 0.7 and 0.9, respectively. Properties of paraffin will be discussed later.
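As an illustration of the mixture-property step described above, the following sketch evaluates Wilke's mixing rule for the mixture viscosity; the species values used are hypothetical placeholders rather than data from the cited databases.

import numpy as np

def wilke_viscosity(x, mu, M):
    """Mixture viscosity from Wilke's mixing rule.

    x  : mole fractions of the species
    mu : pure-species viscosities [Pa*s]
    M  : molar masses [kg/mol]
    """
    n = len(mu)
    mu_mix = 0.0
    for i in range(n):
        denom = 0.0
        for j in range(n):
            phi = (1.0 + np.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2 \
                  / np.sqrt(8.0 * (1.0 + M[i] / M[j]))
            denom += x[j] * phi
        mu_mix += x[i] * mu[i] / denom
    return mu_mix

# Hypothetical O2/CO2/H2O mixture, for illustration only
x  = np.array([0.2, 0.3, 0.5])
mu = np.array([2.0e-5, 1.5e-5, 1.2e-5])   # Pa*s
M  = np.array([0.032, 0.044, 0.018])      # kg/mol
print(wilke_viscosity(x, mu, M))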
In this work, a global reaction mechanism was employed to model the combustion of ethylene, which is the main product obtained from paraffin thermal cracking [23,24]. Due to the lack of detailed data in the literature, a reaction mechanism developed for butadiene combustion [25] was adapted here for use in ethylene combustion. Seven reactions and ten species were considered (Table 1). The chemical source terms were obtained through Arrhenius-type forward reaction rates and backward rates calculated as the ratio between the forward rates and the equilibrium constant evaluated from thermodynamic data taken from [26]. The reaction rate constant for the thermal cracking of C32H66 was taken as the one for liquid C16H34 [24] due to the lack of data in the literature. More details on the reaction mechanism can be found in [4]. Table 1. Chemical reactions involved in the global reaction mechanism used for paraffin–oxygen combustion.
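As a sketch of how such chemical source-term ingredients can be evaluated, the snippet below computes an Arrhenius forward rate constant and a backward rate from the equilibrium constant; the coefficients are hypothetical placeholders, not the values of the adapted mechanism.

import numpy as np

R_UNIV = 8.314  # J/(mol*K)

def arrhenius(A, b, Ea, T):
    """Forward rate constant k_f = A * T**b * exp(-Ea / (R*T))."""
    return A * T ** b * np.exp(-Ea / (R_UNIV * T))

def backward_rate(k_f, dG0, T, dn, p_ref=101325.0):
    """Backward rate constant from the equilibrium constant.

    dG0 : standard Gibbs free-energy change of the reaction [J/mol]
    dn  : change in moles of gas (products minus reactants)
    """
    Kp = np.exp(-dG0 / (R_UNIV * T))         # pressure-based equilibrium constant
    Kc = Kp * (p_ref / (R_UNIV * T)) ** dn   # concentration-based equilibrium constant
    return k_f / Kc

# Hypothetical coefficients, for illustration only
T = 2000.0
kf = arrhenius(A=1.0e12, b=0.0, Ea=1.5e5, T=T)
kb = backward_rate(kf, dG0=-2.0e5, T=T, dn=-1)
print(kf, kb)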
The RANS simulations were performed with an in-house computational fluid dynamics (CFD) solver that has been validated in different operating conditions against experimental data [4,27]. The finite-volume computational tool is second-order accurate in space, and employs a Roe Riemann solver [28]. The Strang operator-splitting technique [29] for time integration was adopted: a second-order Runge-Kutta scheme integrates the convective and diffusive terms, whereas an implicit integrator for stiff ordinary differential equations is used for the chemical source terms [30]. Note that a local time-step approach was adopted, as steady-state solutions were sought. The code has been recently validated against experimental data of paraffin wax-oxygen hybrid rockets in [4,31].
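The time-integration strategy can be summarized with the schematic sketch of one Strang-split step below; transport_rhs and integrate_chemistry are stand-ins for the solver's actual convective/diffusive operator and stiff ODE integrator, not the interfaces of the in-house code.

def rk2_step(u, rhs, dt):
    """Explicit second-order Runge-Kutta (Heun) step for the convective/diffusive terms."""
    k1 = rhs(u)
    k2 = rhs(u + dt * k1)
    return u + 0.5 * dt * (k1 + k2)

def strang_step(u, transport_rhs, integrate_chemistry, dt):
    """One Strang-split step: half transport, full chemistry, half transport."""
    u = rk2_step(u, transport_rhs, 0.5 * dt)
    u = integrate_chemistry(u, dt)   # implicit stiff ODE integration of the source terms
    u = rk2_step(u, transport_rhs, 0.5 * dt)
    return u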
The paraffin taken into account in the present manuscript is C32H66. Unlike conventional fuels, when paraffin wax is heated it does not pyrolyze but rather melts, producing a liquid layer over the fuel grain. However, since the typical chamber pressure of hybrid rockets is higher than paraffin's critical pressure, equal to 6.5 bar [32], the melted paraffin is assumed to be at supercritical conditions, where no surface tension or droplet boundary can be defined [33,34]. It is, therefore, reasonable to assume that in these conditions the entrainment of the supercritical species is part of the turbulent mixing process. The melted species was modeled with the simplified dense fluid approach described in [4,31], with thermodynamic and transport properties taken from [35]. As described in [4], the fluid-surface interaction sub-model is based on mass and energy balances, which reduce to

ṙ = (q_w,conv + q_w,rad) / {ρ_s [c_s (T_melt − T_s,in) + Δh_melt]}, (1)

where ṙ is the local fuel regression rate, q_w,conv and q_w,rad are the convective and radiative wall heat fluxes, respectively, and the paraffin density, melting enthalpy, specific heat, melting temperature, and initial temperature are, respectively, ρ_s = 920 kg/m³, Δh_melt = 169.83 kJ/kg, c_s = 1946.03 J/(kg·K), T_melt = 343 K, and T_s,in = 298.15 K. The radiative heat flux was computed with in-house software, which has already been described and used in [4,19,36-38]. The main details of the numerical approach are reported below for the sake of completeness. The software solves the radiative transfer equation with the discrete transfer method for generic axisymmetric geometries, gray/diffuse boundaries, and inhomogeneous gray/nonscattering media. The radiative heat was evaluated only at the boundaries, given its small relevance compared with the total thermal power generated within the thrust chamber. Radiation from hydroxyl and from soot was not considered in the employed model. Absorption of radiative energy was assumed proportional to the pressure and to the absorption coefficients of H2O, CO2, and CO, which are considered the major (and only) species participating in radiation, weighted by their molar fractions. A discretization consisting of 256 rays for each calculation point and a step of 1 mm along each ray was used after performing convergence analyses for both parameters. A wall emissivity equal to 0.91 was assumed for the paraffin wax grain by using the emissivity model proposed in [39] with a refractive index of 1.43 according to [40]. The CFD and radiation codes were coupled through the repeated evaluation of the radiative wall heat flux, then of the regression rate, and finally of the resulting flow field, until convergence was reached.
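A minimal sketch of this surface treatment and of the CFD–radiation coupling loop, assuming the energy balance in Equation (1) and treating the flow and radiation solvers as black-box callables (the function names are placeholders, not the interfaces of the in-house codes), could look as follows.

def regression_rate(q_conv, q_rad,
                    rho_s=920.0, dh_melt=169.83e3,
                    c_s=1946.03, T_melt=343.0, T_init=298.15):
    """Local regression rate [m/s] from the surface energy balance (Eq. 1):
    rho_s * rdot * (c_s*(T_melt - T_init) + dh_melt) = q_conv + q_rad."""
    return (q_conv + q_rad) / (rho_s * (c_s * (T_melt - T_init) + dh_melt))

def couple(flow_solver, radiation_solver, q_rad_guess, tol=1e-3, max_iter=20):
    """Fixed-point iteration between the CFD and radiation solvers.

    flow_solver(q_rad)          -> (q_conv, flow_field)   # placeholder interface
    radiation_solver(flow_field) -> q_rad                  # placeholder interface
    """
    q_rad, rdot_old = q_rad_guess, None
    for _ in range(max_iter):
        q_conv, flow_field = flow_solver(q_rad)   # RANS solve with current radiative flux
        q_rad = radiation_solver(flow_field)      # discrete-transfer radiation solve
        rdot = regression_rate(q_conv, q_rad)
        if rdot_old is not None and abs(rdot - rdot_old) <= tol * abs(rdot):
            break                                 # converged regression rate
        rdot_old = rdot
    return rdot, q_conv, q_rad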
Engine Configuration and Firing Tests
The engine configuration for the simulations of this work is the axisymmetric thrust chamber depicted in Figure 1, which has already been used in [4]. It is composed of a pre-chamber (0 < x < x_0), a chamber with the paraffin grain (x_0 < x < x_1), a post-chamber (x_1 < x < x_2), and finally a converging-diverging nozzle (x > x_2). The conical converging and diverging sections of the nozzle are connected by circular arcs with each other and with the cylindrical post-chamber. The cylindrical pre-chamber, post-chamber, and fuel grain all have the same diameter. Note that, in general, the radius of the pre-chamber and post-chamber is different from that of the cylindrical grain port, which changes during the burn. However, this difference only slightly influences the numerical regression rate [41]; hence, it was omitted in the present study, and a simplified geometry with a single radius (equal to that of the cylindrical grain port) was considered. The computational setup is completed by the boundary conditions outlined below. On the left-hand side of the setup, an inlet boundary condition imposing the gaseous oxygen mass flow rate, a uniform static temperature equal to 300 K, and a uniform turbulent viscosity equal to 10⁻⁶ Pa·s is set for 0 ≤ r ≤ r_inj, while an adiabatic wall is set for r_inj < r ≤ R. The influence of the inlet turbulent viscosity on the flow field is assessed as minimal. On the top side, Equation (1) is computed on the paraffin grain section, and adiabatic walls are imposed in the pre-chamber, post-chamber, and nozzle. Finally, a supersonic outlet boundary condition is considered on the right-hand side. Depending on the test conditions, a first simulation is carried out from an initial uniform flow field with oxygen at 12 bar and 3000 K until steady state, which serves as the initial condition for the following simulations.
Aiming to validate the predictive capabilities of the present approach, two sets of experimental firing tests from the literature were considered (see Table 2). Seven test cases were chosen from the lab-scale experimental tests carried out at the University of Naples "Federico II" [13] (named set 1), while five were from experiments performed at the hybrid combustion facility of NASA Ames [42] (named set 2). Oxidizer mass flow rates ranged from 29 to 60 g/s for set 1, and from 2.05 to 4.4 kg/s for set 2. The injection pattern of the experiments in both sets was correctly replicated by the abovementioned inlet boundary condition, as it consisted of a single circular injector. The experimental setups were of different sizes, the latter being considerably larger than the former. A single simulation at the time-space-averaged port diameter was performed for each test case. In addition, simulations at the initial and final diameters were also performed for tests 3 and 4 of set 1. Table 2. Setup conditions for the two hybrid engines considered in this study. Set 1 is taken from [13], while set 2, from [42].
Set | Test | ṁ_ox (g/s) | R (mm) | r_t (mm)
Computational meshes were created considering proper grid clustering in the injection, near-wall, and throat regions to sufficiently resolve the mixing layer, the boundary layer, and the transition through sonic conditions, respectively (see Figure 2). The mesh employed for the numerical simulations of the tests in set 1 was composed of 160 grid cells in the axial direction and 100 grid cells in the radial direction. This resolution yielded a maximum cell height on the grain of 12.34 µm and a corresponding dimensionless wall distance y+ of the order of 1. Additional details and mesh convergence studies have been reported previously in [4]. For the simulations of the tests in set 2, a topologically similar mesh of 600 × 320 elements ensured approximately the same wall resolution as for set 1.
Results
The results from the Navier-Stokes simulations show several peculiarities of hybrid rockets burning paraffin wax and gaseous oxygen. We start with a discussion of the results from the small-scale engine (set 1), followed by the analysis of the firing tests of the larger engine (set 2), concluding with the numerical rebuilding of experimental regression rate and chamber pressure.
Results on Set 1 Tests
The tests in set 1 are characterized by experimental oxidizer-to-fuel ratios from 0.77 to 1.26; hence, they are all in fuel-rich conditions (the stoichiometric O/F is equal to 3.44). Figures 3 and 4 show numerical results for test 4 of set 1 obtained by increasing the port diameter from the initial to the final one, with the average diameter as an intermediate step. This was performed to simulate the time evolution during a firing test. A common feature for all diameters is the presence of a wide diffusive flame developing from the pre-chamber throughout the whole combustion chamber. The inhomogeneous presence of reactants and combustion products is observed at the nozzle entrance, where combustion still takes place. At the initial diameter, the recirculation zone is mostly confined upstream of the fuel grain, which entails complete combustion of the cold liquefied paraffin before it reaches the injector plate; hence, hot combustion gases fill the pre-chamber. On the other hand, as the port diameter grows, the vortex penetrates towards the grain and brings more paraffin upstream, which cools down the pre-chamber and allows the flame to be anchored at the interface between the oxygen injector and the injector plate wall. The amount of paraffin present in the pre-chamber is, in fact, strongly dependent on the extension of the vortex, and post-firing inspections of the injector plate confirmed the presence of unburnt paraffin [13]. Downstream of the reattachment point, which appears at varying axial locations on the fuel grain depending on the port diameter, the liquid paraffin accumulates with increasing axial distance, covering the post-chamber and, partially, the nozzle walls as a thick film. At the final diameter, a lower mass fraction of both oxygen and paraffin is observed at the nozzle entrance. Moreover, the flame reaches the symmetry axis earlier than at the other two smaller diameters, yielding more complete mixing and combustion in the core region. The mass fraction of molecular oxygen decreases along the length of the engine because of combustion; however, part of the injected oxygen flows directly out of the nozzle at the average diameter, indicating a lower combustion efficiency, since this test is very fuel-rich (O/F = 1.3). Combustion products such as water vapor or carbon dioxide cannot reach the walls due to a thick layer made mainly of paraffin and carbon monoxide. This also prevents nozzle erosion, which was indeed not observed in the experiments [13].
The regression rate is greatly influenced by the vortex extension over the grain (Figure 5): as the vortex reattachment point moves downstream, the regression rate shows a corresponding local minimum or change in slope. Furthermore, due to the reduction of the total mass flux, the overall magnitude of the regression rate tends to decrease with increasing port diameter. Moreover, the axial nonuniformity of pressure vanishes with increasing port diameter, due to the reduced Mach number in the port caused by the larger contraction ratio (see Figure 6).
Model Sensitivity Analysis
In order to establish the effects of model parameters/constants on the computed regression rate and pressure, in this section, we consider only the simulation of test 4 of set 1 at its average diameter.
An increase in the melting temperature of paraffin T_melt from the reference value of 343 K is expected to reduce the convective wall heat transfer, yielding a lower regression rate [14]. This is indeed observed by comparing numerical simulations at three different values of T_melt (Figure 7). The sensitivity of the regression rate is about −5% every 15 K. The chamber pressure also decreases for higher values of T_melt, at a rate of approximately −0.65% every 15 K. The rate of change of the pressure is lower than that of the regression rate because a higher characteristic velocity balances the lower total mass flow rate. Regression rate and pressure are unchanged for variations of the paraffin grain emissivity ε_w (which equals the absorptivity due to the gray wall assumption) in the range 0.91-1.0 (Figure 8). This means the paraffin grain can be treated as a black body for this fuel-rich case and shows that the results are not sensitive to the grain wall emissivity in this range. As the turbulent Schmidt number is reduced, mixing is enhanced due to the increased turbulent diffusivity; hence, a higher regression rate and pressure are obtained (Figure 9). Different turbulent Schmidt numbers entail 1-3% changes in the average regression rates and chamber pressures; however, it is worth noting that typical values of the turbulent Schmidt number used in the literature are in the range 0.7-0.9. On the other hand, regression rate and pressure are insensitive to changes in the laminar Schmidt number (Figure 10). In fact, molecular diffusion phenomena are not expected to play a significant role in this fuel-rich test case, as next to the wall there is mostly paraffin. The small sensitivity of the regression rate and pressure to the laminar and turbulent Schmidt numbers justifies the adoption of Fick's law and the assumption of constant Schmidt numbers in this work and allows us to avoid using other multicomponent diffusion models.
Results on Set 2 Tests and Effect of Radiation
The tests in set 2 are characterized by a larger scale with respect to set 1, by experimental oxidizer-to-fuel ratios from 1.70 to 2.69 (whereas in set 1, O/F ranges from 0.77 to 1.26), and by a reduced post-chamber length (for test L01, (x_2 − x_1)/R = 1.1, while for test 4 of set 1, (x_2 − x_1)/R = 4.6). Nevertheless, the same qualitative internal combustion phenomena as in set 1 are observed: the axial injection mode yields flame anchoring at the interface between the oxygen injector and the injector plate wall, with a recirculation zone in the pre-chamber, which is filled by unburnt paraffin (Figure 11). Due to the short post-chamber, mixing is not favored, and the flame does not reach the centerline. In addition, a strong CO presence is observed above the flame but not close to the wall, which is protected by a thick layer of cold paraffin up to the nozzle walls. Moreover, a large fraction of the oxygen does not burn but is simply accelerated through the nozzle, significantly decreasing the efficiency of this fuel-rich regime. It is interesting to note that the ethylene mass fraction is nearly zero, indicating that when the cracking reaction of paraffin is triggered, the ethylene produced burns immediately and does not accumulate (the same holds for the set 1 results).
During the simulations of all tests in set 2, mild numerical instabilities involving the whole flow field are observed. This is due to either numerical or physical reasons. In fact, the numerical results showed oscillations around a constant average flow field, which is taken as the reference in the present work, and the deviations from it are considered numerical error. The physical nature of these oscillations should be confirmed by time-accurate simulations, which are, however, outside the scope of this work because of their excessive computational cost. Note that in the firing tests of set 2, experimental pressure oscillations between 4 and 12% of the mean pressure are observed [42]. Their peaks in the pressure spectrum are at 30, 100, and 350 Hz, corresponding to boundary-layer interaction with solid thermal lag (low-frequency hybrid instability), bulk mode, and the acoustic half-wave in the combustion chamber. Such oscillations are not observed either physically or numerically in set 1. A significant effect obtained by increasing the size and the operating pressure of the engine from set 1 to set 2 is the higher relevance of radiation in the wall heat flux balance, which directly controls the paraffin's regression rate (Table 3). For set 2, this contribution reaches values as high as 88%, whereas for set 1, it is at a maximum of 62%. In fact, the radiation contribution is found to be proportional to the product of chamber pressure and port radius, except for the L04 case, which is the one characterized by the highest port mass flux, yielding an increased share of convective heat transfer. These results depend on the employed models for turbulence, combustion, and radiation, and need to be verified with experimental data on radiation heat transfer in paraffin-based hybrid rockets, which are lacking in the literature. Table 3. Wall heat flux contributions on the grain surface for all tests at the average port diameter.
Set | Test | p_c·R (bar·m) | q_w,tot (MW/m²) | q_w,rad/q_w,tot
Numerical Rebuilding
The integral averages of the axial regression rate profiles and the post-chamber pressures (x = 0.28 m) are extracted from the numerical simulations at the average diameter and are compared with the respective experimental values in Figure 12. Very close agreement is found for chamber pressures lower than 20 bar, whereas an error of about 15% is observed for the higher chamber pressures of set 2. Average regression rates are generally in good agreement with experimental results, with maximum errors of 15%. Table 4 shows a comparison between numerical and experimental O/F, pressures, and combustion efficiencies for all tests. The combustion efficiencies are underestimated by the numerical simulations, only slightly (by 1-5%) for tests in set 1 and by larger amounts (6-22%) for tests in set 2, also due to the larger errors in the pressure rebuilding.
Overall, the model, which does not require information from any firing test, shows an acceptable prediction of the experimental data of the two engines at different scales without any dedicated tuning. This is achieved by making simplifying assumptions that do not make the model computationally expensive and do not significantly alter its predictive capabilities. Table 4. Experimental and numerical oxidizer-to-fuel ratio, chamber pressure, characteristic velocity, and combustion efficiency for all tests in Table 2. In order to further validate the computational model and improve numerical predictions, it is important to verify that results at evolving diameters are coherent with the experimental data. To this purpose, simulations of test 3 of set 1, for which pressure probe data in time are available from [13], were performed at the initial, average, and final port diameters. Flow fields of test 3 are not shown because they are very similar to those already shown for test 4 of set 1 (Figures 3, 4, and 6). As previously observed for test 4, increasing diameters result in decreasing regression rates, due to lower mass fluxes in the port, and in more axially uniform pressure profiles, due to lower Mach numbers (Figure 13). The O/F values resulting from the simulations at the different diameters are 0.90, 1.22, and 1.24, showing that test 3 is in fuel-rich conditions for the whole burn duration, which is consistent with the absence of throat erosion observed in the experiments. In addition, the characteristic velocities obtained are 1218 m/s, 1388 m/s, and 1501 m/s, from the smallest to the largest diameter, respectively, corresponding to c*-efficiencies of 86%, 85%, and 91%. It is interesting to observe how the regression rate is influenced by the different contributions of convection and radiation. At the initial diameter, the dominant share of the wall heat flux is provided by the convective heat flux, with radiation actually helping to make the total wall heat flux more uniform in the axial direction (Figure 14). However, the radiative wall heat flux becomes more significant with increasing engine diameter (Table 5). This is quite an interesting result as, despite the significant change in the share of radiative/convective heat flux, the numerical pressure is rather constant, in line with the experimental measurement (Figure 15). In addition, it is noted that the radiation contribution starts to become non-negligible for oxidizer mass fluxes G_ox lower than the order of 100 kg/(m²·s), which is consistent with the value of 140 kg/(m²·s) obtained by [7]. This gives further confidence to the numerical models used in this work. Figure 14. Effect of port diameter on wall heat flux for test 3 of set 1. Numerical times in Figure 15 are obtained by considering the average between two successive regression rates and diameters and were shifted by 1 s in time to match the experimental ignition transient. By considering just one simulation at the average port diameter, the average experimental regression rate of test 3 is underestimated by 11.8%. On the other hand, the integral average of the numerical regression rate obtained with the three successive simulations yields an underestimation of just 2.2%, due to the higher regression rate obtained at the initial diameter. This underlines the need to numerically simulate the evolution of the grain in time for more accurate numerical predictions.
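A sketch of the time reconstruction described above (the average of two successive regression rates applied over the corresponding change in port radius) is given below; the diameters and regression rates are hypothetical and not the values of test 3.

def burn_times(diameters, rdots, t_ignition=1.0):
    """Times at which each simulated port diameter is reached.

    diameters  : port diameters [m], in increasing order
    rdots      : regression rates computed at those diameters [m/s]
    t_ignition : offset applied to match the experimental ignition transient [s]
    """
    times = [t_ignition]
    for i in range(1, len(diameters)):
        rdot_avg = 0.5 * (rdots[i - 1] + rdots[i])               # average of two successive rates
        dt = 0.5 * (diameters[i] - diameters[i - 1]) / rdot_avg  # radius change / rate
        times.append(times[-1] + dt)
    return times

# Hypothetical values, for illustration only
print(burn_times([0.010, 0.016, 0.022], [1.2e-3, 0.9e-3, 0.7e-3]))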
Conclusions
A numerical approach based on Reynolds-averaged Navier-Stokes simulations, including sub-models for fluid–surface interaction, radiation, combustion, and turbulence, was used to analyze hybrid rocket engines at different scales. Owing to the uncertainty in some of the model parameters, a sensitivity analysis was carried out for paraffin's melting temperature, surface radiation emissivity, and Schmidt numbers, showing a negligible or mild effect on the computed regression rate and pressure. Although no tuning or calibration was carried out, the results compare well with experimental data, and, owing to appropriate assumptions, the model retains its predictive capabilities without compromising the simplicity of the modeling strategy.
A quite interesting result is that numerical predictions are in line with experimental measurements despite the wide range of the share of convective and radiative heat flux. This provides confidence that both convection and radiation models are well suited to numerically simulate the internal ballistics of paraffin-based hybrid rocket engines. In particular, the share of the radiative over total wall heat flux ranges from 10% to 88%, depending on the chamber pressure, port radius, and mass flux. As a further demonstration of the capability of the approach, it is found that by evolving the port diameter, the nonlinear time evolution of the experimental regression rate is correctly captured. | 2021-09-27T20:22:11.796Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "f218bdb3b9b381cb87ab39c54c9913e76381fa52",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2226-4310/8/8/213/pdf?version=1628142545",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "47296fe0b8ad00042713340e0a5ca1a630a75486",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
257758090 | pes2o/s2orc | v3-fos-license | Risk Factors for Early Sexual Intercourse in Adolescence: A Systematic Review of Cohort Studies
This systematic review provides a comprehensive assessment of risk factors related to early sexual intercourse (ESI) among adolescents. We used PRISMA guidelines to identify eligible cohort studies published between January 1999 and December 2020. We searched three databases: PubMed, Embase and LILACS. Studies were screened for quality and eligibility. Of 2787 identified studies, seven met our inclusion criteria. The studies examined a range of factors, which were organized into four dimensions – individual, family, social and environmental, and sociodemographic. Risk factors with strong associations with ESI were: adolescent and parental substance use, aggression and conduct disorders, family attachment, school achievement, family living situation, and maternal education. Three studies were birth cohorts. This review demonstrates the important roles of substance use, family attachment and academic factors in shaping adolescents' sexual behavior. A strength of this review is its focus on longitudinal studies, enabling exploration of exposures collected before initiation of sexual intercourse. Supplementary Information: The online version contains supplementary material available at 10.1007/s10578-023-01519-8.
Introduction
Adolescents face multiple physical, emotional and social changes associated with the process of maturation, which can pose challenges [1]. Among these experiences, early sexual intercourse (ESI) in adolescence is associated with problems in health and social development, e.g., a greater likelihood of sexually transmitted infections (STIs), substance use [2] and unintended pregnancies, leading to enormous costs to both individuals and society [3]. Although there is a lack of consensus about the age at which sexual intercourse is considered "early," many consider before age 15 to be precocious [4].
The onset of sexual intercourse at younger ages remains a concern in public health because it is related to a wide range of negative health outcomes in adolescence, which can last into young adulthood [5]. Unsafe sex, defined as sex without a condom, is associated with ESI, which can lead to short-and long-term consequences [6]. These include unintended pregnancy [7] and STIs [8]. In the mental health domain, ESI has been linked to substance use [9], eating disorders [10], low self-esteem [11], antisocial personality [12], depression [13], suicidal ideation, and suicide attempts [14]. Regarding the social sphere, ESI tends to cluster with multiple sexual partners [15], unsafe abortion or abortion-related death [16], physical aggression [17], teen dating violence [18], and poor school performance [19]. Some researchers believe that adolescents' intentions to engage in risky sexual behaviors are related to sensation seeking (a personality trait that makes an individual pursue new and exciting stimuli) [20], a phenomenon common during early adolescence [21].
The prevalence of ESI varies widely worldwide. Recent data from the Youth Risk Behavior Surveillance (YRBS) in the United States found that 20.4% of students had already initiated sexual relationships by the 9th grade, or around ages 14-15 [15]. Data from the Global School-Based Health Survey (GSHS) from eight African countries found the prevalence of ESI to be 27.3% among 15-year-old adolescents. In relation to first sexual intercourse, the prevalence was higher among youth aged 11 or less (11.8%) when compared with youth who were 12 (5.5%), 13 (3.9%) and 14 (6.1%) years old [22]. A comparison of risky sexual behaviors between two cohorts in Spain found that the age of first sexual intercourse decreased with time. Adolescents from the 2006 and 2012 cohorts had their first sexual intercourse at 15.1 and 14.7 years, respectively [23].
Although a robust body of literature reports different factors associated with engaging in ESI, many of these studies are cross-sectional or observational with little rigor in their design and follow-up [24]. This fact makes it difficult to establish causal relationships and, consequently, limits their use in public health decision-making, particularly in the design of interventions aimed at delaying the onset of sexual activity in adolescents. Hence, the main gap is the lack of systematization of the results of cohort studies that aim to unveil risk factors for ESI. Findings of cohort studies can contribute to a greater understanding of ESI and its impacts over time and support the development of preventive strategies. In that sense, the aim of this systematic review is to identify risk factors associated with ESI among adolescents worldwide.
Methods
This study includes a systematic review of the scientific literature based on the PRISMA protocol [25,26] (See Supplementary Material: PRISMA 2020 Checklist). The aim of this review was to identify risk factors for ESI in adolescence (ages 10 to 19 years) based on cohort studies (Table 1). In addition, this study was registered a priori in the international prospective register of systematic reviews (PROSPERO) (record number CRD42018089786).
Screening
Two researchers reviewed the titles and abstracts of all the studies that were identified in the search. Studies that did not meet the eligibility criteria (listed below) were excluded. The second step was to retrieve the full text of those that met the inclusion criteria. Disagreements were discussed with a third member of the research team to reach resolution.
Inclusion and Exclusion Criteria
To be eligible for inclusion, articles had to: (1) present findings from cohort studies; (2) include adolescents between 10 and 19 years old; (3) be in English, French, Portuguese or Spanish and; (4) be published between January 1999 and December 2020.
Studies were excluded if they did not meet the four inclusion criteria or were conducted with special populations e.g., street-involved youth, pregnant teenagers, sex workers, detained adolescents, youth undergoing intensive psychiatric treatment, adolescents recruited from medical clinics or men-who-have-sex-with-men. The reason for this was that we aimed to study risk factors for ESI in adolescence broadly, not in key populations. Key populations tend to report high rates of ESI, with different patterns compared to those of adolescents in general. Duplicate studies were automatically excluded by the Rayyan software.
Data Extraction and Quality Appraisal
From each article, we extracted the study authors, publication year, study country, sample size, study design, length of follow-up, statistical analytic techniques, independent variables, outcome measure, major findings, and study limitations. We used the Newcastle-Ottawa Quality Assessment Scale (NOS) for cohort studies to assess the quality of the articles [27]. Higher scores indicated studies of higher quality. The NOS has three quality parameters: selection (four points), comparability (two points), and outcome (three points).
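As a minimal illustration of how a Newcastle-Ottawa total and quality band can be tallied from the three domain scores, consider the sketch below; the high/moderate/low thresholds are an assumption for illustration, not a rule taken from the review protocol.

def nos_score(selection, comparability, outcome):
    """Tally a Newcastle-Ottawa Scale score from its three domains.

    Domain maxima: selection 4, comparability 2, outcome 3 (total 9).
    The banding used here (>= 7 high, 5-6 moderate, else low) is an assumption.
    """
    assert 0 <= selection <= 4 and 0 <= comparability <= 2 and 0 <= outcome <= 3
    total = selection + comparability + outcome
    band = "high" if total >= 7 else "moderate" if total >= 5 else "low"
    return total, band

print(nos_score(3, 1, 2))  # -> (6, 'moderate')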
Data Analysis
The first author reviewed all articles pulled for full-text review. These were examined using a data extraction form and a study quality scale. A second independent reviewer checked the data extraction and quality-assessment score for each article, and any discrepancies were resolved through discussion. Any articles that did not meet inclusion criteria after full-text review were excluded. For included studies, we used extracted data to summarize results stratified and organized by risk factors.
Results
We identified 2,787 citations through our literature search ( Fig. 1). After excluding duplicate citations, 2,461 records were eligible for review. Subsequently, we excluded 2,428 (98.6%) citations after screening titles and abstracts. We obtained full text articles for the remaining 33 citations, and from these records, seven were included for quality assessment. A total of seven articles met the review criteria and were included in the final analysis.
Of the seven studies reviewed, four (57.1%) achieved high (7-9) and three (42.9%) moderate (5-6) NOS ratings. Three studies were birth cohorts [28][29][30] and four were part of longitudinal investigations: the National Longitudinal Study of Adolescent Health (Add Health) [28], the South Africa Tanzania Project [29], Young in Norway [31] and the Wisconsin Study of Families and Work [32]. Follow-up assessments in the birth cohorts ranged from 15 to 21 years, and participants were followed up from six to nine times [33]. Among the longitudinal studies, follow-up varied from one to seven years, and participants were most commonly assessed only once after baseline. All studies reported loss to follow-up. Retention of ≥ 80% of the participants was reported by three studies (42.9%) [34, ]. Loss to follow-up was attributed to parents' refusal for their children to participate or to answer sexuality questions, students not being present in class during the data collection, and an increased number of follow-up visits not in the original design. Characteristics associated with mothers of non-responders included being < 25 years old at the time of delivery, being non-white, having low education, and having smoked during pregnancy.
In total, seven studies with 11,667 participants from five countries were included. New Zealand, Norway, and South Africa contributed one study each, while Australia and the United States contributed two studies each. The included articles were published between 1999 and 2020; however, the data presented were collected from 1972 [29] to 2009 [28], and samples ranged from 273 to 4,808 participants. All but one study, which focused only on females, included both sexes. Among non-birth cohorts, the age at baseline ranged from 11 to 14, with a mean of 12.3 years. Sexual intercourse before the age of 11 was excluded in one study [33] because of concerns about incest or other forms of sexual abuse. Most studies characterized sexual intercourse as penile-vaginal intercourse, though one study included anal sex in the definition [32]. One study reported oral sex as a secondary outcome [34]. Study outcomes also varied in terms of the age of sexual intercourse. In five studies, the cutoff for ESI was defined as age 15. A single study did not establish a cutoff age for ESI, but found 16 years to be the median age of first sexual intercourse, suggesting 15 could be considered the age of ESI for this sample. Taken together, six of the seven studies (85.7%) considered age 15 as ESI, and two treated ESI as age 16 [31]. A summary of study results is shown in Table 2.
In this review, the studies included assessed 50 different risk factors for ESI outcomes ( Table 2). Of these, 36 (72.0%) were identified as statistically significant risk factors for ESI. Considering that these studies included a wide range of risk factors, we organized them into four dimensions: individual, family, social/environment, and sociodemographic. Risk factors for ESI in each dimension are reported in Table 3.
Individual Risk Factors
More than half of the studies (55.6%) identified individual risk factors for ESI. These factors were related to development, behaviors, beliefs and psycho-emotional factors, and mental health. The most common exposures were mental health problems (19.4%); attitudes and beliefs (13.9%, each) were the second most common exposures, and the developmental characteristics were the least frequent (8.3%) ( Table 3).
Developmental Characteristics
Although three studies examined age at menarche, only one found a significant trend for ESI, when menstruation occurred at age 11 (OR 1.5, 95% CI 0.19-12) [29]. Associations between early perceived relative pubertal maturation among boys and ESI were also observed (RR 1.46, 95% CI 0.94-2.29) [33].
Behavioral Factors
Substance use was strongly associated with ESI for both genders. Daily smoking was associated with 1.4 and 3.0 times higher odds of engaging in ESI compared with non-smokers (boys: OR 1.4, 95% CI 0.69-2.9; girls: OR 3.0, 95% CI 1.5-6.1) [29]. Alcohol intoxication was also associated with ESI (boys: RR 2.22, 95% CI 0.91-2.51; girls: RR 3.41, 95% CI 1.28-9.08) [33]; (boys: OR 1.5, 95% CI 0.65-3.6; girls: OR 1.3, 95% CI 0.53-3.4). In addition, intention to have sexual intercourse was associated with the transition to first sexual intercourse (regression coefficient = 0.10), and the strongest predictor of intention was social outcome expectancies (regression coefficient = 0.43). Conversely, having self-efficacy to negotiate delayed sex was negatively associated with ESI (regression coefficient = −0.08). Standardized regression coefficients were divided by the standard deviation of the predictors; in this case, a one-unit change in a dichotomous predictor was interpreted as the magnitude of change associated with moving from one category to another [32].
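To make the reported effect measures concrete, the sketch below shows how an odds ratio and its Wald 95% confidence interval can be computed from a 2x2 exposure-outcome table; the counts are hypothetical and not taken from any of the included studies.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with ESI,   b = exposed without ESI,
    c = unexposed with ESI, d = unexposed without ESI.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, (lower, upper)

# Hypothetical counts, for illustration only
print(odds_ratio_ci(30, 70, 15, 85))  # OR of roughly 2.4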
Beliefs and Psycho-Emotional
Furthermore, all three studies that investigated religion found it was associated with ESI. Being less religious was a risk factor among adolescents with strong parental disapproval of sex (OR 1.63 95% CI 1.25-2.12) [31]. Likewise, being involved in religious activity played a role for boys in delaying the age of sexual intercourse (OR 0.39 95% CI 0.17-0.91) [29]. Higher self-esteem scores among girls (OR 2.4 95% CI 1.0-5.9) also predicted ESI . Among boys, high (RR 2.96 95% CI 1.22-7.21) and moderate (RR 1.93 95% CI 0.82-4.53) self-perceived romantic appeal (the respondent's belief that others would be romantically attracted to them) were found as risk factors for ESI [33].
Family Risk Factors
Family characteristics also had important associations with ESI (22.2%), e.g., parental substance use and family attachment (11.1%, each).
Parental Substance Use
Among adolescents who reported ESI, there was a higher prevalence of mothers (31.3%) and fathers (47.0%) who smoked during pregnancy (p < .001) [30]. A greater risk of ESI was also found among youth whose parents smoked daily.
Family Attachment
Among adolescents who described that their parents strongly disapproved of sex, the lack of parental monitoring of television programming was associated with sexual initiation within one year (OR 1.35 95% CI 1.01-1.81) [31] (Table 3).
Social and Environment Risk Factors
More than a quarter of studies (30.6%) found associations between social and environmental factors and ESI. The majority of these included academic factors (school achievement, educational aspirations, being in trouble at school) (16.7%), followed by watching television (5.6%), and by suffering physical violence in an intimate relationship, peers' substance use, and not having interests outside the home (2.8% each). Low achievement at school was also associated with a greater risk of ESI for both genders [29,31,33]. Low educational aspirations, plans to leave school early, and being in trouble at school were also predictors of ESI. Furthermore, spending time viewing television programs with high sexual content appeared to be associated with ESI (boys: B = 0.38, Wald = 7.07, p = .024; girls: B = 0.15, Wald = 6.90, p = .009). Another study found that adolescents who spent two or more hours per day viewing television were more likely to initiate sex within one year (OR 1.35, 95% CI 1.01-1.79). Among adolescents who described their parents as strongly disapproving of sex, the likelihood of initiating sex within one year was higher among those who watched television for two or more hours per day (OR 1.72, 95% CI 1.24-2.40) [34].
Sociodemographic Risk Factors
Being male (vs. female) [32], older [31, ], and Black were associated with ESI. Not living with both biological parents [28,30,34] and having low or middle socioeconomic status [29] were also predictors of ESI. Additionally, low parental education, low maternal education, and mother's age at first pregnancy being below 20 were identified as risk factors for ESI. In one study, no measured sociodemographic variables were associated with ESI [33]. We intended to conduct a meta-analysis of studies with comparable risk factors but were unable to do so due to differences in risk factors, study populations, and definitions of ESI across the included studies.
Discussion
The goal of this review was to improve our understanding of the links between risk factors and ESI among adolescents worldwide. The exposures most often examined in the included articles were: overall mental health, adolescent and parental substance use, family attachment, school achievement and educational aspirations, living environment, and parents' education. Risk factors with strong associations with ESI were: adolescent and parental substance use, conduct problems, no religious beliefs, family attachment, school achievement, not living with both biological parents, and low maternal education. The relationships of ESI with attitudes, parental substance use, and academic environment exposures were found to be consistent within the literature. Findings associated with both age at menarche and depressive symptoms were less consistent. This systematic review, which covers research from 1999 to 2020, contributes to the field by updating and providing a comprehensive assessment of risk factors for ESI. Across ESI outcomes, the relation with sexual behavior in a broader context needs to be more adequately considered.
We identified that having self-efficacy to negotiate delayed sex was inversely associated with ESI [32]. Similarly, other studies have reported that low self-efficacy to negotiate delayed sex reflects adolescents' poor communication skills [35], which may reduce condom use [36]. In a comparison of two data waves of the National Survey of School Health (PeNSE) in Brazil, condom use at last sexual intercourse among students aged 13-15 years decreased between 2012 and 2015, from 75.3% [37] to 66.2% [38]. Moreover, data from the National Longitudinal Study of Adolescent Health (Add Health) in the United States showed that adolescents with lower self-efficacy for condom use had more nonromantic sexual partners [39].
Our findings also suggest that, overall, substance use both in adolescents [29,33] and in their parents, even during pregnancy [30], was a predictor of ESI. These findings are supported by several studies [5,40,41]. Additionally, substance use has been identified as a factor that increases conduct problems [42,43]. Considering that risky behaviors tend to occur together [44], the implementation of prevention programs that address more than one outcome is critical.
This review suggests that some aspects of the school environment, such as achievement, aspirations, plans to leave school, and being in trouble at school, are likely to influence adolescent ESI [29,31,33]. This result is supported by previous research that found that not skipping school and staying out of trouble were protective against ESI [45]. The literature suggests that students who do not feel connected to school are more likely to engage in maladaptive risk behaviors [46], and that adolescents' perceptions of the respect they receive and their inclusion in school are essential for feeling connected in this environment [47]. A possible explanation could be that school facilitates strong social bonds among adolescents, their peers, and teachers that boost positive behaviors. Indeed, evidence suggests that adolescents avoid behaviors that are not valued by others whom they respect in school [48].
Two studies indicated television viewing as a risk factor for ESI among adolescents [31,34]. Since most mass media (press, radio and television) programs explore sexualization and do not provide accurate information about sex [49], mass media consumption among adolescents should be carefully examined. In fact, one study found that the risks of ESI were accurately discussed in only 10% of the sexual content on television programs that are popular among adolescents [32]. Previous research on adolescents' TV watching habits identified that those who viewed more sexual content were more likely to initiate sexual coital and noncoital activities [50]. Taking into account that adolescents' daily lives have changed since the novel coronavirus (COVID-19) disease [51] became a pandemic [52], it is crucial to more closely consider the health impacts of mass media consumption among adolescents. On this topic, a study found that confinement imposed by COVID-19 led to up to 4 h/day of increased media exposure among children and adolescents [53]. Exposure to sexual media in television and movies contributes to the development of a range of beliefs and sexual attitudes in adolescents, such as self-objectification, body shame, decreased sexual assertiveness, coercive sex victimization, normalization of attempted/completed rape [54], and involvement in dysfunctional or violent romantic relationships [55]. Indeed, we identified physical violence in intimate relationships as a risk factor for ESI [56].
Our findings show that being male was associated with ESI. Gender differences may be due to the fact that masculinity among boys encourages earlier age of sexual initiation [36]. Low maternal education also played a role in ESI. Because maternal age at pregnancy was expected to correlate with maternal education, this may reflect lower socioeconomic status [57]. Our review also found that not living with both biological parents was a risk factor for ESI. One possible interpretation is that this type of living arrangement can introduce an unstable family dynamic for the adolescent. This is in line with another prior study that found that adolescents living with two parents had more stable lives and better subjective well-being compared to those living with one parent [58].
Strengths and Limitations
Strengths of this study include the longitudinal nature of the studies included in the review, which enabled us to explore a wide range of exposures collected earlier in life, before initiation of sexual intercourse. Our review operates from a multi-dimensional model composed of four integrative dimensions (individual, family, social and environmental, and sociodemographic) in which to classify available findings.
Our findings should be considered in light of some limitations. First, the cutoff age for ESI was not the same across studies. Second, studies varied widely in terms of sample size and sampling strategies. Third, participants were lost to follow-up for a wide range of reasons: adolescents not being present in school during data collection, parents' refusal for their children to participate or to answer specific sexuality questions, and an increased number of follow-up visits that were not planned in the original study design. Last, all studies were conducted in high-income countries. Given that sociodemographic exposures may affect family dynamics in many ways, future research should also be conducted in low- and middle-income countries.
Implications
The findings of our review highlight several implications for intervention, research, and policy. In terms of interventions, given our findings about the role of self-efficacy in shaping ESI and other risk behaviors, preventive interventions should focus on assertiveness, especially where there are issues related to gender differences, such as gender norms, sexual coercion and intimate partner violence [59]. Further, given our finding about co-occurring risk behaviors, programs targeting ESI among adolescents should consider integrating outcomes related to substance use, violence, and other outcomes. For instance, Life Skills Training, Positive Action, the All Star Program and Project PATHS are programs that target multiple outcomes [60][61][62][63]. Third, programs should consider reaching adolescents in the environments where they are likely to have the most impact. Our review suggests that school-based and peer interventions are likely to improve sexual health outcomes among this population, given the influence of schools and peer dynamics on ESI across studies. Similarly, our findings about the role of the media suggest a need for improvements in mass media programming targeted toward adolescents, as well as alternative options for adolescents to engage in during leisure time altogether. Additionally, parents need to be included in the prevention processes in order to be aware of how to monitor risky situations, manage their communication with adolescents, as well as how to understand the importance of adolescents' participation in health research.
In terms of policy, policymakers should encourage the development of evidence-based sexual behavior prevention interventions to improve health literacy among adolescents. Health literacy has been defined as the degree to which people can obtain, manage and understand health information and make proper health decisions. Prioritization of health literacy is a policy issue at the intersection of the fields of health and education [64]. Considering this approach, it is important that policies and programs address the following: substance use, improving adolescent knowledge and increasing their risk perception [65]; decision-making, to increase the ability of adolescents to make healthy decisions [48]; self-efficacy, to support their confidence in decisions made [66]; mental health, to help youth recognize and express healthy emotions and seek help when needed [67]; educational aspirations, to understand the adolescents' life perspectives and to improve their connectedness with peers and adults at school [68]; and media exposure, to alert adolescents of the risks and limitations of mass media consumption and stimulate them to critically assess the content exhibited [69].
Our review provides critical insight into risk factors for ESI across settings and studies. However, there are still research gaps. For example, given our findings about adolescents' vulnerability to sexual risk behaviors, further research on other associated public health issues, such as substance use and school attachment (achievement, aspirations, connectedness) are needed. Future research should also pay close attention to how sociodemographic characteristics are included and measured, to elucidate demographic differences in risk factors and tailor programs appropriately.
Conclusion
This systematic review of cohort studies identified a range of risk factors linking adolescent health behaviors and life circumstances to ESI. Although only seven studies were identified, the results provide an integrated picture of the relationship between risk factors and ESI in the available literature. Given that public health issues are generally multifactorial and their consequences have societal impacts, we believe that it is critical to implement and evaluate sexual behavior prevention programs with a more global approach to health behavior, tackling social and environmental factors simultaneously.
Summary
This systematic review provides a comprehensive assessment of risk factors related to early sexual intercourse among adolescents. We used PRISMA guidelines to review and identify eligible cohort studies published between January 1999 and December 2020. We adapted search strategies for three electronic databases: PubMed, Embase and LILACS. Identified studies were screened for quality and eligibility. In addition to basic study information, we extracted information on study setting, sample size, study design, follow-up, measures and statistical analyses, findings, and limitations. Of 2787 identified studies, seven met our inclusion criteria. The studies examined a range of factors, which were organized into four dimensions -individual, family, social and environmental, and sociodemographic. Risk factors with strong associations with ESI were: adolescent and parental substance use, aggression and conduct disorders, family attachment, school achievement, family living situation, and maternal education. Three studies were birth cohorts, with sample sizes ranging from 273 to 4,808. This review demonstrates the important roles of substance use, family attachment and academic factors in shaping adolescents' sexual behavior. A strength of this review is its focus on longitudinal studies, enabling exploration of exposures collected before initiation of sexual intercourse. | 2023-03-27T06:16:58.246Z | 2023-03-25T00:00:00.000 | {
"year": 2023,
"sha1": "ec5c3b17cf84c6a3831c8f056d76f7cf7dcffad7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f37ef9bbd33c26c9dd1d869f61d001972c1cbbdd",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7872256 | pes2o/s2orc | v3-fos-license | Author's reply.
locking plate for failed proximal femoral nail with non-union of proximal femoral fractures. Int Orthop 2016;40:1709-15. 3. Sommer C. Biomechanics and clinical application principles of locking plates. Suom Ortoped Traumatol 2006;29:20-4. 4. Gautier E, Sommer C. Guidelines for the clinical application of the LCP. Injury 2003;34 Suppl 2:B63-76.
Ipsilateral femoral neck and shaft fracture is a challenging condition with respect to implant selection. We included this condition because we have demonstrated a successful outcome in this condition using the distal femoral plate.
It is recommended to start the case in the lateral position only when a proper lateral view can be obtained on the C-arm. The patient's body habitus and the quality of the C-arm are the influencing factors. We find it difficult to manage these nonunions in the supine position because of posterior sagging at the fracture site.
The distal femoral locking compression plate (DF-LCP) is a fixed-angle device when locking screws are used. We used 6.5 mm nonlocking cancellous screws in the femoral neck cases. Furthermore, the 95° angulation of the locking screws helps obtain purchase in previously operated proximal femoral (PF) cases, where the tracks of earlier implants are the main obstacle to purchase. The multiple holes in the distal part of the plate offer many options for achieving adequate purchase in the proximal femur.
We included cases of nonunion and delayed union in the study while evaluating the DF-LCP. However, when concluding the study results, we discussed only nonunion, without mentioning the delayed union cases (14 cases of subtrochanteric delayed union, as mentioned in the materials and methods). We accept this as a shortcoming.
As rightly mentioned by the authors, in cases of nonunion, fixation in compression mode (absolute stability) is desirable, which they achieved using the DF-LCP. However, the authors have also described the use of long plates through a minimally invasive surgical approach (relative stability). It seems that the two philosophies (absolute and relative stability) were combined in a few of their cases. 3,4 This fact should have been elaborated in the study. We used a minimally invasive surgical approach in a few cases, as mentioned in the paper. We opened the fracture site, and the dynamic compression plate holes of the plate were used to achieve compression.
Lack of anterior curvature and limited screw options in the proximal femur were the limitations of the PF-LCP enumerated by the authors. Nonetheless, the screws in the proximal part of the PF-LCP are set at different angles (95°, 120°, and 135°), meant to engage the entire femoral head for adequate fixation of PF fractures. We have not compared this aspect of the plate with the distal femoral plate. We ruled out infection clinically and with blood investigations. We could not mention the average time to union because different PF fractures were managed using this technique.
Financial support and sponsorship
Nil.
Sir, We read with interest the editorial titled "Current state of orthopedic education in India." We agree with the author's opinion that currently MBBS-qualified doctors are not adequately trained in orthopedic surgery. 1 The subject of orthopedics has been neglected by MBBS students; they skip the orthopedic surgery posting, saying that orthopedics is not a major subject. The MBBS-qualified doctors available at primary health centers are not well trained to treat musculoskeletal injuries. 1 It is important to note that road traffic accidents are increasing these days with the growing number of vehicles. Unfortunately, most roads in rural India are narrow and congested, there are no separate lanes, and very few people strictly follow traffic rules and regulations. The frequency of head injuries is higher among two-wheeler riders, who often do not wear helmets. 2 India, being a developing country, lacks specialty hospitals in each and every place, and unfortunately, 72% of India's population lives in these rural areas. 3 Road traffic accidents and trauma cases in these rural areas are first seen by the doctors at the primary health center, where radiological investigation facilities are not available. Patients need to travel to cities or to the taluk level to consult an orthopedic surgeon. It is unfortunate that the ratio of orthopedic surgeons to population in India is 1:62,500. 1 The "C" arm (image intensifier) is not available in every operation theater, even at tertiary care hospitals. 4 The "C" arm, once expensive equipment, has become basic these days, and it is surprising that most orthopedic surgeons in rural practice operate without one. Performing surgery without the "C" arm is very hard, and the procedure will not be precise.
With the increase in medicolegal issues related to the field of medicine, especially in orthopedics, where no single specific method of treatment exists for a particular fracture or injury, a basic musculoskeletal emergency protocol at the primary health center is the need of the hour. Primary health centers lack the basic infrastructure of an adequate number of splints and skeletal traction kits, and basic knowledge of applying them correctly at the appropriate time is also lacking, particularly while transferring patients to tertiary care centers. This may be critical in preventing life-threatening complications such as fat embolism, which can occur with major long bone fractures such as fracture of the shaft of the femur.
It is true that interns, during their clinical rotation, concentrate more on reading postgraduate (PG) entrance examination books than on learning clinical skills at the hospital. 1 You are right about the PG entrance examination, which can be taken immediately after the final-year MBBS results, when students still have fresh memory of the subjects they studied during their 4½ years. This also makes the internship interesting, as it has to be completed before joining PG studies. We agree with your opinion that orthopedics should be made a separate subject in the final year of MBBS, just as pediatrics, which was earlier taught with internal medicine, is now a separate subject; similarly, orthopedics can be separated from general surgery. India, being a populous nation, has approximately one radiologist for every one lakh people, far fewer than North America, which has one radiologist for every ten thousand people. 5 It is heartening to know that the number of MBBS seats has increased in recent years, and there is increased MD/MS/
"year": 2017,
"sha1": "de056190f17f3ffec81eca3575db444425f0c729",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc5439326",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "de056190f17f3ffec81eca3575db444425f0c729",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Neuroproteomics and Systems Biology Approach to Identify Temporal Biomarker Changes Post Experimental Traumatic Brain Injury in Rats
Traumatic brain injury (TBI) represents a critical health problem for which diagnosis, management, and treatment remain challenging. TBI is a contributing factor in approximately one-third of all injury-related deaths in the United States. The Centers for Disease Control and Prevention estimate that 1.7 million people suffer a TBI in the United States annually. Efforts continue to focus on elucidating the complex molecular mechanisms underlying TBI pathophysiology and defining sensitive and specific biomarkers that can aid in improving patient management and care. Recently, the area of neuroproteomics–systems biology is proving to be a prominent tool in biomarker discovery for central nervous system injury and other neurological diseases. In this work, we employed the controlled cortical impact (CCI) model of experimental TBI in a rat model to assess the temporal–global proteome changes after the acute (1 day) and, for the first time, the subacute (7 days) post-injury time frames, using the established cation–anion exchange chromatography-1D SDS gel electrophoresis LC–MS/MS platform for protein separation combined with discrete systems biology analyses to identify temporal biomarker changes related to this rat TBI model. Rather than focusing on any one individual molecular entity, we used an in silico systems biology approach to understand the global dynamics that govern proteins that are differentially altered post-injury. In addition, gene ontology analysis of the proteomic data was conducted in order to categorize the proteins by molecular function, biological process, and cellular localization. Results show alterations in several proteins related to inflammatory responses and oxidative stress in both the acute (1 day) and subacute (7 days) periods post-TBI. Moreover, results suggest a differential upregulation of neuroprotective proteins at 7 days post-CCI involved in cellular functions such as neurite growth, regeneration, and axonal guidance. Our study is among the first to assess temporal neuroproteome changes in the CCI model. Data presented here unveil potential neural biomarkers and therapeutic targets that could be used for diagnosis, for treatment and, most importantly, for temporal prognostic assessment following brain injury. Of note, this work relies on an in silico bioinformatics approach to draw its conclusions; further functional studies are being conducted to validate and confirm the omics data obtained.
Keywords: proteomics, traumatic brain injury, controlled cortical impact, biomarker, prognosis and therapeutics, inflammation, oxidative stress
INTRODUCTION
Traumatic brain injury (TBI) is a major health concern for which diagnosis, care management, and treatment remain challenging. Each year, approximately 1.7 million people sustain a TBI in the United States, of whom 52,000 die, 275,000 are hospitalized, and 1.365 million are treated and released from emergency care. Importantly, TBI is a contributing factor to a third (30.5%) of all injury-related deaths in the United States (1)(2)(3). TBI also accounts for a larger percentage of injuries and casualties among United States military personnel serving in Iraq and Afghanistan compared to previous conflicts, mainly due to the prevalent use of improvised explosive devices and the high incidence of mild blast-overpressure exposure resulting in TBIs of various severities (4). The growing number of TBI cases among military personnel and civilians has brought new urgency to research efforts aimed at developing improved TBI diagnostic, prognostic, and treatment assays.
Neurological examination by the Glasgow Coma Scale (GCS) has traditionally been employed to assess acute TBI. However, this clinical tool has a number of limitations that often make it inaccurate in categorizing TBI severity in patients (5). In most cases, clinical TBI status may be difficult to determine due to confounding medical treatments such as sedatives, opioids, or neuromuscular blocking agents (6,7) and to illicit drug use and/or alcohol intake (8)(9)(10). The evolving nature of brain injury progression can further complicate GCS assessment. Current imaging methods such as magnetic resonance imaging (MRI) and computed tomography (CT) provide more objective information on the magnitude and localization of the injury. However, CT scans lack the sensitivity to detect mild to moderate diffuse brain injury, and the limited acute availability and feasibility of MRI restrict its broad clinical application (11,12). Taken together, these limitations and challenges illustrate the need for facile, rapid, and reliable alternate methods to assess brain injury.
Efforts to elucidate the complex mechanisms of TBI progression and to seek sensitive and specific biomarkers that can aid in optimizing TBI prognosis at acute and chronic long-term time points have been the main focus in the field of brain neurotrauma. Biomarker research has produced several putative, candidate TBI markers that can be derived from brain tissue, blood, and cerebrospinal fluid (CSF). These biomarkers are often assessed via various immunoassays, such as Western blotting or enzyme-linked immunosorbent assay (ELISA). For instance, our group has examined the accumulation of spectrin and its calpain-cleaved breakdown products in the CSF and brain tissue following TBI (13)(14)(15). In addition, our recent studies using the controlled cortical impact (CCI) model have demonstrated progressive brain pathologies in white matter involving myelin loss, delayed microvascular damage, and the appearance of focal microbleeds that are temporally and regionally associated with punctate blood-brain barrier breakdown and upregulation of glial and inflammatory biomarkers in the brain tissue starting from 24 h and progressing over 3 months following experimental TBI (16). S-100β, a calcium binding protein, has also been used as a severe TBI marker in serum (17)(18)(19) and CSF (20)(21)(22). Additionally, the concentration of myelin basic protein is elevated in human serum collected from pediatric TBI patients (18). Further, Tau proteins have been associated with elevated intracranial pressure, a symptom or component of TBI (23), and phosphorylated tau has been identified in serum up to several months after severe TBI (24). Nevertheless, despite the identification of these biomarkers via targeted approaches, many of them suffer from a lack of TBI specificity and may not indicate chronic temporal changes in TBI.
Recently, bioinformatics and in particular the application of neuroproteomic strategies to central nervous system (CNS) injuries has emerged as a promising biotechnology for identifying novel pathways and biological processes relevant to TBI pathophysiology, as well as pointing out which key genes/ proteins may serve as potential biomarkers and therapeutic drug targets (25)(26)(27)(28)(29). The potential of neuroproteomics platforms have been explored using acute paradigms of TBI (26,27,(29)(30)(31)(32)(33)(34)(35), spinal cord injury (36)(37)(38)(39)(40)(41)(42)(43)(44), and cerebral ischemia or stroke (34,(45)(46)(47)(48)(49)(50). Our group has previously reported TBI effects upon the global proteome where we combined cyanine labeling with SDS PAGE-capillary LC-MS/MS to study hippocampal tissue (30). Results from this work provided a framework for subsequent rapid and comprehensive sequence-specific biomarker discovery strategies that are currently used in our laboratory. This strategy employs tandem strong cation-anion exchange chromatography (first dimension) followed by 1D gel electrophoresis (second dimension) prior to LC-MS/MS of tryptic peptides extracted from the gel. This "bottom up" protein identification revealed 59 differentially expressed proteins (of which 21 were decreased and 38 were increased) in cortical tissue collected 48 h after CCI in rats (26). Nonetheless, studying TBI at multiple time points by MS/MS-based proteomics remains crucial, as it enables the correlation of relative protein expression to disease progression in experimental TBI animal models, which can be translated to clinical outcome metrics.
In this present work, we employed the well-established rat CCI model to examine the global proteomic differences between acute (1 day post-CCI) and subacute (7 days post-CCI) TBI using the cation-anion exchange chromatography-1D SDS gel electrophoresis (CAX-PAGE) LC-MS/MS platform (26,45). We then combined the temporal neuroproteomic response with systems biology strategies to infer time-dependent changes in cellular pathways caused by CCI. We also applied systems biology analyses to identify non-redundant pathways relevant to TBI at these different time points and conducted gene ontology analyses of the proteomics data to categorize the differentially expressed proteins by molecular function, biological process, and cellular localization. To the best of our knowledge, this study is among the first to assess the TBI protein dynamic changes associated with temporal resolution. Data from this work can be translated clinically to discover new potential biomarkers and therapeutic targets for TBI that could be used to better treat or prevent sequelae associated with this disease.
MATERIALS AND METHODS
Animal Housing Conditions
Animal handling and processing were conducted in compliance with the National Institutes of Health Guidelines for Animal Research (Guide for the Care and Use of Laboratory Animals) and according to the laws of the USA and regulations of the Department of Agriculture. They were also approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Florida. Rats were maintained and housed under pathogen-free conditions with constant temperature and humidity control.
Implementation of In Vivo Controlled Cortical Impact in Rats
A digital electromagnetic CCI device (Benchmark™ Stereotaxic Impactor, MyNeurolab, USA) was used to model TBI in male Sprague-Dawley rats as described previously (16). A total of 21 rats were divided into three groups of seven: a craniotomy (sham) group, a 1-day TBI group, and a 7-day TBI group. For the TBI procedure, each rat was mounted in a stereotactic frame, and the right (ipsilateral) cortex was impacted with a 4-mm diameter impactor tip at a velocity of 3.5 m/s to a depth of 2.5 mm; injured animals were assessed at 1 day (n = 5) and 7 days (n = 5) post-CCI. Prior to surgery, rats were anesthetized (45 mg ketamine/kg, 5 mg xylazine/kg) by intraperitoneal injection. An adequate level of anesthesia was determined by monitoring the corneal reflex and withdrawal of the hind limbs to painful stimuli. The injured rats were sacrificed by decapitation. From each group, we used five rats for our proteomics analysis, a sample size that has been validated to be statistically significant (26,35,(51)(52)(53). For our control animals, sham-injured rats were subjected to identical surgical procedures but did not receive an impact injury.
Brain Tissue Collection and Protein Extraction
Ipsilateral cortical brain tissue samples were rapidly dissected, washed with cold saline, snap frozen in liquid nitrogen, and stored at −80°C until use. Brain tissues were pulverized into a powder over dry ice and then solubilized in lysis buffer containing 1% (v/v) Triton X-100, 1 mM dithiothreitol (DTT), and one protease inhibitor cocktail tablet per 10 mL of total solution (Roche Diagnostics, IN, USA). The brain lysates were then centrifuged at 15,000 × g for 5 min at 4°C to remove insoluble debris. Protein concentrations were determined by the detergent compatible (DC) protein assay (Bio-Rad, CA, USA).
Cation-Anion Exchange Chromatography-1D SDS Gel Electrophoresis
A well-characterized offline multidimensional protein separation platform, compatible with high-throughput proteomic technology for biomarker identification, was applied to this study according to the previously published protocol (26,27). In brief, this platform consisted of ion chromatography using sulfopropyl-strong cation exchange (SCX-S1) and quaternary ammonium-strong anion (SAX-Q1) modified sepharose prepacked ion-exchange columns (Bio-Rad). Purified cortical tissue protein lysates from each of the three groups (the craniotomy group, 1 day post-CCI, 7 days post-CCI) were pooled to achieve a requisite protein quantity of 1 mg per single CAX injection and to average out inconsistent protein levels due to biological variability. The same buffers and separation protocols were used as published (52,53). UV chromatograms were collected at a wavelength of 280 nm for each run. Each 1-mL fraction was concentrated using Millipore YM-10 centrifugal filters (Millipore, MA, USA). Also, 20 μL of 2× Laemmli sample buffer (Invitrogen, NY, USA) was added to the YM-10 collection filters prior to collection by centrifugation at 1,000 × g for 3 min. For technical reproducibility, this experiment was repeated three times to compare each of the five control samples to their corresponding CCI samples (five samples for 1 day post-CCI and five samples for 7 days post-CCI). Protein concentration was measured afterward as a confirmatory step. Protein fractions were then run side-by-side (i.e., craniotomy fraction 1, CCI day 1 fraction 1, CCI day 7 fraction 1) on 18-well, 10-20% gradient Tris-HCl Bio-Rad Criterion gels (Bio-Rad, CA, USA) for differential comparison of sham controls and the five CCI samples at each of 1 and 7 days post-CCI. Gel bands were visualized and assessed for their intensity as described below.
Gel Band Visualization and Densitometric Analysis
Gel bands were visualized by Coomassie blue staining (Bio-Rad, Hercules, CA, USA). Quantitative densitometric analysis of select gel band intensities was performed using Image J software (Version 1.6, National Institutes of Health, Bethesda, MD, USA). Differential bands were manually identified and selected as regions of interest based on molecular weight estimation. The relative fold change in protein band intensity was derived by comparing intensity ratios among the sham control group, CCI day 1, and CCI day 7 and was calculated by dividing the greater value by the lesser value and adding a negative sign to indicate a decrease after CCI. Any bands with fold change greater than 1.5 were selected for subsequent differential protein identification.
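The band-selection rule above reduces to a simple ratio comparison. A minimal sketch in Python is shown below; the band names and intensity values are purely hypothetical and do not come from the study, and the thresholding mirrors the greater-over-lesser convention described in the text.

```python
def band_fold_change(sham_intensity: float, cci_intensity: float) -> float:
    """Fold change as described: divide the greater intensity by the lesser,
    with a negative sign when the band decreases after CCI."""
    if sham_intensity <= 0 or cci_intensity <= 0:
        raise ValueError("intensities must be positive")
    if cci_intensity >= sham_intensity:
        return cci_intensity / sham_intensity      # band increased after CCI
    return -(sham_intensity / cci_intensity)       # band decreased after CCI

# Hypothetical densitometric readings (sham, CCI) in arbitrary units.
bands = {"band_07": (1200.0, 2100.0), "band_15": (980.0, 520.0), "band_22": (800.0, 900.0)}
selected = {name: round(band_fold_change(sham, cci), 2)
            for name, (sham, cci) in bands.items()
            if abs(band_fold_change(sham, cci)) > 1.5}
print(selected)  # only bands with |fold change| > 1.5 are kept
```

In practice the intensities would be the ImageJ densitometry readouts exported for each region of interest.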
Reversed-Phase Liquid Chromatography Tandem Mass Spectrometry
Nano-reversed-phase liquid chromatography tandem mass spectrometry (RPLC-MS/MS) was employed for protein separation and identification using the same parameters as recently published by our group (53). Nanoflow was performed on a NanoAcquity UPLC (Waters, Milford, MA, USA); the autosampler was used to load 2 μL of each sample onto a 5-μm particle size Symmetry 180 μm × 20 mm C18 trapping column at 4 μL/min for 10 min. Then, the sample plug was loaded onto a 1.7 μm particle size BEH130 C18 100 μm × 100 mm analytical column at 300 nL/min. The mobile phase consisted of solvent A (water with 0.1% formic acid) and solvent B (acetonitrile with 0.1% formic acid). Separation was achieved within a run time of 115 min at a flow rate of 300 nL/min. The first linear gradient was from 1 to 40% B over 90 min; the second linear gradient was from 40 to 100% B over 5 min and held for 5 min before returning to the initial mobile-phase composition (1% B). Tandem mass spectra were collected on an LTQ-XL (Thermo, San Jose, CA, USA) using a data-dependent acquisition method in Xcalibur 2.0.7 (Thermo), in which data-dependent scanning was specified as the criterion to select the top 10 most abundant ions using 11 separate scan events at a given chromatographic time point (115 min) for subsequent analysis. The mass spectrometer was set to perform a full scan and subsequently MS/MS scans on the 10 most intense ions in the full-scan spectrum (scan event 1), with dynamic exclusion enabled. Dynamic exclusion temporarily puts a mass into an exclusion list after its MS/MS spectrum is acquired, providing the opportunity to collect MS/MS information on the second most intense ion from the full-scan spectrum (scan event 1). All MS/MS spectra were analyzed using Proteome Discoverer 1.3 (Thermo), SEQUEST (version 1.3.0.339), and X! Tandem [version CYCLONE (2010.12.01.1)]. Database search engines were set up to search a trypsin-indexed uniprot Rattus norvegicus FASTA database (accessed in 2015, 35,126 entries). The search was performed using the average mass for matching the precursor, with a fragment ion mass tolerance of 0.8 Da, a parent ion tolerance of 2.00 Da, and a maximum of two missed cleavage sites. Carbamidomethylation (+57 Da) of cysteine was selected as a static modification, while oxidation of methionine was selected as a dynamic modification. Using the output from SEQUEST and X! Tandem, Scaffold (version Scaffold_3.3.3, Proteome Software) was used to validate, organize, and interpret the mass spectrometry data. Peptide identifications were accepted if they could be established at greater than 95.0% probability as specified by the Peptide Prophet algorithm. Protein identifications were accepted if they could be established at greater than 99.9% probability and contained at least two peptides.
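The acceptance criteria stated above (peptide probability above 95%, protein probability above 99.9%, at least two peptides) amount to a simple filter over the identification table. A minimal sketch follows; the record layout and accession numbers are assumptions for illustration only, not the actual Scaffold export format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProteinID:
    accession: str
    protein_probability: float           # Protein Prophet probability, 0-1
    peptide_probabilities: List[float]   # Peptide Prophet probabilities, 0-1

def accept(hit: ProteinID) -> bool:
    """Acceptance criteria as described: protein probability > 99.9% and
    at least two peptides each identified above 95% probability."""
    confident_peptides = [p for p in hit.peptide_probabilities if p > 0.95]
    return hit.protein_probability > 0.999 and len(confident_peptides) >= 2

# Illustrative records only; real values would come from the Scaffold report.
hits = [ProteinID("P0DP29", 0.9995, [0.99, 0.97, 0.80]),
        ProteinID("Q9Z0W5", 0.9800, [0.99, 0.99])]
print([h.accession for h in hits if accept(h)])  # -> ['P0DP29']
```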
Neurosystems Biology and Gene Ontology Analysis and Statistical Testing
Neurosystems biology analyses were conducted using Elsevier's Pathway Studio v. 10.0 (Elsevier, MD, USA) to construct non-redundant pathways relevant to the two different injury time points. This software was used to interpret biological meaning from gene (protein) expression, build and analyze pathways, and find relationships among genes, proteins, cell processes, and diseases as indexed by the ResNet database (54,55). The "Subnetwork Enrichment Analysis" (SNEA) algorithm was selected to extract statistically significant altered biological and functional pathways pertaining to each identified set of protein hits in our study. The algorithm compares the subnetwork distribution to the background distribution using a one-sided Mann-Whitney U-test and calculates a p-value indicating the statistical significance of the difference between the two distributions.
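The enrichment test described here contrasts the expression changes of a subnetwork's members with the background distribution using a one-sided Mann-Whitney U-test. The sketch below illustrates that comparison with SciPy on invented fold-change values; it is not a reproduction of Pathway Studio's SNEA implementation.

```python
from scipy.stats import mannwhitneyu

# Hypothetical absolute fold changes: members of one candidate subnetwork
# versus all other quantified proteins (the background distribution).
subnetwork = [2.8, 3.1, 2.2, 4.0, 1.9]
background = [1.1, 1.3, 0.9, 1.6, 1.2, 1.0, 1.4, 1.5, 1.1, 1.3]

# One-sided test: are the subnetwork members shifted toward larger changes?
statistic, p_value = mannwhitneyu(subnetwork, background, alternative="greater")
print(f"U = {statistic:.1f}, p = {p_value:.4f}")  # a small p suggests an enriched subnetwork
```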
Separate lists of International Protein Index (IPI) accession numbers of the identified upregulated and downregulated proteins for 1 and 7 days post-CCI were imported into Pathway Studio. In each group, the upregulated proteins were assigned a value of +3, and the downregulated proteins were assigned a value of −3. The network analysis included a search for direct interactions and shortest paths between the identified proteins to map biologically relevant networks and identify relevant cellular processes. For a more comprehensive understanding of the classes of proteins found in the CCI model, we used the PANTHER bioinformatics analysis (http://www.pantherdb.org/genes/batchIdSearch.jsp) with the rat protein ontology database to classify proteins into distinct categories of molecular function and biological process. Integrated Venn diagram analysis was performed using "InteractiVenn," a web-based tool for the analysis of complex data sets (56).
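The import step and the later unique-versus-common comparison are essentially set operations on accession lists plus a fixed ±3 weighting. A minimal sketch with invented accessions follows; the real lists would come from the LC-MS/MS identifications.

```python
# Hypothetical accession sets; real ones come from the LC-MS/MS identifications.
up_day1, down_day1 = {"P08592", "P63018", "P11980"}, {"P07632", "P0DP29"}
up_day7, down_day7 = {"P08592", "P14668", "P42123"}, {"P07632", "P19527"}

# Signed values of the kind used for the Pathway Studio import described above.
signed_day1 = {**{acc: +3 for acc in up_day1}, **{acc: -3 for acc in down_day1}}

# Venn-style comparison of the two time points (common vs. unique proteins).
common_up   = up_day1 & up_day7      # increased at both 1 and 7 days post-CCI
unique_up_1 = up_day1 - up_day7      # increased at 1 day post-CCI only
unique_up_7 = up_day7 - up_day1      # increased at 7 days post-CCI only
print(sorted(common_up), sorted(unique_up_1), sorted(unique_up_7))
```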
RESULTS
Combined CAX-PAGE
This study employed a previously established neuroproteomics approach applied to TBI studies by our group (13)(14)(15) (Figure 1). Our experimental design required pooled ipsilateral cortical lysates (n = 5) from control craniotomy and injured rats (1 and 7 days post-CCI). The proteins in each sample were resolved using two-dimensional CAX-PAGE. Samples from either (1) the craniotomy control group, (2) 1 day post-CCI, or (3) 7 days post-CCI were sequentially separated by tandem strong cation and anion exchange chromatography according to protein charge. The overlaid CAX chromatograms of the sham control and the two injured groups illustrate differences among the three experimental groups, particularly in intensity in some regions (Figure 2). Thirty consecutive 1-mL fractions collected from each CAX separation were grouped (i.e., fraction 1 of sham control, fraction 1 of 1 day post-CCI, and fraction 1 of 7 days post-CCI) and loaded side-by-side onto a 1D gel to further resolve the proteins in the second dimension. A total of 42 differential bands were selected and excised from the gel for subsequent proteomic analysis as boxed and labeled in Figure 3. Relative fold changes were calculated between control and CCI (1 and 7 days) as shown in Figure 4, and bands with fold change greater than 1.5 were selected for subsequent differential protein identification. In the 1 day post-CCI group, half of the gel bands had decreased intensity and the other half had increased intensity (Figure 4A). However, at 7 days post-CCI, 14 bands showed a decrease in intensity, and 28 gel bands showed an increase in intensity (Figure 4B).
[Figure 1 | Schematic diagram of the differential CAX-PAGE LC-MS/MS proteomics platform used for the TBI study. Proteins in the pooled extracts were resolved in two dimensions, first by cation-anion exchange chromatography followed by 1D-SDS-PAGE. Differential bands were selected, excised for in-gel trypsin digestion, and analyzed by data-dependent LC-MS/MS. Protein database search against a rat database generated a list of proteins with altered expression. The identified differential proteins were subjected to neurosystems biology pathway analysis.]
Identification of Differential Proteins by LC-MS/MS at 1 and 7 Days Post-CCI
Following the gel densitometric analysis, 42 bands showing altered expression in either 1 or 7 days post-CCI were selected for LC-MS/MS analysis (Figure 3). Tables S1 and S2 in Supplementary Material summarize the identified proteins for each band with their corresponding peptide spectral data, sequence coverage and the directionality of the band intensities. In the 1 day post-CCI group, a total of 63 proteins were altered, of which 21 were only upregulated, 38 were only downregulated, and 4 were found to be identified in gel bands that expressed up/downregulation trends (Table S3 in Supplementary Material). While in 7 days post-CCI, a total of 56 proteins showed altered expression, with 41 proteins displaying increase in abundance alone, 13 proteins displaying decrease in abundance only, and 2 were found to demonstrate an increase and decrease in abundance at the same time (Table S4 in Supplementary Material).
Unique and Common Upregulated and Downregulated Proteins at 1 and 7 Days Post-CCI
In order to scrutinize the differences in protein expression at 1 and 7 days post-CCI, unique and common upregulated and/or downregulated proteins were identified between the two studied groups. There are 19 common proteins that showed an increase in abundance at both 1 and 7 days post-CCI (Table 1), including complement C3, peptidyl-prolyl cis-trans isomerase A (Pin1), elongation factor 2, and protein kinase C and casein kinase substrate in neurons protein 1 (PACSIN). Conversely, 15 common proteins were found to be downregulated at both time points (Table 1), including superoxide dismutase (SOD), neurofascin (NF), and calmodulin (CaM). Interestingly, some proteins were identified to be uniquely upregulated or downregulated at either 1 or 7 days post-CCI (Table 2). Indeed, 10 proteins were identified to be uniquely upregulated or downregulated at 1 day post-CCI (Table 2), while only 3 proteins were found to be exclusively upregulated at 7 days post-CCI (Table 2). Among the proteins showing altered expression unique to day 1 or day 7 are vinculin and protein-disulfide isomerase, which are differentially expressed at day 1, and annexin A5 (AnxA5), L-lactate dehydrogenase B chain (LDH-B), and the thyroid hormone-binding protein μ-crystallin (CRYM) homolog, which are upregulated at day 7 only.
Temporal Network Analysis of Altered Proteins at 1 and 7 Days Post-CCI
To further understand the difference in protein expression at the cellular level post-CCI between 1 and 7 days, analysis of the specific enriched pathways was performed. The analysis of neuroproteomics data generated interaction maps as presented in Figure 5. The network was generated using the "direct interaction" algorithm to map cellular process and interactions between altered proteins. The red color represents upregulated proteins in day 1 injury onset, while the blue color shows proteins that are downregulated. In the 1 day post-CCI group, the majority of the proteins are associated with apoptosis, inflammatory response, oxidative stress, and autophagy ( Figure 5A). The proteins identified in the 7-day post-CCI samples were also found to be involved not only in apoptosis, inflammatory response, and oxidative stress but also in ischemia as well as cell regeneration and cell growth ( Figure 5B).
Gene Ontology Analysis of the Differentially Expressed Proteins at 1 and 7 Days Post-CCI
For a more comprehensive understanding of the classes of proteins found in the different brain injury samples (1 vs. 7 days), we used the PANTHER application for the rat protein ontology database to classify proteins into distinct categories. Data were defined by biological processes and molecular functions (Figure 6). For 1 day post-CCI, 59 assignments obtained for biological processes were sorted into 13 classifications (Figure 6A), while 142 molecular functions were sorted into 8 classifications (Figure 6B). Similarly, for 7 days post-CCI, 52 assignments obtained for biological processes were categorized into 13 classifications (Figure 6C), whereas 119 molecular functions were sorted into 8 classifications (Figure 6D). The classification categories exceeded the number of proteins because some proteins may be assigned more than one molecular function and biological process. Percentages listed represent the number of proteins linked with a particular functional block normalized to the total number of proteins at each time point. Interestingly, the same biological processes and molecular functions were enriched at 1 and 7 days post-CCI but with differences noted in the percentages of proteins corresponding to each functional block.
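The percentages reported for each functional block follow directly from counting term assignments and normalizing. A minimal sketch is shown below, with invented GO term assignments standing in for the PANTHER output.

```python
from collections import Counter

# Hypothetical GO biological-process assignments; a protein may carry several terms,
# which is why assignments can outnumber proteins.
assignments = ["metabolic process", "immune system process", "metabolic process",
               "cell communication", "immune system process", "metabolic process",
               "developmental process"]

counts = Counter(assignments)
total = sum(counts.values())
percentages = {term: 100.0 * n / total for term, n in counts.items()}
for term, pct in sorted(percentages.items(), key=lambda kv: -kv[1]):
    print(f"{term:25s} {pct:5.1f}%")
```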
DISCUSSION
Traumatic brain injury imposes a substantial public health burden, and its diagnosis, management, and treatment remain challenging. With an estimated 10 million people affected annually by TBI worldwide, the World Health Organization (WHO) predicts that by the year 2020 TBI will surpass many diseases to become the third leading cause of global mortality and disability (57). Efforts have been made and continue to focus on elucidating the complex molecular mechanisms underlying TBI pathophysiology and defining specific biomarkers for this disease. Recently, bioinformatics and in particular neuroproteomic studies have proven to be a contemporary and convenient tool in biomarker discovery for many human diseases including CNS injury. In this study, we employed our rat CCI model to decipher mechanistic changes underlying TBI at the level of the proteome at 1 day (acute) and, for the first time, 7 days (subacute) following injury.
The high-throughput nature of proteomic studies generates a tremendous amount of data. Systems biology has been considered the latest contemporary domain in biological science that aims for a system-level elucidation of complex biological processes (58). Rather than focusing on the individual molecular entity, systems biology seeks to understand the system dynamics that govern protein networks and the functional set of proteins that regulate decisions related to the disease or injury.
In our study, the multidimensional CAX-PAGE RPLC-MS/MS proteomic platform was utilized to identify proteome changes in acute (1 day post-CCI) and subacute (7 days post-CCI) TBI, in which proteins were either upregulated or downregulated in response to injury. Here, a systems biology approach was utilized to identify altered cellular processes and pathways associated with the differential proteome of acute and subacute TBI.
[Figure 5 | Using Pathway Studio 9.0, altered proteins relevant to CCI at different time points were analyzed. The network was generated using the "direct interaction" algorithm to map biological processes and interactions among altered proteins. Several processes believed to be central to the pathogenesis of CCI were identified using this search. (A) Examples of those processes, including apoptosis, inflammatory response, oxidative stress, and autophagy, were shown to be altered at day 1. (B) Similarly, at day 7, processes including apoptosis, inflammatory response, oxidative stress, and ischemia, as well as cell regeneration and cell growth, were shown to be altered. The red color represents upregulated proteins, while the blue color signifies proteins that are downregulated.]
There are 15 proteins showing decreased abundance in both acute and subacute TBI (Table 1B). Among these are proteins such as macrophage migration inhibitory factor (MMIF), aconitate hydratase (aconitase), SOD, NF, and CaM. MMIF is a brain inflammatory mediator that has been shown to be elevated 1 day post-TBI (59). Although it is also increased in Alzheimer's disease (AD) and mild cognitive impairment (60), our results showed that MMIF expression is decreased at 1 and 7 days post-CCI. This may be attributable to other downstream proteins regulating MMIF function in the brain. Aconitate hydratase (aconitase) was also shown to be downregulated at the acute and subacute time points in this present work. Studies reveal that loss of cellular aconitase activity can be used as an index of oxidative stress damage (61,62).
[Figure 6 | Molecular functions (MFs) and biological processes (BPs) depicting proteins that are altered in response to CCI at 1 and 7 days are shown in pie charts. Proteins are classified into families and subfamilies of shared function, which are then categorized using a highly controlled vocabulary (ontology terms) by biological process (A,C) and molecular function (B,D). At 1 day post-CCI, 59 assignments were obtained for biological process and sorted into 13 categories (A), while 142 molecular functions were arranged into 8 classifications (B). At 7 days post-CCI, 52 assignments were obtained for biological process and sorted into 13 classifications (C), whereas 119 molecular functions were sorted into 8 categories (D). Some proteins may be assigned more than one molecular function and biological process. The percentages listed are calculated as the number of proteins associated with a particular functional block normalized to the total number of proteins.]
Superoxide dismutase is another oxidative stress-related protein. It is present in very small amounts in the extracellular matrix to remove excess superoxide anions that are released following TBI. Experiments performed on rats with cerebral contusion have shown that administration of exogenous lecithinized SOD reduces the edema produced by superoxide ions post-TBI (63,64). In our study, SOD was downregulated (nearly twofold) at 1 day post-CCI, with a greater decrease in expression of around fivefold at 7 days post-CCI. This is consistent with a previous study done on mice with brain injury, where SOD levels were significantly lower in the TBI group compared with the sham group (65). Interestingly, SOD may serve as a therapeutic target in TBI, where studies assessing the potential protective effect of SOD2 following brain injury have shown that increasing its expression in mice enhanced neuroprotection and decreased oxidative stress (65,66). Also, the overexpression of manganese superoxide dismutase (MnSOD) protected tissues from radiation by reducing experimental inflammation (67), an approach that could be further explored in TBI.
Additionally, CaM, a Ca2+ sensor that showed decreased expression in acute and subacute TBI, is known to mediate critical signaling functions through binding and regulating a diverse population of downstream targets referred to as calmodulin-binding proteins (CaMBPs). In TBI, our group developed a novel proteomic method that identified a total of 69 potential CaMBPs, of which 26 were known CaMBPs and 43 were putative novel CaMBPs (68). A study evaluating the biochemical mechanisms contributing to memory loss post-TBI has shown that unselective activation of calcium/calmodulin-dependent protein kinase II (CaMKII) signaling may disrupt the machinery for memory formation, resulting in this memory loss (69). Another study on calcineurin, a calcium/calmodulin-dependent phosphatase, has also found that its isoforms are differentially modulated by acute TBI in rats in a regionally specific manner (70). Future studies can hence be employed to further assess the functions of CaM downstream targets in TBI, which may help improve current therapeutic interventions and/or develop new treatments.
On the other hand, a total of 19 proteins showed increase in abundance in both acute and subacute TBI samples (Table 1A). Those include complement C3, Pin1, elongation factor 2, and PACSIN. The complement component C3 was found to be upregulated at both 1 and 7 days post-CCI, with higher expression at 7 days. This protein is necessary for normal neutrophil extravasation comprising the inflammatory reaction in the brain following TBI (71). Also, it has been suggested that complement activation does not induce membrane-damaging effects in penetrating TBI (72) but rather C3-positive microglia and co-localization of both C3 and membrane-attack complex were found on cell bodies and axons of neurons at the TBI areas in rats (73). Thus, C3 may serve as a potential therapeutic target modulating the CNS inflammatory injury (71).
In addition, Pin1 enzyme was highly expressed at both time points in our study. This protein has a neuroprotective role in the CNS where its ablation leads to premature aging and neurodegeneration. Patients with AD were found to have inactivated Pin1 in their neurons (74)(75)(76). Besides, studies have revealed that polymorphisms of the PIN-1 gene may affect neurodegeneration in the hippocampal area, which may lead to mild cognitive impairment and eventually AD (77,78). Hence, relevance of Pin1 pathology to TBI may be taken into consideration in the future to assess its potential role in TBI-induced dementia and AD especially that many posttranslational modifications, which may follow protein synthesis, are similar between AD and TBI (79).
Another protein with increased abundance is PACSIN. Although its function in TBI is still unclear, a study has revealed the involvement of PACSIN in signal transduction to the cytoskeleton of neurons through phosphorylation by protein kinase C and casein kinase 2 (80)(81)(82). Importantly, this phosphorylation by casein kinase 2 has been proven to precipitate Rac1-dependent spine formation in dendrites of hippocampal neurons (83). Moreover, PACSINs play crucial roles in protein packing and tubulation activity through tip-to-tip and wedge loop-mediated lateral interactions on the surface of neuronal membranes (84), and they promote biogenesis of membrane-trafficking intermediates by engaging with the inositol 5-phosphatase OCRL1 (85). Besides, a proteomic analysis of the hippocampus in early-stage AD model mice has revealed significant changes in the levels of 14 proteins, one of which is PACSIN 1, which was significantly increased (86). This consistency in the levels of certain proteins, such as PACSIN, between TBI and AD points to a common mechanistic ground linking the pathophysiology of both maladies (79). More than that, many of the neuroprotective proteins that are overexpressed in TBI, with higher expression in subacute TBI, can serve as novel candidate biomarkers and potential targets for TBI in the future.
Some of the altered proteins identified in our proteomic analysis were unique to each time point as well, where 10 proteins were recognized at 1 day post-CCI alone, while 3 were unique to 7 days post-CCI time point. Of the proteins that are altered at 1 day post-CCI only are vinculin and protein-disulfide isomerase (Table 2A). Vinculin, an integrin complexing protein known to be associated with synaptic destabilization and process retraction, has been shown to dissociate from cellular membranes following TBI (87). It usually participates in anchoring actin filaments to the cell membrane and plays a crucial role in cell motility. In a study done to investigate the levels of actin-anchoring proteins vinculin, talin, and paxillin in rat brains following stab wound injury, it has been shown that astrocytes in the white matter stained positive for these proteins, while none of them was found in the cortex (88). In our study, vinculin was upregulated at 1 day post-CCI but not expressed after 7 days. Moreover, protein-disulfide isomerase, a stress protein that is overexpressed in response to hypoxia in primary-cultured glial cells, can have beneficial effects against brain ischemia and plays a critical role in resistance to ischemic damage through its effect against apoptotic cell death (89). It is also upregulated in acute TBI alone in our study. Also, three proteins were uniquely upregulated at 7 days post-CCI including AnxA5, LDH-B, and CRYM (Table 2B), which may have potential roles following injury and can serve as novel biomarkers for subacute TBI. AnxA5 induces neuronal membrane repair via the formation of a protective 2D bandage at the membrane damaged site (90). On the other hand, the increase in the levels of l-lactate dehydrogenase A chain (LDH-A) and decrease in LDH-B have been shown to contribute to high brain lactate levels, which are predictive of aging phenotypes (91). Finally, CRYM and thyroid hormone levels may be key factors in the development of the pipecolate pathway in the brain (92), especially that neonatal decrease in these hormones is well-known to influence brain development and cause mental retardation and neurological damage (93,94).
Interestingly, our proteomics data analysis also revealed alteration in a subset of proteins that have not been previously studied or tackled in TBI. For instance, at both 1 and 7 days post-CCI, a number of proteins were found to be upregulated including elongation factor 2, alpha-1-inhibitor 3, and gamma-enolase (Table 1A). Eukaryotic elongation factor 2 is known to play a key role in regulating the protein translational machinery and controlling ribosomal movements across the mRNA, thus affecting the survival of neurons in cases of elevated oxidative stress (95). Alpha-1-inhibitor 3, a protein of the alpha-macroglobulin family and an acute phase reactant protein, has been previously studied and shows a role in the early stages of the inflammatory response (96), yet it has never been studied in TBI. In addition, gamma-enolase, also known as neuron-specific enolase, possesses neuroprotective effects on cultured neurons from embryonic rat brain (97). On the other hand, a subset of proteins was found to be downregulated at both time points including triosephosphate isomerase, protein DJ-1, and isoform 1 of NF (Table 1B). Inhibition of triosephosphate isomerase was shown to induce neuronal death in cultured murine cortical cells, while protein DJ-1 possesses a neuroprotective role in Parkinson's disease (98). The importance of those proteins is that they may serve as potential biomarkers and therapeutic targets for TBI, where future studies may be conducted to assess their exact function in the context of TBI. Moreover, other proteins that were found to be uniquely altered at each time point, such as alpha-enolase, complement inhibitory factor H, and zero beta-1 globin, and have never been studied in TBI, can also be assessed for their function in TBI (Table 2).
In order to scrutinize the differences in protein expression at a functional level, further analysis was carried forward using PANTHER software to identify enriched pathways and biological processes altered in TBI at different time points between acute and subacute states. This software is a unique resource that classifies genes and proteins by their functions using published scientific experimental evidence and evolutionary relationships abstracted by curators with the goal of predicting function even in the absence of direct experimental evidence. Proteins are classified into families and subfamilies of shared function, which are then categorized using a highly controlled vocabulary (ontology terms). In our study, common pathways were altered including apoptosis, inflammatory response, oxidative stress, and autophagy both at 1 and 7 days post-CCI. Yet, it is worthwhile to state that several cellular processes encompassing neuroprotective proteins were found to be enriched in subacute TBI, including cell regeneration, neurite outgrowth, axonogenesis, and cell growth. Interestingly, some pathways were found to be altered uniquely at 1 day post-CCI, such as cell migration, caspase, mitochondrial damage, neuron toxicity, and heat shock response, while others were found to be solely altered at 7 days post-CCI, including regeneration, axon guidance, axonogenesis, cell growth, and cell differentiation (Figure 5).
STUDY LIMITATIONS
This work builds on previous and recent studies from our lab and others that have shown the capabilities of neuroproteomics in identifying putative markers of TBI (UCH-L1, synaptotagmin, and spectrins) utilizing the same CAX separation platform as the one applied here (26,27,52). Several of these markers, such as UCH-L1 and GFAP, have been patented and translated to clinical settings (19). However, this work has a number of limitations that relate to the experimental design, the methodology, and, finally, compliance with the recent recommendations of the NIH common data elements (CDEs) for preclinical TBI (99).
First, our work was based on an in silico bioinformatics approach lacking validation or confirmation steps via wet lab techniques; these experiments were reserved for the follow-up functional analysis study, in which we selected a few of the identified proteins to study their dynamic alteration. Second, our work considered only two time points (1 vs. 7 days), while in reality, longer post-injury time points should be assessed to give a comprehensive overview of protein changes. Third, this work assessed only cortical regions, while the emphasis now is to look at different brain regions that can depict the global injury profile rather than being biased toward cortical areas. Therefore, future studies including multiple regions such as the hippocampus and thalamus will provide better insight into the injury profile. Besides, this work used the CCI model, which may not truly be the ideal TBI model to recapitulate the different TBI pathologies. This will be addressed in future studies by comparing our results to different models, such as repetitive mild TBI, and, subsequently, analysis of clinical samples may be considered. It is likely that there will be common proteins/pathways retrieved; yet there also may be unique findings between focal, diffuse, or even axonal injury pathologies. As per the recommendation of the NIH CDEs for preclinical TBI, a female cohort should be included to highlight the diversity in injury profile and avoid any experimental bias; however, one should bear in mind that these neuroproteomics studies have their complexities and limitations in determining the number of samples to be used, the pooling considerations, and, finally, the time and cost they require. Finally, it is of interest to mention and highlight the shortcomings of this proteomics approach and other high-throughput approaches (MRI, transcriptomics, gene arrays, etc.) generating "big data," which may be biased toward the instrumentation in use or the method of selection. This will often lead to missing a big chunk of extremely valuable data in any experimental study. To illustrate this point, it is estimated that the human genome contains around 20,000 protein-encoding genes, while the total number of proteins in human cells is estimated to range between 250,000 and one million. Nevertheless, any genomics study and/or complementary proteomics platform will identify on average 100-400 changed genes/proteins, which raises the question of the fate of the other altered genes/proteins that are missed due to methodological limitations. To conclude, this work utilizing the systems biology concept has shown that it can highlight several differential pathways and proteins, which can be of high prognostic and diagnostic value. Several of the identified protein hits can be translated clinically as putative markers in clinical TBI.
CONCLUSION
As noted from the experimental model and results obtained, the aim of this work is to provide an overview of the altered protein dynamics that are commonly and/or uniquely altered in the acute and subacute phases of an experimental model of TBI. Collectively, data obtained from this work suggest that many of the differentially identified proteins in the acute and subacute phases of TBI may serve as potential neural biomarkers and therapeutic targets for TBI. Of interest, the identified acute phase protein C3 seems to be a promising candidate marker that is easily measured in human TBI biofluids. Recently, there has been increased interest in inflammatory markers in neurotrauma that can be coupled with neural-specific proteins to constitute a panel of TBI markers; several of these inflammatory markers have been shown to be elevated clinically after severe TBI (100). Similarly, the identified structural proteins constitute an interesting set of markers to be assessed at post-injury time points, since these protein families have been among the grass roots of acute TBI biomarkers (βII-αII spectrins, ankyrins) (101-104).
AUTHOR CONTRIBUTIONS
KW, RH, and FK designed, developed, and tested the algorithm on real data (Mouse Genome). JG-C, ZZ, and AM prepared and performed the proteomics data, analyzed the omics, and interpreted the results. JA, RR, and AW helped in writing the manuscript. FK and HB assisted in the systems biology data analysis. OG has performed the TBI animal injury and sacrifice. All the authors assisted in the final assessment of data and reviewed the manuscript. AW, RH, and KW conceived the study design and obtained funding for the study. AB, SM, JG-C, AW, and ZZ revised and edited the manuscript. All the authors have read and approved the final manuscript.
"year": 2016,
"sha1": "4571dd3fabbe5af57d4d702cf803cd70eb1240f1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2016.00198/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4571dd3fabbe5af57d4d702cf803cd70eb1240f1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Acute testicular ischemia following manual reduction of inguinoscrotal hernia
Testicular ischemia caused by inguinal hernia repair, and even the presence of the hernia itself, has been recognized in the medical literature, with the latter more commonly in children, but such an event after manual reduction has never been reported before. We present the case of a 67-year-old man who presented to the emergency department with a painful left groin lump. A left inguinoscrotal hernia was diagnosed and reduced “en masse” with manual pressure at the bedside. The patient was discharged but developed acute-onset left scrotal pain as soon as he got home and then re-presented 2 days later with increasing severity of the pain and swelling ever since the hernia reduction. On examination, he was febrile, with a hard, tender, and swollen left testis. Serum inflammatory markers were elevated. Conservative management with intravenous antibiotics and analgesia was commenced. An ultrasound of the testes demonstrated lack of Doppler flow to the left testis, suggestive of acute ischemia. Three days later, there were persistent temperature spikes and significant pain; therefore, the patient underwent an acute left scrotal exploration where a necrotic, black left testis was discovered and excised. He was discharged on the 1st postoperative day; histological analysis confirmed testicular infarction.
INTRODUCTION
Inguinal hernias are common, with a lifetime risk of 27% in men and accounting for 75% of all abdominal wall hernias. [1] Patients often present in the emergency setting with painful or irreducible inguinal hernias, and often, clinicians will attempt reduction at the bedside, especially if it is preferable to avoid an operation. Such hernia reduction "en masse" has been reported to cause severe complications, with persistently incarcerated or strangulated bowel within the hernia postreduction widely reported in the medical literature. [2,3] The effects of this action on the testicular vasculature however are much rarer, and we describe an unusual case of manual reduction of an inguinoscrotal hernia, resulting in acute testicular ischemia.
CASE REPORT
A 67-year-old male with a background of hypertension presented to the emergency department with a painful left groin lump. A left inguinoscrotal hernia was diagnosed and reduced "en masse" with manual pressure at the bedside.
The patient was discharged but developed acute-onset left scrotal pain as soon as he got home, and he re-presented 2 days later with increasing severity of the pain and swelling ever since the hernia reduction. On examination, he had a temperature of 38°C and a hard, tender, swollen left testis. Serum inflammatory markers were elevated, with a white cell count of 21 × 10⁹/L and a C-reactive protein of 157 mg/L.
The initial diagnosis at this point given the unusual history was of an acute left epididymo-orchitis, and conservative management with intravenous ciprofloxacin and analgesia was commenced. An ultrasound scan of the testes demonstrated lack of Doppler flow to the left testis, suggestive of testicular ischemia; the right testis was normal. Three days later, the patient continued to spike temperatures and was still complaining of significant pain. He therefore underwent an acute left scrotal exploration and left scrotal orchidectomy, where a necrotic, black, nontorted left testis was discovered and excised.
The patient was discharged on the 1st postoperative day with little pain. Subsequent histological analysis confirmed acute testicular infarction.
DISCUSSION
Testicular atrophy, and even infarction, after inguinal hernia repair is a recognized complication, due to injury to the spermatic cord vessels at the time of surgery. [4] Testicular ischemia caused by an inguinal hernia itself compressing the vasculature has been reported in an adult patient, with subsequent testicular salvage possible through operative intervention, [5] and one study looking at testicular ultrasonography in men with unilateral inguinal hernias found significantly higher testicular volumes on the ipsilateral side compared to the contralateral testis, with the mechanism attributed to intermittent mechanical compression of the spermatic cord within the inguinal canal. [6] This phenomenon has been more commonly described in infants and children with inguinal hernias, with a reported incidence of 5%-34%, [7,8] which has been found to be higher in those with incarcerated hernias. [9] As alluded to in the introduction, the potential effects on the bowel of en masse reduction of inguinal hernias are well recognized. This clinical maneuver has been described to have caused testicular infarction in a 2-month-old infant, resulting in orchidectomy. [10] However, to our knowledge, this is the first reported case in an adult patient, and it is likely to have resulted from manual compression of the testicular blood vessels in the spermatic cord as it ran through the inguinal canal. Unfortunately, testicular loss occurred in this case, possibly because of the rarity of acute ischemia in this setting. Had the patient re-presented earlier, his case might have been treated as an acute testicular torsion, even with the history of the hernia reduction, and undergone more urgent surgical intervention. Such a complication, albeit rare, should therefore be considered when severe testicular pain occurs following inguinal hernia reduction.
CONCLUSION
Testicular ischemia following en masse reduction of an inguinoscrotal hernia is a rare but serious event which clinicians should be wary of before attempting reduction of incarcerated inguinal hernias en masse, in addition to the recognized bowel complications. If patients present atypically with the development of severe testicular pain post-reduction, then acute testicular ischemia secondary to the maneuver should be considered. One should have a low threshold for operative intervention, particularly in the presence of ongoing severe pain; where the symptoms have been present for more than 48 h after their onset, imaging in the form of an ultrasound scan may be considered to confirm the diagnosis.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given his consent for his images and other clinical information to be reported in the journal. The patient understands that his name and initials will not be published and due efforts will be made to conceal identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2020-10-28T13:52:30.983Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "d554110ac10c6e52933031c7f22b9ce46f81af73",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ua.ua_38_20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c74c915f8316e48a84a6281b6b208291917d06a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270576143 | pes2o/s2orc | v3-fos-license | Cerium Dioxide–Dextran Nanocomposites in the Development of a Medical Product for Wound Healing: Physical, Chemical and Biomedical Characteristics
Purpose of the study: the creation of a dextran coating on cerium oxide crystals using different ratios of cerium and dextran to synthesize nanocomposites, and the selection of the best nanocomposite to develop a nanodrug that accelerates quality wound healing with a new type of antimicrobial effect. Materials and methods: Nanocomposites were synthesized using cerium nitrate and dextran polysaccharide (6000 Da) at four different initial ratios of Ce(NO3)3x6H2O to dextran (by weight)—1:0.5 (Ce0.5D); 1:1 (Ce1D); 1:2 (Ce2D); and 1:3 (Ce3D). A series of physicochemical experiments were performed to characterize the created nanocomposites: UV-spectroscopy; X-ray phase analysis; transmission electron microscopy; dynamic light scattering and IR-spectroscopy. The biomedical effects of nanocomposites were studied on human fibroblast cell culture with an evaluation of their effect on the metabolic and proliferative activity of cells using an MTT test and direct cell counting. Antimicrobial activity was studied by mass spectrometry using gas chromatography–mass spectrometry against E. coli after 24 h and 48 h of co-incubation. Results: According to the physicochemical studies, nanocrystals less than 5 nm in size with diffraction peaks characteristic of cerium dioxide were identified in all synthesized nanocomposites. With increasing polysaccharide concentration, the particle size of cerium dioxide decreased, and the smallest nanoparticles (<2 nm) were in Ce2D and Ce3D composites. The results of cell experiments showed a high level of safety of dextran nanoceria, while the absence of cytotoxicity (100% cell survival rate) was established for Ce2D and C3D sols. At a nanoceria concentration of 10−2 M, the proliferative activity of fibroblasts was statistically significantly enhanced only when co-cultured with Ce2D, but decreased with Ce3D. The metabolic activity of fibroblasts after 72 h of co-cultivation with nano composites increased with increasing dextran concentration, and the highest level was registered in Ce3D; from the dextran group, differences were registered in Ce2D and Ce3D sols. As a result of the microbiological study, the best antimicrobial activity (bacteriostatic effect) was found for Ce0.5D and Ce2D, which significantly inhibited the multiplication of E. coli after 24 h by an average of 22–27%, and after 48 h, all nanocomposites suppressed the multiplication of E. coli by 58–77%, which was the most pronounced for Ce0.5D, Ce1D, and Ce2D. Conclusions: The necessary physical characteristics of nanoceria–dextran nanocomposites that provide the best wound healing biological effects were determined. Ce2D at a concentration of 10−3 M, which stimulates cell proliferation and metabolism up to 2.5 times and allows a reduction in the rate of microorganism multiplication by three to four times, was selected for subsequent nanodrug creation.
Introduction
Public health issues are among the most important issues in the modern world.Among the many issues requiring medical care, trauma, including wounds, ranks first in terms of frequency and severity [1][2][3].Ubiquitous health advertising and sports, especially extreme sports, have played a negative role, providing an increase in the number of somatic and skin wounds in the most physically active part of the human population, those aged 14-35 years old [4].A special place is occupied by gunshot wounds, the number of which continues to increase proportionally both in civil society and in zones of armed military conflicts [5][6][7].The important differences of gunshot wounds are the combination of the lesion with extensive skin lesions and the obligatory contamination of the wounds with defeaters, a wide range of aerobic and anaerobic microorganisms.The situation is greatly aggravated by the decreasing antimicrobial activity of antimicrobial drugs and increasing antibiotic resistance and has become one of the major problems in modern medicine and pharmacology.Rapid mutations of microorganisms, outpacing the speed of development of new antimicrobial drugs, slowly but surely ensure the transition of this problem from the category of issues requiring resolution to a state close to disaster.The "epidemic of antibiotic-resistant microorganisms" that has spread everywhere, especially in intensive care units, where the fight against bacterial complications is one of the top priorities, cannot be considered otherwise [8][9][10][11][12][13][14].
Regeneration in skin injuries is the second most important but the most frequently addressed problem.Acute skin damage in everyday life is a routine situation, but chronic ulcers in patients with diabetes mellitus, tissue arterial ischemia, varicose veins, and postthrombotic syndrome are a heavy burden on family and state budgets [15][16][17][18].Great hopes were pinned on mesenchymal stem cells.The biological theories that were being put forward were impressive due to their coherence and completeness [19][20][21][22].The results obtained in laboratories and vivariums gave hope of a breakthrough in healthcare and successful promotion of these technologies in clinical practice.Unfortunately, the breakthrough did not happen, and the scientific medical world began to be conquered by a new idea-the idea of using nanomaterials that exhibit unexpectedly strong biological effects that could be used to treat some somatic diseases and skin lesions [23,24].
Interest in nanoparticles in medicine, which first emerged in the middle of the last century, grew rapidly as the results of experiments with metal nanoparticles, especially those with variable valence, became available.The confirmed mechanisms of drug delivery directly to tumor cells using nanomaterials, which reduced the toxic load and improved the treatment outcome of cancer patients, further ensured the interest of the medical community in nanotechnology [25][26][27][28][29][30][31].
One promising metal oxide is cerium dioxide.In the last five years, according to PubMed data from 2019 to 2024, there were 7066 papers published on nanoceria, almost double the number of publications in the previous 5 years.Most of the studies investigating the biological effects of nanoceria have shown its redox activity as well as regenerative and even antimicrobial and pro-regenerative effects [32][33][34][35][36][37][38][39].There are many problems in the synthesis of new groups of nanodrugs that have not been encountered before.For example, the biological activity of nanoceria depends not only on the chemical purity of the product.The shape of the nanoceria crystal and its properties vary depending on the method of its synthesis, changes in the pH of the medium, etc. [34,[39][40][41][42]. Similarly, the same dependencies occur with different nanoparticle coatings [41][42][43].Coatings of cerium nanoparticles with various substances, most often of plant origin, are a necessity because they prevent the aggregation of nanocrystals, which leads to a significant decrease in the total area of crystals, which comes into direct contact with the environment.This aggregation leads to a decrease in or disappearance of biological effects, which are illustrated by many researchers when using properly prepared nanoceria sols [44][45][46][47].Natural polysaccharides (agar-agar, alginate, pullunan, chitosan, fucoidan, dextran, and pectin) are often used as coatings for metal nanoparticles [44,48,49].Focusing on the world literature, dextran, which has long been used in medicine for several indications, has performed well in many biological studies with nanoparticles [50][51][52][53][54][55].It is known that dextran polysaccharide is a branched glucose polymer with an average chain mass ranging from 3 to 20000 kDa, which is synthesized by acetic acid bacteria from sucrose.In our studies, we also chose dextran as more accessible and showing high tropism to cerium dioxide nanocrystals in alkaline medium [56,57].
In this regard, the aim of the present work was to synthesize and select the best nanocomposite based on rare earth metal oxide nanoparticles and polysaccharide (dextranstabilized nanocrystalline cerium oxide) for the subsequent creation of a medical and/or veterinary nanodrug for wound regeneration with a new type of antimicrobial effect.
Results of Evaluation of Physicochemical Properties of the Synthesized Dextran-CeO 2 Nanocomposites
The results of UV-visible spectroscopy for the dextran-coated nanoceria sols indicate the formation of the cerium dioxide phase (Figure 1); with increasing dextran content, the absorption band of cerium dioxide shifts toward the visible region (up to 400 nm), which indirectly indicates a change in particle size. According to the X-ray phase analysis data, the synthesized variants of the nanocomposites are CeO2 nanoparticles with a size of 0.5-4 nm (Figure 2). Moreover, the more dextran in the composition of the composite, the less clear the diffraction peaks characteristic of cerium dioxide. This is due to the fact that, with increasing concentration of polysaccharide in the composite, the thickness of the dextran coating of the nanoceria increases.
The data on the size of the cerium oxide particles obtained from the analysis of the diffraction maxima are shown in Figure 3, which shows that the size of the cerium nanoparticles decreased as the dextran concentration increased. The largest size of the cerium oxide nanocrystals (3.3 nm on average) was recorded at the lowest addition of dextran (Ce0.5D); in the Ce1D (1:1) sample, the size of the nanoparticles was 45% smaller, averaging 2.2 nm. The smallest nanoparticle size was in the Ce2D and Ce3D samples (more than two times smaller than at the minimum dextran concentration). At the same time, the median range of values for the Ce2D composite, bounded by the 25-75% percentiles, was significantly narrower (1.2-1.7 nm, Me = 1.5 nm) than for the Ce3D sample (0.4-2.0 nm, Me = 1.2 nm). The data obtained suggest that the strength of the biological effect is greater in Ce2D and Ce3D, which had the smallest nanoparticle core size, of which the Ce2D nanocomposite, with minimal variation in nanocrystal size, is the most predictable in terms of stability of biological effects.
Transmission electron microscopy results confirmed that the nanoparticle sizes were less than 5 nm (Figure 4), while increasing the dextran content led to a decrease in particle size to 0.5-2.5 nm in Ce2D and Ce3D, which agrees well with the XRD data.
The results of dynamic light scattering established that, as the content of dextran in the colloidal solution of cerium dioxide increases, the hydrodynamic radius of the particles increases (Figure 5). This is due to the increase in the number of adsorbed dextran molecules on the surface of the cerium dioxide particles, even though, according to the X-ray diffraction analysis and electron microscopy data, the size of the cerium dioxide particles themselves decreases.
The DLS method allows us to evaluate the aggregative stability of the cerium dioxide sols stabilized by dextran without directly estimating the actual size of the nanoparticles. The DLS results demonstrate that in the synthesis of the cerium dioxide sols, increasing the polysaccharide content from 0.5 to 1 (by mass) leads to the formation of particles with hydrodynamic diameters ranging from 10 to 20 nm and 100 nm; further increasing the dextran concentration to 2 and 3 (by mass) leads to an increase in the diameter of the nanocomposites from 110 nm to 200-300 nm.
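The text does not state which relations were used to convert the diffraction-peak broadening and the DLS correlation data into sizes; for orientation, the relations conventionally used for these two measurements are the Scherrer equation and the Stokes-Einstein equation, sketched below under that assumption.

```latex
% Conventional size relations (an assumption; the source does not name the exact method).
% Scherrer estimate of crystallite size d from the broadening \beta (FWHM, in radians)
% of a diffraction maximum at Bragg angle \theta, with shape factor K \approx 0.9:
\[
  d = \frac{K\,\lambda}{\beta\cos\theta}, \qquad \lambda_{\mathrm{CuK}\alpha} \approx 0.154\ \text{nm}
\]
% Stokes-Einstein relation giving the hydrodynamic diameter d_H from the translational
% diffusion coefficient D extracted from the DLS correlation function:
\[
  d_H = \frac{k_B T}{3\pi\,\eta\,D}
\]
```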
According to the IR spectroscopy data, all obtained samples are composites of cerium dioxide + dextran composition. Figure 6 shows that the peaks at 3415, 2925, and 1648 cm⁻¹ characteristic of dextran are absent in the samples of the compositions Ce:0.5D, Ce:1D, Ce:2D, and Ce:3D, which may indicate the formation of a composite of a new composition. One of the main peaks is a strong band in the 3415 cm⁻¹ region, which corresponds to an asymmetric O-H vibration that overlaps with intramolecular hydrogen bonding signals.
One peak between 2925 and 2932 cm⁻¹ can be attributed to the symmetric and asymmetric C-H bonding. The peak at 1648 cm⁻¹ corresponds to the aqueous solvate layer of the polysaccharide. Similar peaks are characteristic of a large number of complex polysaccharides such as chitosans, galactans, and glucans. The peak at 1156 cm
The above-described peaks on the curves of the composites Ce:0.5D, Ce:1D, Ce:2D, and Ce:3D are changed (present with low intensity or absent). This may additionally indicate the formation of new dextran-nanoceria bonds, which have new properties that are not characteristic of dextran. The wavelength differences among the nanocomposites indicate different properties of the synthesized nanomaterials differing in the cerium-to-dextran ratio. Consequently, each nanocomposite may exhibit different biological properties.
Results of Evaluation of the Effect of Nanocomposites on Cytotoxicity, Metabolic and Proliferative Activity of Human Fibroblast Cell Culture
Firstly, the study of the effect of nanocomposites on the metabolic activity of human cells was performed.This is important to study on fibroblast cell cultures in particular, as these cells can be stimulated either in the form of proliferation with an increase in cell population or in the form of increased production of the intercellular substance they produce, which is very important for wound healing in the skin.
In the course of analyzing the results of the MTT test, it was found that the metabolic activity of fibroblasts at 72 h of co-cultivation increased with increasing dextran content in the nano-composites studied at a concentration of 10 −2 M. The highest level, significantly different from both the control group and the comparison group of dextran, was recorded in the Ce3D sols.Moreover, the Ce2D and Ce3D groups did not differ from each other, and in most cases, fibroblast metabolism was higher than in the control (Figure 7).No statistically significant differences were found between Ce0.5D and Ce1D groups.These groups also did not differ from the control and dextran groups, although they were slightly superior to dextran in terms of effect.
The data obtained suggest that the stimulation of fibroblast metabolism probably depends on the concentration of polysaccharide and the size of nanoparticles in the nanocomposite; the smaller the size of the inorganic core of the nanocomposite (cerium dioxide) and the larger the hydrodynamic polysaccharide radius, the higher the degree of stimulation of the metabolic activity of fibroblasts and of their production of the interstitial wound-filling substance.
Cell counting after 72 h of co-culture determined inter-group differences, indicating the effect of nanocomposites at a concentration of 10 −2 M on human fibroblast proliferation.
A clear advantage of the Ce2D nanocomposite was demonstrated, where a 107-291% stimulation of cell division was recorded, with an average of 172.1% ± 46.8% relative to control (p < 0.001).In the groups of dextran, Ce0.5D, and Ce1D, no statistically significant differences in cell number were registered relative to the control, while in the Ce0.5D group, an unreliable tendency to inhibition of fibroblast proliferation by an average of 23% relative to the control was determined.Ce3D nanocomposite at a concentration of 10 −3 M was found to significantly inhibit fibroblast proliferation by an average of 75% relative to control (34.6 ± 27.4%, p < 0.001).According to ANOVA Bonferroni post hoc test, statistically significant differences were recorded between the performance of Ce2D and Ce3D nanocomposites from all other studied groups.Ce2D was the best in this parameter, while Ce3D was the worst (Figure 8).The obtained results prompted us to perform cell counting at different concentrations of all studied nanocomposites.For this purpose, sols at concentrations of 10 −3 M, 10 −4 M, 10 −5 M, and 10 −6 M were prepared using water for injection.
From the results of direct cell counting, it was determined that fibroblast proliferation was enhanced to the greatest extent and over the widest range of concentrations when co-cultured with the Ce2D nanocomposite (172% on average at a concentration of 10⁻² M, 245% at 10⁻³ M, 193% at 10⁻⁴ M, and 145% relative to control at 10⁻⁵ M, p < 0.05), with the best effect, significantly superior to all other subgroups, registered when exposed to Ce2D at the 10⁻³ M concentration. On average, the number of fibroblasts after 72 h of co-cultivation in the Ce2D sample at the 10⁻³ M concentration was 245 ± 63% relative to the control. The Ce0.5D nanocomposite had no effect on cell numbers at all concentrations. The Ce1D nanocomposite enhanced the proliferation of human fibroblasts at concentrations of 10⁻³ to 10⁻⁵ M by an average of 164-187% relative to control (p < 0.05). Ce3D significantly enhanced fibroblast proliferation only at the concentration of 10⁻³ M, by an average of 196% (Figure 9).
It is interesting to note that the maximum efficiency of all nanocomposites was registered at the concentration of 10⁻³ M, with a clear advantage of the Ce2D composite, at which the number of fibroblasts after 72 h was 2.5 times higher than in the control (at the 10⁻³ M concentration: Ce0.5D, 1.4 times; Ce1D, 1.5 times; Ce3D, 2.0 times relative to the control, p < 0.01).
To determine the cytotoxicity and safety of the synthesized nanocomposites, cells were visualized by light microscopy (Figure 10) and the percentage of dead cells was evaluated. At a high concentration (10⁻² M of cerium dioxide), 100% cell survival in all 12 samples was recorded in the control, Ce2D, and Ce3D groups. The latter situation (absence of cell death along with normal cell appearance) against the background of cell division inhibition proves the safety of the samples, with living cells spending energy not on proliferation but on collagen and elastin synthesis, which is confirmed by the MTT-test data.
The most frequent dead cells were determined in the Ce0.5D group (16.7% of cases registered 2-29% dead cells). Single cases of dead cells were reported in the dextran and Ce1D groups (dextran: 25% of cases (3 out of 12 wells) reported 2% to 5% dead cells; Ce1D: 16.7% of cases (2 out of 12 wells) reported 5% dead cells). Although this is not an indication of cytotoxicity (the values are acceptable), it is reasonable to select Ce2D and Ce3D for future drug development.
At other concentrations, the more dilute sols also showed the advantages of the Ce2D and Ce3D nanocomposites, with 100% survival (no dead cells) for both nanocomposites with excess dextran recorded at concentrations of 10⁻² M, 10⁻³ M, and 10⁻⁶ M (otherwise, up to 2-5% dead cells in 1-2 out of 12 samples). In the samples with the Ce1D nanocomposite, insignificant numbers (up to 5%) of dead cells in 1-2 out of 12 wells were recorded at all concentrations. In the samples with the Ce0.5D nanocomposite, there were no dead cells only at the lowest concentrations (10⁻⁵-10⁻⁶ M).
Thus, the most preferred and safe nanocomposite for the future development of a drug to accelerate wound healing is Ce2D, characterized by a balance of simultaneous stimulation of both cell proliferation and metabolism with the most predictable dose-dependent effect.
Antimicrobial Activity of Nanocomposites
The study by gas chromatography with mass spectrometry revealed that the cerium oxide-based nanocomposites coated with dextran in different ratios affect E. coli abundance, with a significant bacteriostatic effect at all cerium-to-dextran ratios after 48 h of incubation; after 24 h of incubation, this only occurred with two types of composites (Ce0.5D and Ce2D).
Thus, after 24 h of incubation in the control groups, where there was only nutrient medium and E. coli, the average number of microbial bodies was on average 1.36 times higher than in the samples with the addition of 10 vol% Ce0.5D at a dose of 10⁻² M, where the number of microbial bodies averaged 250 ± 5.5 × 10⁵ cells/g of medium (p < 0.01), and 1.28 times higher than in the samples with the addition of Ce2D at the same dose of 10⁻² M (p < 0.01), where the number of E. coli averaged 265.7 ± 4.1 × 10⁵ cells/g of medium. Accordingly, the percentage of significant suppression of E. coli growth when co-cultured with the nanocomposites after 24 h was 26.7% when incubated with Ce0.5D and 22.1% when incubated with Ce2D (p < 0.01). There was no statistically significant difference in this index between the Ce0.5D and Ce2D groups (p = 0.071 with Bonferroni correction for multiple comparisons). At the same time, both the Ce0.5D and Ce2D groups were significantly different from the Ce1D and Ce3D groups, in which the number of microbial bodies after 24 h was 25-34% higher and averaged 332.4 ± 7.2 and 335.2 ± 8.1 × 10⁵ cells/g of medium, respectively; this did not differ from the control groups (p > 0.05) (Figure 11). An ANOVA test determined that the multiple differences were significant after 24 h (F = 2805.9; p < 0.001). However, after 48 h, all types of nanocomposites at a concentration of 10⁻² M ceria showed significant antimicrobial (bacteriostatic) activity compared to the control groups.
Thus, after 48 h of incubation in the thermostat, the microbial cell counts (× 10⁵ cells per gram of medium) were as follows: Ce0.5D group, 609.4 ± 9.7; Ce1D, 796.8 ± 9.9; Ce2D, 653.8 ± 6.9; Ce3D, 1098.2 ± 18.9. These results were statistically significantly different from those of the control groups. In the control tubes, the average number of microorganisms was 4.3 times higher than in the Ce0.5D group (p < 0.01), 3.3 times higher than in the Ce1D group (p < 0.01), 4.0 times higher than in the Ce2D group (p < 0.01), and 2.4 times higher than in the Ce3D group (p < 0.01). Consequently, the application of the cerium-dextran nanocomposites at a concentration of 10⁻² M can inhibit the growth and multiplication of E. coli by an average of 77% with Ce0.5D, 70% with Ce1D, 75% with Ce2D, and 58% with Ce3D.
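As a check on the arithmetic behind these percentages, the suppression values quoted above follow directly from the control-to-treated ratios; a minimal sketch, using the 48 h ratios reported in this paragraph:

```python
# Minimal sketch of the growth-inhibition arithmetic: if the control culture holds
# r times more microbial cells than the treated one, the treated culture is
# suppressed by (1 - 1/r) relative to the control. Ratios are the 48 h values above.

def inhibition_percent(control_to_treated_ratio: float) -> float:
    return (1.0 - 1.0 / control_to_treated_ratio) * 100.0

for name, ratio in [("Ce0.5D", 4.3), ("Ce1D", 3.3), ("Ce2D", 4.0), ("Ce3D", 2.4)]:
    print(f"{name}: ~{inhibition_percent(ratio):.0f}% suppression of E. coli")
# -> roughly 77%, 70%, 75%, and 58%, matching the figures reported above
```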
After 48 h, according to the GC-MS data, the number of E. coli in the control groups had increased significantly, on average by 7.7 times, while during incubation with the nanocomposites it increased by 2.4-3.3 times: by 2.44 times with Ce0.5D, 2.40 times with Ce1D, 2.46 times with Ce2D, and 3.29 times with Ce3D. Thus, at dextran ratios of 0.5-2 in the synthesis of nanoceria, the number of E. coli increased equally, on average by 2.4 times, which is significantly less than in the control groups; that is, at these cerium/dextran ratios (Ce0.5D, Ce1D, and Ce2D), the multiplication of E. coli was inhibited equally, on average by 3.2 times compared to the control groups (p < 0.01). Less pronounced bacteriostatic activity was possessed by Ce3D, in which the number of microorganisms increased by 3.29 times over the second day of the study, although a significant antimicrobial effect was still established in the form of inhibition of E. coli colony growth relative to the control by 2.3 times (p < 0.01) (Figure 12). The ANOVA test established the significance of the multiple differences after 48 h of co-culture with the dextran nanocomposites (F = 448.7; p < 0.001).
Thus, the best antimicrobial activity, in the form of a bacteriostatic effect against the Gram-negative bacterium E. coli, was found for the Ce0.5D and Ce2D nanocomposites.
Since the Ce2D nanocomposite was selected according to previous studies, the next stage of microbiological study was to determine the antimicrobial activity of this nanocomposite, but at different concentrations (10 −2 -10 −5 M).
In the results of gas chromatography with mass spectrometry after 24 h of incubation of Ce2D with nutrient medium containing E. coli, only at a concentration of 10 −2 M was a significant antimicrobial effect established, confirming the results of the previous experiment in the form of a reduction in the number of microbial bodies (1.3 times more microbes in the control; p < 0.01).At other concentrations of Ce2D, no bacteriostatic effect was detected after 24 h, and the results were comparable with the control groups.
After 48 h, a bacteriostatic effect against E. coli was established for all concentrations of Ce2D. The microbial cell counts in the Ce2D groups averaged 1417 ± 13.8 × 10⁵ cells per gram of medium at the 10⁻⁵ M concentration; 791 ± 12.9 × 10⁵ at 10⁻⁴ M; 661.5 ± 12.9 × 10⁵ at 10⁻³ M; and 526.9 ± 10.7 × 10⁵ at 10⁻² M. These statistically significantly differed from the control groups by an average of 1.8 times, 3.3 times, 4.0 times, and 5.0 times, respectively (p < 0.01) (Figure 13). Consequently, the application of Ce2D is able to inhibit E. coli multiplication by 47-80% (at a concentration of 10⁻⁵ M by 47% on average, at 10⁻⁴ M by 70%, at 10⁻³ M by 75%, and at 10⁻² M by up to 80%). ANOVA analysis established the significance of the multiple differences after 24 h (F = 2276.3; p < 0.001) and after 48 h (F = 638.8; p < 0.001).
A significant dose-dependent effect of dextran nanoceria on E. coli multiplication was established. In the group with the lowest concentration of nanoceria (10⁻⁵ M), the number of microbial bodies was significantly higher than in all other groups (on average 1.8 times higher than at 10⁻⁴ M (p < 0.01), 2.1 times higher than at 10⁻³ M (p < 0.01), and 2.7 times higher than at 10⁻² M (p < 0.01)). The number of microbial bodies was significantly lower in the group with the highest concentration of nanoceria than in the groups with concentrations of 10⁻⁴ M and 10⁻⁵ M (p < 0.01). The group with the 10⁻³ M concentration of Ce2D was not statistically different from the group with the 10⁻² M concentration (p > 0.05); that is, the antimicrobial activity of 10⁻²-10⁻³ M after 48 h is comparable.
While in the control groups the number of microorganisms increased on average by 7.8 times over one day (from 24 h to 48 h), during incubation with Ce2D at concentrations of 10⁻² M and 10⁻³ M the number of E. coli increased by only 2.0 times, at a concentration of 10⁻⁴ M by an average of 2.4 times, and at a concentration of 10⁻⁵ M by an average of 4.2 times. Consequently, with the application of the Ce2D composite at a concentration of 10⁻² to 10⁻³ M, it was possible to reduce the reproduction of the microorganisms within 2 days by 3.9 times (at a concentration of 10⁻⁴ M, by 3.2 times; at 10⁻⁵ M, by 1.8 times).
Thus, it was proved that after 24 h, the Ce2D nanocomposite at the maximum concentration of 10⁻² M can significantly inhibit the multiplication of E. coli, on average by 22%. After 48 h, all concentrations of Ce2D showed bacteriostatic activity, inhibiting E. coli multiplication by 47-80%, and a dose-dependent effect was established: the higher the concentration of nano-Ce2D, the greater the antimicrobial (bacteriostatic) activity. After 48 h, the number of microbial bodies at a dose of 10⁻³ M Ce2D was comparable to that at the 10⁻² M concentration. The application of the Ce2D nanocomposite at a dose of 10⁻²-10⁻³ M allows a reduction in the rate of microorganism multiplication in the period from 24 h to 48 h by 3.9 times (at the dose of 10⁻⁴ M, 3.2 times; at 10⁻⁵ M, 1.8 times).
All this makes the Ce2D nanocomposite at a concentration of 10 −2 -10 −3 M attractive for the further development of a medical drug with a new type of antimicrobial activity.
The solution was continuously stirred on a magnetic stirrer while 3 M aqueous ammonia solution (special purity, Himmed, Moscow, Russia) was added dropwise over 3 h, maintaining the pH at 7.5-8.0. The pH of the solutions was measured using a Crison GLP 22 pH-meter (Crison Instruments, SA, Barcelona, Spain) equipped with a Crison 5201 combination electrode and providing an accuracy of ±0.003 pH in the range from 0 °C to 100 °C.
When the pH became constant, the mixture was additionally stirred for 2 h. Then, aqueous ammonia solution was added up to pH = 12, followed by additional stirring for at least 8 h. The end of the formation of cerium dioxide particles during the experiment was monitored by UV-visible absorption spectra. The experiment was considered complete when the peak characteristic of Ce³⁺ disappeared from the spectra. Complete oxidation was confirmed by differential UV-vis spectroscopy through the disappearance of the local minimum on the differential spectra at 280-290 nm.
For cerium dioxide samples with different dextran contents, the time to complete the synthesis was different.The longest synthesis was required for the nanocomposite with the highest dextran content (48 h).
Then, excess isopropanol (350 mL) was added to the obtained solution until a yellow precipitate formed. The precipitate was further washed several times with isopropanol (Chimmed, Moscow, Russia), centrifuged at ~18,000 RCF (relative centrifugal force), and dried at 50 °C. The dextran-stabilized cerium sol was obtained by dispersing the powder in distilled water.
Then, thermogravimetry was performed, and based on its results the concentration was calculated and solutions with different concentrations were prepared for the biological studies. We used alundum crucibles, brought them to constant mass, weighed them, placed aliquots of the sol inside, dried and annealed them at 900 °C, and weighed them again. The weight form was considered to be CeO2. From the mass difference, the mass of cerium oxide was determined and converted to the concentration of the sol.
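As a worked illustration of this conversion (the crucible masses and aliquot volume below are hypothetical, not values from the study), the residue mass is divided by the molar mass of CeO2 and by the aliquot volume to give the sol molarity:

```python
# Hypothetical worked example of the concentration calculation described above:
# the CeO2 mass left after annealing an aliquot of sol is converted to molarity
# (moles of CeO2 formula units per litre of sol).

M_CEO2 = 172.11  # g/mol, molar mass of CeO2

def sol_molarity(m_crucible_g: float, m_crucible_residue_g: float,
                 v_aliquot_ml: float) -> float:
    """Return the CeO2 concentration of the sol in mol/L."""
    m_ceo2 = m_crucible_residue_g - m_crucible_g   # mass of CeO2 residue, g
    n_ceo2 = m_ceo2 / M_CEO2                       # amount of CeO2, mol
    return n_ceo2 / (v_aliquot_ml / 1000.0)        # mol per litre of sol

# e.g. a 5 mL aliquot leaving 8.6 mg of CeO2 corresponds to roughly 1e-2 M
print(f"{sol_molarity(21.4300, 21.4386, 5.0):.4f} mol/L")
```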
To determine the stability of the hydrosols, they were observed for at least 3 months after synthesis, keeping the samples under normal conditions at room temperature. The obtained dextran-stabilized nanoceria sols did not require additional stabilization; after 1, 2, and 3 months, they had a stable pH value (from 6.8 to 7.4), were easily diluted with solutions, and did not lose their aggregative stability (the transparent yellowish solution did not change, and no precipitate formed). After the stability evaluation, the nanocomposite sols were investigated in biological (cellular and microbiological) experiments.
Studied Samples and Comparison Groups
Depending on the initial ratio of cerium nitrate to dextran, in this work we evaluated 4 types of nanoceria sols at a concentration of 10⁻² M: Ce0.5D (at a ratio of 1:0.5), Ce1D (1:1), Ce2D (1:2), and Ce3D (1:3). In addition, for the biomedical experiments, sterile water for injection was used to dilute the obtained sols in order to study the effects of different concentrations, and concentrations of 10⁻² M, 10⁻³ M, 10⁻⁴ M, 10⁻⁵ M, and 10⁻⁶ M were obtained.
Nanoparticle concentrations are expressed as M (mol/L).This is the molarity per formula unit of CeO 2 , i.e., actually a mole of cerium dioxide per liter of colloidal solution.
The control was the water with which the dilution was performed, used in the same volume as the sols.In addition, the results obtained were compared with a group of dextran (Mr = 6000, ABCR GmbH, Karlsruhe, Germany) used to prepare the sols.Therefore, the comparison group was dextran without cerium (0.3 g of dextran was dissolved in 50 mL of distilled water).
Microbiological experiments were also controlled by groups with antibiotic (ceftriaxone).
Methods of Evaluation of Physicochemical Properties of Nanocomposites
The obtained samples of nanomaterials were characterized by UV-visible spectroscopy, transmission electron microscopy, X-ray phase analysis, dynamic light scattering (DLS), and infrared spectroscopy.
Ultraviolet-visible spectroscopy was performed on a spectrophotometer SF-2000 (OKB Spektr, Saint Petersburg, Russia), working on a single-beam scheme.Imaging was performed in the wavelength range from 190 to 800 nm with a step of 0.1 nm, and the optical slit width was 0.2 nm.Imaging in the range from 190 to 394.9 nm was performed using a deuterium lamp, and from 395 to 800 nm using a halogen lamp.The exposure time was 50 msec.Each spectrum was imaged 10 times with subsequent averaging of the results.
Transmission electron microscopy (TEM) of the synthesized nanocomposites was carried out on a JEM 2100 JEOL electron microscope (JEOL Ltd., Tokyo, Japan) with an accelerating voltage of 200 kV.
X-ray phase analysis of samples was carried out on a Rigaku D/MAX 2500 diffractometer (CuK-radiation) at a goniometer rotation speed of 1-2 • 2θ/min (Rigaku Corporation, Tokyo, Japan).The identification of diffraction maxima was carried out using the International Center for Diffraction Data (Joint Committee on Powder Diffraction Standards (JCPDS) data bank, PA, USA).
The sols were studied by dynamic light scattering and zeta potential measurements at 20 • C using a Photocor Compact-Z analyzer (Photocor LLC, Moscow, Russia).The correlation function for each sample was obtained by averaging 10 curves (the accumulation time of one curve was 20 s).The hydrodynamic diameter of particles was determined using the regularization method (DynalS software, available online: http://www.softscientific.com/science/WhitePapers/dynals1/dynals100.htm,accessed on 17 May 2024).
Dynamic light scattering (DLS) studies were performed using a Zetasizer Nano ZS laser analyzer with a 633 nm laser (Malvern Instruments Ltd., Malvern, Worcestershire, UK).
Infrared spectroscopy analysis of nanoceria-dextran composites was performed on a Perkin Elmer Spectrum 65 FTIR spectrometer (PerkinElmer, Waltham, MA, USA).
Methods of Evaluation of the Effect of Cerium-Dextran Sols on Cytotoxicity, Metabolic and Proliferative Activity of Human Fibroblast Cell Culture
The study was performed on human fibroblast culture (BJTERT line) derived from neonatal foreskin.The origin of the line is the ATCC collection of typed cultures (Manassas, VA, USA).
Cell Culturing
Human immortalized fibroblasts of BJ TERT line were cultured in DMEM (Dulbecco's Modified Eagle's Medium) (Paneco, Moscow, Russia) supplemented with 10% fetal calf serum (Global Kang Biotechnology, Qinhuangdao, China), 1% penicillin/streptomycin, and 0.32 mg/mL glutamine (Paneco, Russia).Cell passaging was performed every 7 days according to a standard protocol, and the medium was changed every 3 days between passages.Cells were cultured in a CO 2 incubator (Binder, Tuttlingen, Germany), and incubation temperature was 37 • C in a humid atmosphere with 5% carbon dioxide in the air.For the experiment, human immortalized fibroblasts of BJ TERT line were seeded in 24-well plates (NEST, Wuxi, China) at a cell concentration of 5 × 10 4 cells/mL in suspension.After 24 h, the test substances were added at concentrations according to the experiment design in a volume of 100 µL.Incubation was then continued under standard controlled CO 2 incubator conditions for 72 h.Equivalent volumes of 0.9% sodium chloride solution were added as a control.At the end of the incubation time, proliferative activity evaluation tests were performed.
MTT Test
An MTT assay was used to determine metabolic activity and to assess changes in proliferative activity.
The MTT assay was performed according to a standard protocol, according to which MTT salt (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, Neofroxx, Germany) was dissolved in PBS (stock solution; 5 mg/mL).Next, a working solution of MTT (0.5 mg/mL) was prepared by dissolving the stock solution in the culture medium.Then, the medium containing nano-ceria oxide compounds was removed from the culture plates, and the MTT working solution was added to each well for 30 min at 37 • C.After the MTT solution was removed, DMSO (PanReac AppliChem, Darmstadt, Germany) was added for 5 min at room temperature on an Elmi-S4 oscillating shaker (ELMI Ltd., Riga, Latvia).Finally, the solution was transferred to a 96-well plate, and the absorbance was recorded on a spectrophotometer (Multiscan, Labsystems, Vantaa, Finland) set at λ = 540 nm.The final measurement result was expressed in relative optical density (OD) units.
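A minimal sketch of how such OD readings are typically normalized to the control is shown below; the OD540 values are invented for illustration, and the study reports its results in exactly these "% of control" terms.

```python
import numpy as np

# Illustrative post-processing of the MTT readout described above: OD540 values
# (relative optical density units) are averaged per group and expressed as a
# percentage of the untreated control. The numbers are made up for illustration.

def percent_of_control(group_od: list[float], control_od: list[float]) -> float:
    return float(np.mean(group_od) / np.mean(control_od) * 100.0)

control = [0.41, 0.39, 0.43, 0.40]
ce2d    = [0.52, 0.55, 0.50, 0.53]
print(f"Ce2D metabolic activity: {percent_of_control(ce2d, control):.0f}% of control")
```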
Determination of Fibroblast Proliferative Activity and Assessment of Cytotoxicity by Cell Counting and Assessment of Plasma Membrane Integrity by Trypan Blue Staining
The cultivation of immortalized human fibroblasts was performed according to the protocol described above.At the end of incubation with nanocomposites, cells were detached and counted automatically using Countess II Automated Cell Counter (Thermo Scientific, Waltham, USA) in special plastic disposable slides (RWD, Shenzhen, China) following the manufacturer's protocol.The procedure allowed, in addition to counting the total number of cells, the determination of cell viability by penetration through the cell membrane and staining of non-viable cells with trypan blue solution [58].Briefly, after the detachment of cells by trypsinization (trypsin: Versen solution (Paneco, Moscow, Russia) at a ratio of 1:4), 0.4% trypan blue solution (Paneco, Moscow, Russia) was added to the cell suspension.The solution was mixed by pipetting.Then, the stained cell suspension was introduced into a slide, which was placed in a counter for automatic counting.The total number of cells per unit volume (×10 5 cells/mL) and the percentage of live and dead cells were counted as the result.
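The quantities reported from this procedure (percentage of viable cells and cell number as a percentage of the control) reduce to simple ratios; a small sketch with hypothetical counts:

```python
# Sketch of the viability and proliferation bookkeeping from the automated counts:
# total cells per mL and the share of trypan-blue-positive (dead) cells.
# All numbers are hypothetical.

def viability_percent(live: int, dead: int) -> float:
    return live / (live + dead) * 100.0

def proliferation_vs_control(sample_total: float, control_total: float) -> float:
    return sample_total / control_total * 100.0

print(viability_percent(live=980, dead=20))        # -> 98.0 (% viable cells)
print(proliferation_vs_control(3.5e5, 1.4e5))      # -> 250.0 (% of control)
```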
When performing cell experiments, each sample was tested at 12 repetitions in both the counter and MTT test to ensure reliable results and valid conclusions.
Determination of Antimicrobial Activity of Cerium-Dextran Nanocomposites by Mass Spectrometry of Microbial Markers Using Gas Chromatograph with Mass-Selective Detector
In the course of our previous studies, we showed that it is not reasonable to study the antimicrobial effect of nanocrystalline cerium oxide sols using classical methods (agar diffusion, serial dilution), since nanoceria does not diffuse into agar and its mechanism of action does not comply with the standard pharmacopoeial rules [35]; this is in fact associated with the contradictory results of different researchers (there are works claiming that metal nanoparticles have a pronounced antimicrobial effect [33,59-63], while at the same time there are many works in which the authors did not obtain such an effect [64-66]).
This study was performed on a gas chromatograph with mass-selective detector (GC-MS) "Maestro-aMS" (Interlab, Moscow, Russia).The method is based on the highprecision determination of the presence of molecular features of microorganism-specific markers (higher fatty acids, aldehydes, alcohols and sterols in the sample under study), i.e., the fatty acid status of the microorganism, which is specific and genetically determined.The microbiological analyzer Maestro allows the accurate quantification of the content of microorganisms in any biological sample.Although the automatic algorithm of the device allows the determination of 50 microorganisms simultaneously, in the present study, we investigated the effect of nanoceria on the growth inhibition of one Gram-negative bacterium, Escherichia coli (E. coli ATCC 8739), using a test strain from the collection of the A.A. Tarasevich Research Institute for Standardization and Control of Medical Biological Preparations (Moscow).The method is characterized by high sensitivity (able to detect 10 3 cells in the sample).The final calculation of the exact number of microorganisms was expressed as number × 10 5 microbial cells per gram of tested material.
Two stages of experiments were performed.In the 1st series, a total of 8 groups were studied (Table 1).The studied 4 groups of nanocomposite sols (Ce0.5D,Ce1D, Ce2D, Ce3D) were of one concentration, 10 −2 M. Control groups: Culture media (CM) represented meatpeptone broth (MPB), CM + E. coli, CM + E. coli + H 2 O (sterile water for injection in the same volume as the nanocomposite was used).Ceftriaxone III-generation cephalosporin antibiotic (ZAO Rafarma, Russia, 1g powder pack) was chosen as a reference comparison to evaluate the antimicrobial activity, to prove the antibacterial effect against strains of microorganisms.In the experiment, we used a concentration of 100 mg/mL, dissolving ceftriaxone powder (contents of the vial, 1 g) with sterile water for injection in a volume of 9.6 mL.In each tube of all groups, the initial quantitative content in 0.5 mL of E. coli suspension was 5 × 10 5 cells.
The tubes with the contents were placed in a thermostat at 37 • C for 24 h; then, an aliquot was taken and analyzed by gas chromatography-mass spectrometer (GC-MS), and the remaining tubes were placed in the thermostat for another 24 h to repeat the GC-MS study after 48 h of co-cultivation.
In the 2nd series of experiments, the antimicrobial activity of different concentrations of the selected nanocomposite was investigated using a similar methodology.Eight groups were also studied.The control and comparison groups did not change; only the studied groups changed, of which there were 4, each with different concentrations of the selected nanocomposite (10 −2 M, 10 −3 M, 10 −4 M and 10 −5 M).
Despite the high accuracy of the device, each sample of all series of experiments was tested at least five times (five repetitions).
Statistical Analysis
For the creation of the graphs and to analyze the data of the nanoceria physicochemical characterization, OriginPro 2018 from OriginLab software SR1 (Northampton, MA, USA) was used.
Statistical processing of the results of biomedical research was carried out using the statistical program SPSS 25.0 (IBM Company, New York, NY, USA).First of all, the normality of the distributions of indicators was assessed using the Kolmogorov-Smirnov and Shapiro-Wilk criteria.All samples and their groups obeyed the law of normal distribution.After that, we performed descriptive statistics of continuous quantitative indicators, which obeyed the law of normal distribution, in the form of mean, std.deviation, std.error, 95% confidence interval for mean (95CI), minimum, and maximum.In the cell experiments, the mean value was determined in the control group, which had only cell medium containing fibroblasts and 100 µL of diluent (the solution with which the nanoceria was diluted to create the specified concentrations).Relative to the mean value in each experiment, the percentages in the studied groups were calculated, obtaining the final figure, the percentage of the control.One-factor ANOVA analysis of variance was performed for comparative analysis of different subgroups of the test.Posterior multiple comparisons were performed using Dunnett's test (for comparison with controls) and Bonferroni's test.Differences were considered statistically significant at p-value < 0.05.
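For readers who want to reproduce the same pipeline outside SPSS, a hedged sketch in Python with hypothetical data is shown below (Shapiro-Wilk normality check, one-way ANOVA, and a Bonferroni-corrected pairwise comparison; Dunnett's test against the control is omitted for brevity):

```python
import numpy as np
from scipy import stats

# Hedged sketch of the statistical steps described above (the study used SPSS 25.0;
# this only mirrors the workflow). Group data are hypothetical "% of control" values.
control = np.array([100, 104, 97, 101, 99, 103])
ce2d    = np.array([168, 175, 181, 170, 173, 177])

# 1) normality check (Shapiro-Wilk)
w_c, p_c = stats.shapiro(control)
w_t, p_t = stats.shapiro(ce2d)

# 2) one-way ANOVA across groups (only two groups shown here for brevity)
f_stat, p_anova = stats.f_oneway(control, ce2d)

# 3) post hoc pairwise comparison with Bonferroni correction for k comparisons
k = 4                                   # e.g. four nanocomposite groups vs control
t_stat, p_raw = stats.ttest_ind(ce2d, control)
p_bonferroni = min(p_raw * k, 1.0)

print(f"normality p: {p_c:.3f}, {p_t:.3f}")
print(f"ANOVA p = {p_anova:.4g}, Bonferroni-corrected pairwise p = {p_bonferroni:.4g}")
```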
Discussion
Currently, there is an active search for optimal ways to create new wound-healing drugs of a new type that balance regenerative and antimicrobial effects without causing antibiotic resistance. Great hopes are placed on nanocomposites based on cerium oxide nanoparticles [32][33][34][35][36][37][38][39] (cerium being a rare earth metal with variable valence) coated with a polymer shell of dextran [50][51][52][53][54], a polysaccharide widely used in medicine [67][68][69]. This complex seems to be an excellent combination for the development of a medical drug that effectively accelerates wound healing.
In order to achieve our aim, we had to solve both physicochemical and biological problems in the course of experiments on the preparation of a nanocomposite construction with cerium oxide as the core of the particle and dextran as its outer coating. First, the dependence of the size of the created composites on the concentration of dextran in the nanoceria sols was confirmed using physicochemical methods. Direct images of the created composites obtained by transmission electron microscopy confirm this dependence, while the relative dimensional structure and the size of the cerium nanocrystal itself were maintained within the required range of up to 5 nm. However, the size structure of the synthesized nanocomposites turned out to differ between samples. The change in nanocomposite size, directly proportional to the dextran concentration in the sol, was logically explained by a higher degree of deposition of dextran molecules on the cerium dioxide crystal compared with the other groups. An initially lower concentration of cerium nitrate in a system with a high concentration of dextran correlates with an increase in the thickness of the dextran coating, which reduced the diffraction peaks characteristic of cerium dioxide and, therefore, changed the degree of contact, and thus the activity of the nanocomposite itself with respect to biological objects. The higher concentration of dextran in the nanosol led to relatively faster blocking of aggregation (clumping) of the nanoceria particles. This statement is confirmed by the analysis of the diffraction maxima, which showed an inversely proportional dependence of the nanocomposite core size on the dextran concentration in the nanosol. This means that, by changing only the dextran concentration or the synthesis time, it is possible to model nanoparticles with the required core size and polymer coat thickness.
Infrared spectroscopy demonstrated that the nanocomposite particles obtained in the synthesis are indeed composite and consist of cerium dioxide and dextran, and the differences in the absorption wavelengths of the synthesized nanocomposites with different cerium/dextran ratios may indicate different intensities of their effects on biological objects. It remained to determine the nature of the effect of the synthesized nanoparticles on biological objects depending on their physical characteristics and to select the best compounds for further study in animal models.
Several parameters of changes in the physiology and structure of human fibroblasts, the main cells of the skin, which synthesize the interstitial substance (collagen) necessary for wound healing, were taken for such evaluation. Cytotoxicity was determined with the nanocomposite at the maximum cerium concentration (10^−2 M). Although all samples showed low toxicity, the best safety characteristics were shown by the samples with the Ce2D and Ce3D ratios, i.e., the samples with the minimum, and therefore the most effective, cerium core size. The Ce2D group showed significantly better results both in the stimulation of fibroblast proliferation in cell culture and in their metabolic activity. Thus, the optimal sizes and ratios of the cerium core and the dextran coating thickness of the synthesized nanocomposite, providing the best conditions for effective cell viability, were determined.
An attempt to obtain a compound that both stimulates the body's cells and shows antimicrobial activity against the microorganisms almost always present in wound contents may seem like biological nonsense. However, the authors counted on the antioxidant activity described by some researchers, which is provided by the change in the valence of cerium dioxide depending on the pH of the medium. Gas chromatography-mass spectrometry showed that the best antimicrobial nanocomposites at 24 h of incubation are the Ce0.5D and Ce2D composites, that is, compounds with greater accessibility to microbial bodies and maximally expressed biological activity of nanoceria. Increasing the contamination time to 48 h extinguished this difference between groups and equalized the observed antimicrobial effect. Unfortunately, only a pronounced bacteriostatic effect, and not a bactericidal effect, could be obtained. However, it is very likely that an increase in antimicrobial effect would have been accompanied by an increase in toxicity, which would reduce the regenerative functions of the cell culture. At the same time, we have demonstrated the presence of a new type of antimicrobial effect: an antimicrobial effect due to redox activity and a change in valence when the pH of the environment in the wound changes (in particular, acidification of the medium in the wound with E. coli contamination). This mechanism is fundamentally different from the mechanism of action of antibacterial drugs; therefore, it gives hope that nano-drugs based on cerium dioxide nanoparticles can help in the fight against antibiotic resistance.
Conclusions
The main conclusions of our work are as follows:
1. Sols of nanocomposites (nanocrystalline cerium dioxide less than 5 nm in size, coated with the polysaccharide dextran) differing in the initial mass ratio of cerium nitrate to dextran were synthesized; this ratio is reflected in the physicochemical characteristics and the final biological properties.
2. As the dextran content in the nanocomposite increases, the particle size of the cerium dioxide decreases and the hydrodynamic radius increases.
3. A high level of safety of the nanocomposites was proved; the absence of cytotoxicity (100% cell survival at the maximum tested concentration) was established for Ce2D and Ce3D.
4. The metabolic and proliferative activity of fibroblasts increases as the dextran content increases (and as the size of the cerium oxide crystals, the core of the nanocomposites, decreases). The lowest level of fibroblast activity was recorded for Ce0.5D and the highest for Ce2D. For all types of nanocomposites, the most effective stimulation of human fibroblast proliferation was registered at a concentration of 10^−3 M, with a clear advantage for the Ce2D composite, with which the number of fibroblasts was 2.5 times higher than in the control (at 10^−3 M: Ce0.5D, 1.4 times; Ce1D, 1.5 times; Ce3D, 2.0 times relative to the control).
5. The best antimicrobial activity of the nanocomposites at a concentration of 10^−2 M (bacteriostatic action) was found for Ce0.5D and Ce2D, which inhibited the growth and multiplication of E. coli after 24 h by an average of 22-27%; after 48 h, all variants of the nanocomposites significantly inhibited the growth and multiplication of E. coli by 58-77%, which was most (and equally) expressed for Ce0.5D, Ce1D, and Ce2D, with which E. coli multiplication was inhibited by an average of three-fold compared to the control groups. After 48 h, all concentrations of the Ce2D composite showed bacteriostatic activity, inhibiting E. coli multiplication by 47-80%, and a dose-dependent effect was determined: the higher the concentration of nanoceria, the higher the antimicrobial activity.
6. The complex of interdisciplinary studies allowed us to select for further development the Ce2D composite at a concentration of 10^−3 M, characterized by a balanced and the most predictable dose-dependent effect: it stimulated cell proliferation and metabolism by up to 2.5 times while reducing the rate of microorganism multiplication by 4 times.
In conclusion, it should be noted that the method of nanocomposite synthesis developed by the authors provides the prerequisites for a technique for obtaining compounds with predetermined physical, chemical, and biological properties. The optimal sizes of the cerium core of the nanocomposite and the thickness of its dextran coating were found, and the physical characteristics of the nanocomposite that provide the best conditions for wound-healing biological effects were determined. The results obtained during the experiments allow us to regard composites with a nanoceria-dextran structure as promising compounds for use in medical and veterinary practice for the healing and disinfection of skin wounds.
Figure 4. TEM microphotographs of nanocomposites obtained with different dextran content.
Figure 7. Effect of different concentrations of polysaccharide in the cerium oxide + dextran nanocomposite on the metabolic activity of human fibroblasts in the MTT test, percent of control (ANOVA OD: F = 15.162, df 5, p < 0.001; * different from control at p < 0.001, Dunnett t-tests; # different from the dextran comparison group (without nanoceria) at p < 0.01, post hoc Bonferroni test).
Figure 8. Effect of nanocomposites based on the polysaccharide dextran and cerium oxide nanoparticles at a concentration of 10^−2 M on the proliferative activity of fibroblasts (BJTERT cell line) by direct cell counting using an automated cell counter. Mean percentages of control are presented (ANOVA OD: F = 20.781, df 5, p < 0.001; * difference from control at p < 0.001, Bonferroni and Dunnett t-tests). Green *: stimulation; red: suppression of proliferation.
Figure 9. Effect of nanocomposites based on cerium oxide and the polysaccharide dextran over a wide concentration range (10^−2 M to 10^−6 M) on the proliferative activity of human fibroblasts by direct cell counting. Mean percentages of control are presented (ANOVA OD: F = 19.703, df 20, p < 0.001; * difference from control significant at p < 0.05, Dunnett t-tests).
Figure 11. Number of E. coli microbial bodies (×10^5/g) in control groups and when co-cultured with sols of dextran-coated nanoceria at different cerium-dextran ratios, after 24 h of incubation (* significant difference from control (culture media and E. coli) at p < 0.01; ANOVA with post hoc Dunnett tests).
Figure 12. Number of E. coli microbial cells (×10^5/g) in control groups and when co-cultured with 10 vol.% sols of nanocomposites with different dextran concentrations, after 24 h and 48 h (* significant difference from control (culture media and E. coli) at p < 0.01; ANOVA with post hoc Dunnett tests).
Figure 13. Number of E. coli microbial cells (×10^5/g medium) in control groups and when co-cultured with 10 vol.% Ce2D nanocomposite sols at different concentrations after 24 h and 48 h (* significant difference from control (culture media and E. coli) at p < 0.01; ANOVA with post hoc Dunnett tests).
Table 1. Study groups for microbiologic examination and sample preparation.
The Effect of Environmental Regulation on Employment in China: Empirical Research Based on Individual-Level Data
This paper argues that the scale effect and the substitution effect of environmental regulations on labor demand should not be ignored in the pursuit of environmental improvements, and that it is necessary to analyze the mechanism through which environmental regulations influence employment. Based on pooled cross-section data combining CHIP (Chinese Household Income Project) data with macro data at the city level, this paper investigates the impact of environmental regulations on an individual's employment probability in China. The results show that there exists a U-shaped relationship between environmental regulations and an individual's employment probability. The employment effect on workers from different regions and industries, or with different hukou (Household Registration System) status, is heterogeneous. Specifically, the effects are more pronounced in the east, more significant in secondary and tertiary industries, and stronger for urban workers' employment. The findings are robust to alternative measures.
Introduction
With the increasing deterioration of the environment in China in recent years, the Chinese government can no longer ignore the importance of environmental protection while committing to economic growth. However, scholars have not reached a consensus about whether environmental regulation policies affect employment, or whether China can obtain a "Double Dividend Effect", meaning the mitigation of pollution emissions together with the stimulation of employment creation [1].
According to the "Pollution Haven Hypothesis", if pollution-intensive enterprises are incapable of upgrading or cannot afford the compliance costs, they tend to agglomerate in countries or regions with looser regulations in order to avoid stricter environmental regulations. Therefore, environmental regulations will change the spatial layout of industries and then alter total employment and its structure in the places into or out of which polluting industries transfer. This effect is similar to the scale effect of environmental regulations on labor demand [2]. A few studies have supported this hypothesis [3][4][5]. However, properly designed environmental regulations can trigger innovation that partially or fully offsets their compliance costs, especially for energy-intensive enterprises. As environmental regulation intensity increases in China, other factors of production (such as labor) will be used instead of natural resources. This implies a substitution effect of environmental regulations on labor demand, which is described by the "Porter Hypothesis" [6]. Ambec et al. (2013) give a theoretical review of this hypothesis [7]. Previous research has mostly focused on the scale effect instead of the substitution effect described by the "Porter Hypothesis" [8,9]. In addition, environmental regulations increase the marginal costs of enterprise operation. The decreased labor demand in these enterprises is often accompanied by an increase in labor demand in environmental regulation activities, and regulation also triggers technological innovation in enterprises when substitution between factors is flexible [10]. Yan et al. (2012) found that there exist two thresholds when environmental regulation is set as the threshold variable in a nonlinear threshold panel data model with industry-level data from China. Most areas of China have not reached these thresholds, which suggests that an increase in the intensity of environmental regulations will promote employment in the affected sectors [11].
Literature Review
Some studies analyze the effect of environmental regulations from the perspective of regulatory costs, such as productivity slowdown, inflation, and unemployment. When the Kyoto Protocol came into force in 2005, the US enforced a series of green policies to meet its requirements, which exposed the biggest polluters to large negative shocks accompanied by an increase in systematic risk [12]. Ryan (2012) evaluated the welfare costs of the 1990 Amendments to the Clean Air Act on the U.S. Portland cement industry. He found that the Amendments significantly increased the sunk costs of entry, leading to a loss of between $810 million and $3.2 billion in product market surplus [13]. The strengthening of emission standards in the early 1990s led to a persistent decline in employment in the affected sectors, and employment in the newly regulated polluting sector fell by more than 15 percent in the ten years following the change in regulations [14]. As a pollution tax induces a significant reduction in employment in regulated industries accompanied by an increase in unregulated ones, the net effect of regulation policies on overall employment is limited, in both the long run and the short run [15]. The increase in production costs and the slowdown in productivity growth of more-regulated industries in the US during 1998-2011 have been attributed to environmental regulations, especially for smaller firms [16]. In China, researchers also find a remarkable negative effect of environmental regulations on employment in the short run but no sufficient evidence to prove the existence of long-term effects [17,18]. Some studies have investigated the impact of environmental regulations on foreign trade and found that it alters a country's comparative advantage and trading patterns [19][20][21][22]. Other papers find that a long-run tradeoff between the intensity of environmental regulations and employment growth does not exist [23][24][25][26]; some surveys even indicate a positive attitude toward environmental protection efforts among union members of heavily regulated industries [27].
The literature reviewed above generally shows negative effects of regulations on industrial employment in the short term but positive effects in the long term. That is to say, there exists a "U" type relationship between environmental regulations and regional employment [28][29][30][31]. Considering China's particular problems (such as the difficulties graduates face in obtaining employment, regional development imbalances, and the urban-rural dual structure), an analysis of the path through which this effect operates is indispensable.
The goal of this paper is to assess the effects of changes in regulatory stringency over time on employment using worker-level statistics. To test whether the "U" type relationship between environmental regulations and regional employment still holds in worker-level statistics, the quadratic term of regulation intensity is added to the model, and the varying situations of different regions are also discussed. Some policy recommendations are given for obtaining the double dividend effect.
This paper makes the following contributions. First, it works within the frameworks of Cahuc (2004) [32] and Cole (2008) [23] to assess the scale effect and substitution effect of environmental regulations on the probability of individual employment. Second, it investigates the heterogeneity of Chinese environmental regulations across regions, industries, and the hukou status of workers. The Household Registration System (the hukou system) dates back to ancient times; its current form came into being in 1958. Under this system, each citizen is registered under either an agricultural or a non-agricultural hukou. People with a non-agricultural hukou have access to benefits not available to those without one; for example, urban residents can receive public services such as medical care and compulsory education for their children. Internal migration was also restricted by the government, so rural residents usually could only farm in their registered locations without other options. This system led to a binary structure in the social economy, including aggravated social stratification and a huge gap between urban and rural areas. Third, previous studies have usually focused on macro-level employment, and environmental regulation indexes have mostly been based on provincial data. This paper builds indexes of environmental regulations at the city level, which measures regulatory intensity more accurately. Moreover, the worker-level data supplement and advance existing research in this field.
The remainder of this paper is organized as follows. Section 3 introduces the background of Chinese environmental pollution and provides a theoretical framework for the mechanism through which environmental regulations affect employment. It then outlines the econometric specifications, including data considerations, in Sections 3.3 and 3.4. Section 4 provides the results, and conclusions are presented in Section 5.
Background of Environmental Regulations in China
Over the past decades, the environmental quality in China has increasingly deteriorated. Based on the monitoring data of groundwater quality from 6124 monitoring sites in 235 prefecture-level and above cities in China, only 10.1% of the cities obtain a grade of good [33]. In Yale University's 2016 Environmental Performance Index (EPI), China is one of the worst performers (ranked 109 out of 180 countries) with respect to its water quality [34]. The Environmental Analysis Report of China, jointly issued by the Asian Development Bank and Tsinghua University on 14 January 2013, shows that seven of the world's ten most polluted cities are in China, and that less than 1% of the 500 big cities in China meet the air quality standards of the World Health Organization (WHO). However, the Chinese government has been actively controlling air pollution with fiscal and administrative means. The increasingly polluted environment has affected the quality of life, has led to high economic and societal costs, and constitutes an obstacle to the long-term sustainable development of the economy.
As early as the beginning of the 1980s, environmental protection had already been classified as a fundamental state policy [35]. Since the introduction of the Law of Environmental Protection of China in 1989, the National People's Congress and its Standing Committee have enacted 29 laws on environmental and resource protection, including the Prevention and Cure Law on Water Pollution, the Atmospheric Pollution Prevention Law, the Environmental Pollution Prevention and Control Law of Solid Wastes, and others [36]. Environmental regulation has thus accompanied environmental pollution over the past decades.
Emissions of industrial dust (soot) and SO2 actually started to decline after 2005 (see Figure 1), although their absolute values are still high. In fact, the Chinese government had been trying to reduce SO2 emissions since 2007 in preparation for the 2008 Beijing Olympics. In August 2012, targeted energy conservation and emission reductions were proposed as part of the Chinese government's 12th Five-Year Plan (2011-2015). The Air Pollution Prevention and Control Action Plan issued by the State Council in 2013 aimed at reducing air pollution with specific targets; for example, by 2017, the urban concentration of inhalable particles should decrease by 10% compared with 2012 levels. The coexistence of rapid national economic growth and improved environmental conditions shows that Chinese environmental regulations and improved energy efficiency have played certain roles in this process.
Foreign observational data also confirm declining SO2 emissions in China (see Figure 2). The reduction of SO2 is significant (about a 60% reduction from 2012 to 2015), although SO2 levels in China remain the highest in the world [37,38]. This change shows that Chinese environmental regulation policies aimed at improving air quality have functioned well. Apart from the decrease in absolute values, the change in relative levels also indicates a similar trend in environmental regulations. This study combines pollution emissions with economic indexes to illustrate the pollution situation (see Figure 3); it is clear that both intensities have been falling over the period, which further confirms the previous statements.
To examine specific changes in the intensity of environmental regulations, we plot the intensity over time for each pollutant (see Figure 4). The figure reveals that environmental regulations have become more stringent since the 1990s. Then, what is the price for achieving this? Is the impact of environmental regulations on employment positive or negative in China?
The U-Shaped Relationship between Environmental Regulations and Employment
In the context of the increasing intensity of environmental regulations, the economic growth pattern cannot adapt in the short term due to development inertia, which eventually leads to a drop in the capacity to absorb employment. According to the statistical data of China's Ministry of Human Resources and Social Security, in 2016, 13.14 million new jobs were created in urban areas. This is an increase of approximately 0.15 percent year-on-year, and the fourth straight year above 13 million. At the same time, approximately 15 million newly unemployed young people needed jobs, which indicates that the employment gap is large, especially when those who cannot find a job over the years are considered. In fact, urban registered unemployment in China hit 9 million in 2010, and the registered unemployment rate has held at approximately 4 percent since then. Since the Chinese unemployment registration system is deficient, a large gap exists between the registered and actual figures.
In addition to this quantitative contradiction, a structural contradiction also affects the job market; it often manifests as new jobs being created without enough qualified employees as the industrial structure upgrades. Therefore, it is necessary to account for the influence on the job market exerted by the government's environmental regulations when formulating policies. We cannot achieve this without a detailed understanding of the complex linkages between environmental regulations and the job market.
Previous research has addressed the connection between environmental regulations and economy-wide employment. Since worker-level analyses are rare, this paper evaluates the effects of the regulations on worker-level employment. The theoretical model builds on the works of Cahuc
Previous researches have addressed the connection between environmental regulations and economy-wide employment.Since worker-level analyses are rare, this paper evaluates the effects of the regulations on worker-level employment.This theoretical model builds on the works of Cahuc (2004) [32] and Cole (2008) [23], in which pollution emissions were regarded as one production factor, and its price can be represented by environmental regulations.When environmental regulations become more stringent, the pollution costs of enterprises will be higher.This signal leads to a series of adjustments in production modes and employment.Based on this assumption, we build the partial equilibrium production model.This model supposes that: First, an enterprise has N kinds of production factors, including labor inputs, pollution inputs, and other inputs (like capital, technology etc. all represented by "other" for simplicity).Second, the intensity of regulations equals the pollution costs.Third, polluting enterprises will be regulated by the government through levying taxes, on carbon emissions or other polluting activities according to discharge standards.These punishments will directly increase the pollution costs.In other words, enterprises will choose the optimal combination of production factors in a given level of output that is determined by profit maximization.For simplicity, take a Cobb-Douglas production function to describe the enterprise's production activities.
y_0 = W^α · L_0^β · T^γ, (1)
where y_0 denotes the output of the polluting enterprise, W denotes the pollution input, L_0 denotes the labor input, and T represents all other inputs. α, β, and γ are the pollution elasticity, the labor elasticity, and the elasticity of other factors, respectively (0 < α, β, γ < 1). Enterprises select the level of W to maximize profit r:
r = P·y_0 − C·W − V_0·L_0 − Q·T, (2)
where P denotes the price of the products made by the enterprise, V_0 represents the price of labor, Q denotes the price of other input factors, and C is the price of W. With increasing intensity of environmental regulations, pollution costs will be higher, which increases C; there is thus a positive relationship between C and the intensity of environmental regulations. Taking the partial derivatives of max(r) with respect to W, L_0 and T, we obtain the first-order conditions of Equation (3) implied by profit maximization.
Our measurement of the relationship between regulatory stringency and employment growth is based on the assumption that, when environmental regulations are tightened, each enterprise faces a proportional rise in costs and will reduce its initial labor employment. It can be shown that
dL_0/dC = βW/αV_0 − (βW/αV_0)·θ_WC, (4)
where θ_WC = −(dW/dC)·(C/W) represents the price elasticity of pollution inputs. C has already been defined as the intensity of environmental regulations. When regulations are tightened, enterprises will reduce expenditures on pollution; therefore, dW/dC < 0, and the minus sign in the definition used in Equation (4) ensures that θ_WC is non-negative. Equation (4) can be decomposed into two parts. First, βW/αV_0 represents the employment change caused by the change in the relative price between C and L_0, which we call the substitution effect. Second, (βW/αV_0)·θ_WC represents the employment change caused by the change in the enterprise's production scale under the regulations, which we call the scale effect. These two effects have opposite signs.
The sign of dL_0/dC, i.e., the effect of C on L_0, is determined by θ_WC. When θ_WC > 1, βW/αV_0 < (βW/αV_0)·θ_WC and the scale effect is bigger than the substitution effect, so dL_0/dC < 0, which implies that the effect of C on employment is negative. When θ_WC < 1, βW/αV_0 > (βW/αV_0)·θ_WC, so dL_0/dC > 0 and the effect is positive. Considering the economic meaning of θ_WC, at the initial stage of environmental regulation, enterprises have to expand their pollution prevention investments to meet the regulatory standards. The pollution input then diminishes, and its proportional decrease is bigger than the increase in regulatory stringency; therefore, θ_WC > 1. As regulations become more stringent, enterprises no longer need to invest as much in pollution prevention as in the initial stage. Since the room for further decreasing the pollution input is limited, the proportional increase in regulatory stringency becomes the bigger of the two; therefore, θ_WC < 1. The effect of a stringent environmental regulation policy can thus be summarized as follows. As the intensity of environmental regulation increases, θ_WC changes from >1 to <1, and dL_0/dC < 0 is replaced by dL_0/dC > 0. In other words, employment growth accompanying the increase in environmental controls will improve after deteriorating in the initial stage.
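For readers who want the intermediate step between the first-order conditions and Equation (4), a sketch of the derivation under the Cobb-Douglas assumptions above is given below (this is a reconstruction; the notation of the original equations may differ slightly):

```latex
% First-order conditions for W and L_0, and the implied labor demand:
\frac{\alpha P y_0}{W} = C, \qquad \frac{\beta P y_0}{L_0} = V_0
\;\;\Longrightarrow\;\; L_0 = \frac{\beta C}{\alpha V_0}\, W .
% Differentiating with respect to C:
\frac{dL_0}{dC} \;=\; \frac{\beta}{\alpha V_0}\Bigl(W + C\,\frac{dW}{dC}\Bigr)
\;=\; \frac{\beta W}{\alpha V_0}\bigl(1 - \theta_{WC}\bigr),
\qquad \theta_{WC} \equiv -\frac{dW}{dC}\,\frac{C}{W} \;\ge\; 0 .
```

The first term, βW/αV_0, is the substitution effect and the second, (βW/αV_0)·θ_WC, the scale effect, so the sign of dL_0/dC switches from negative to positive exactly when θ_WC falls below 1, which is what generates the U-shape discussed below.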
From the analysis above, it can be seen that regulations have two effects on employment: the scale effect and the substitution effect. Increasing regulatory stringency creates both simultaneously, but the substitution effect is initially smaller than the scale effect and then becomes bigger as investment in pollution prevention increases. That is to say, environmental regulations will increase an individual's employment probability once regulatory stringency reaches a given level. Therefore, there exists a "U" type relationship between environmental regulations and an individual's employment probability.
Is the reality of China consistent with this? If the "U" relationship holds, are there differences in where different regions of China sit on this U-shaped curve? Does industrial heterogeneity exist in the effect of regulations on an individual's employment probability? These questions are discussed in the empirical part of this paper.
Data Collection
The data on individual employment come from the Chinese Household Income Project (CHIP) database [39]. This database covers income and expenditure information for 1988, 1995, 2002, 2007, 2008, and 2013, referred to as CHIP1988, CHIP1995, CHIP2002, CHIP2007, CHIP2008, and CHIP2013, respectively. The CHIP survey consists of three parts: the Urban Household Survey, the Rural Household Survey, and the Migrant Household Survey. Considering that the survey did not cover rural-to-urban migrants before 2002, and given the absence of some information, this paper chooses the CHIP2007, CHIP2008, and CHIP2013 data as the data sources. The 2007 and 2008 surveys are also part of the larger RUMiC (Rural-Urban Migrants in China) survey project. Both contain 5000 households in the migrant sample, 8000 households in the rural sample, and 5000 households in the urban sample. The data come from interviews with questionnaires designed by the project team.
For the surveys of urban local households and rural-urban migrant households, nine provinces were selected as the survey targets in 2007 and 2008: Shanghai, Jiangsu, Zhejiang, and Guangdong from eastern China; Anhui, Henan, and Hubei from central China; and Chongqing and Sichuan from western China. The rural household survey also covered nine provinces; differing from the urban and migrant surveys, Shanghai was excluded, while Hebei was included. The CHIP2013 sample was selected by the systematic sampling method in the east, center, and west. It contains 14 provinces: Beijing, Liaoning, Jiangsu, Shandong, and Guangdong from eastern China; Shanxi, Anhui, Henan, Hubei, Hunan, and Chongqing from central China; and Sichuan, Yunnan, and Gansu from western China. This provided a total of 126 cities, 234 counties, 18,948 households, and 64,777 individuals, comprising 7175 urban households, 11,013 rural households, and 760 migrant households. As CHIP focuses on the employment situation and on income and expenditure information, it has been widely accepted by scholars researching Chinese employment and income problems, such as Bishop and Liu, 2008 [40]; Chen and Feng, 2011 [41]; Bishop et al., 2014 [42]; Gao et al., 2015 [43]; and Li et al., 2017 [44]. That is also the primary reason why we choose CHIP rather than other sources. Besides, CHIP also covers a survey of rural-to-urban migrants, which offers the advantage of comparing different subsamples on the same topic and contributes to a comprehensive understanding of it. Other macro-level data come from the China City Statistical Yearbook and the China Statistical Yearbook.
Econometric Model
An individual's employment probability is defined as a function of environmental regulation intensity and a set of control variables (Equations (5) and (6)), where i denotes an individual, j denotes a city, t denotes the year, and ure_ijt (the dependent variable) is a dummy variable that denotes employment status (employed = 1, unemployed = 0). Those "not in the labor force" are eliminated from the sample, which excludes retirees, students, homemakers, women who are pregnant or on maternity leave, and people on long-term sick leave. The age range is from 16 to 60 for men and 16 to 55 for women. er_jt represents the intensity of environmental regulations. Various methods can be used to measure it, including the emission density of different contaminants [45], the pollutant emission volume per unit of output [46], a comprehensive index composed of multiple pollution indicators [47,48], a proxy index such as per capita GDP [21,49], or cases of administrative penalties related to environmental protection [28,50]. The second method actually measures pollution intensity without excluding the effects of technological and industrial factors. The third is significantly affected by the weights assigned to the different indexes. Per capita GDP, as an environmental regulation index, is better suited to comparisons among countries than among regions of a country, and the last method still lacks consensus; Xu and Song (2010) found that regions with higher average incomes do not necessarily have higher levels of environmental regulation [51]. Therefore, this study chooses the first method and adopts the emission density of sulfur dioxide (SO2) in each city as the measurement index. Most studies establish a provincial index that measures the level of environmental regulations; considering the imbalances in regional development and the fact that economic levels vary greatly within a province, the index used in this paper, established at the city level, is more accurate and convincing.
The data on industrial SO2 emissions come from the China City Statistical Yearbook [52]. er2_jt is the quadratic term of the regulatory intensity, δ_j is an unobservable variable that does not vary over time, µ_t denotes the time dummy, and ε_ijt is the error term. In Equation (6), X_ijt represents the other factors that affect ure_ijt, covering two categories: individual demographic characteristics and macro-level data. First, gender is represented by gender_ijt (man = 1, woman = 0). Age is represented by age_ijt and its quadratic term age2_ijt (unit is %). Nationality is represented by nat_ijt (Han = 0, minority = 1). mar_ijt denotes marital status (single = 0, not single = 1). edu_ijt denotes years of formal education (excluding the number of years skipped or failed). Finally, hea_ijt denotes one's health condition (good = 1, not good = 0).
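Because the displayed forms of Equations (5) and (6) did not survive extraction, the specification they describe is presumably of the following latent-variable form (a reconstruction from the variable definitions above, not a verbatim copy of the original equations):

```latex
ure^{*}_{ijt} \;=\; \beta_0 + \beta_1\, er_{jt} + \beta_2\, er2_{jt} + X_{ijt}'\gamma + \delta_j + \mu_t + \epsilon_{ijt},
\qquad ure_{ijt} \;=\; \mathbf{1}\{\, ure^{*}_{ijt} > 0 \,\},
```

so that the "U" type relationship corresponds to β_1 < 0 and β_2 > 0, with the turning point of the index at −β_1/(2β_2).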
Next is the macro-level data. Fixed asset investment is denoted by fai_jt, which is measured as the ratio between total investment in fixed assets in urban areas and the GDP of each city. Fixed asset investment is one of the most important forces for economic growth and job creation in China. The variable fe_jt denotes government expenditure and is measured by the ratio between municipal expenditures and GDP. Owing to the strong control of the Chinese government over the economy, government expenditure is the main tool used to steer it, which in turn affects employment. Previous literature has shown that, in the long run, an increase in public investment contributes remarkably to employment [53].
The variable lp_jt denotes labor productivity, measured by the ratio between the annual gross output of all industrial enterprises above a designated size, deflated by the GDP deflator (2000 as the base year), and the annual average number of workers in each city. Productivity growth has a positive impact on corporate profits but a significant negative effect on labor demand per unit of product. Therefore, labor productivity growth has a significant negative effect on employment growth [54].
The variable rgdp_jt denotes the developmental level of the regional economy, measured by the real GDP (2000 as the base year) of each city. A higher economic development level often means more jobs created.
The variable secondary_jt denotes the industrial structure, measured by the share of secondary industry output value in GDP. Different industries have different effects on employment [55], and a higher share of secondary industry often means more jobs.
The variable open_jt represents openness, measured by the ratio between total actual FDI and the GDP of each city. In an open economy, international investment simultaneously creates absorption and crowding-out effects. On the one hand, foreign-capital enterprises bring investment that creates more employment opportunities; on the other hand, they can diminish domestic investment and stimulate improvements in production efficiency, which have a negative effect on domestic employment. infra_jt and pd_jt are used to measure regional infrastructure and urban population density, respectively. infra_jt represents the per capita area of paved roads in a city. Enterprises are more likely to invest in a city with better infrastructure, so more jobs will be created in these cities.
All macro-level data come from the China City Statistical Yearbook [52] and the China Statistical Yearbook [56]. To mitigate non-normality and heteroscedasticity, data at this level are logarithmically transformed.
In order to discuss the effects of regional and industrial heterogeneity, the cities involved are divided into eastern, central, and western regions according to the standard released by the National Bureau of Statistics (NBS) of China in 2003. The industry classification standard comes from CHIP2013, which includes 20 industries. The specific classification information and the matches between Chinese industries and US two-digit industries can be found in Table S1. The data sources and the meanings of the indexes can be found in Table 1.
General Results
CHIP2007, CHIP2008, and CHIP2013 are merged into pooled cross-section regression data, and the dependent variable is a dummy variable. The model (Equation (5)) is therefore estimated using the probit model, and year and region dummies are included in all estimations. The general results can be found in Appendix A. The reported coefficients represent the marginal effects of the variables at the mean (Table A1). Column (1) reports that regulatory intensity, measured by the proportion of industrial SO2 removed (er_jt), has a significantly negative effect on an individual's employment probability at the 1% level. The coefficient of the quadratic term of regulation intensity (er2_jt) is significantly positive at the 1% level, which proves that a U-shaped curve exists between environmental regulations and an individual's employment probability. The knee point of this U-shaped curve is approximately 0.515 according to the coefficients of er_jt and er2_jt. Compared with the average regional regulatory intensity in 2013, we find that the average intensity of environmental regulations in most cities is already in the upward phase of the U-shaped curve. From a regional perspective, the cities still in the downward phase are mainly located in western China (the precise results are shown in Table 2). Table A1 also shows that men have an advantage over women in finding a job, although the gap is narrow. A healthy body also significantly improves the probability of being employed. One more year of education increases the probability of being employed by 0.03%. The coefficients of age and its quadratic term suggest that the relationship between a worker's age and the probability of being employed is an inverted U-shape. The coefficients of lnsecondary_jt, lnopen_jt, lnfe_jt, and lnrgdp_jt (representing the logarithms of secondary_jt, open_jt, fe_jt, and rgdp_jt, respectively) are positive, which is consistent with our expectations. The result shows that improved infrastructure is a favorable factor for increasing employment probability. However, the coefficient of lnfai_jt is not consistent with our prior expectations, which indicates that the effect of fixed asset investment on employment growth varies among different periods. Considering that environmental regulations usually cannot have an immediate influence on employment, this lag may lead to an underestimate of the effect of environmental regulations. To avoid this interference, we use all macro independent variables lagged by one year in place of the original ones. The regression results are shown in column (4) of Table A1, which proves that the "U" type relationship still exists.
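As a minimal sketch of how such an estimate and its knee point can be reproduced (the file name and column names below are hypothetical, year and region dummies are omitted for brevity, and statsmodels is used as the estimation tool, which the paper does not specify):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical worker-level pooled cross-section with city-level regulation merged in.
df = pd.read_csv("chip_city_pooled.csv")
df["er2"] = df["er"] ** 2

controls = ["gender", "age", "age2", "edu", "hea",
            "lnfai", "lnfe", "lnlp", "lnrgdp", "lnsecondary", "lnopen", "lninfra", "lnpd"]
X = sm.add_constant(df[["er", "er2"] + controls])
y = df["ure"]                      # 1 = employed, 0 = unemployed

probit = sm.Probit(y, X).fit()
print(probit.get_margeff(at="mean").summary())   # marginal effects at the mean, as in Table A1

# Turning (knee) point of the U-shaped relationship implied by the regulation coefficients
b1, b2 = probit.params["er"], probit.params["er2"]
print("knee point:", round(-b1 / (2 * b2), 3))    # the paper reports roughly 0.515
```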
Robustness Check
Endogeneity is a potential problem for some of our regulation indexes. For example, pollution intensity and regional employment could be jointly determined: cities with high economic activity that attract workers through higher salaries usually have poorer environmental quality and stricter environmental regulations [57]. Endogeneity concerns should therefore be examined in the results. We use a substitute regulation index as a check: the proportion of industrial soot removed (erb_jt), measured by the ratio between the volume of industrial soot removed and the volume of industrial soot produced (Equation (7)). Column (2) (Table A1) reports the results of the probit model with erb_jt as the regulatory intensity, and column (5) (Table A1) reports the probit estimation with variables lagged by one year, based on CHIP2007, CHIP2008, and CHIP2013, with this index. Both show that there is a "U" type relationship between environmental regulations and an individual's probability of being employed. Therefore, our conclusion is robust.
The indexes above are, to a degree, not a systematic reflection of environmental regulations. Therefore, following Wang and Liu (2014) [58], we construct a comprehensive measurement index. Specifically, the linear weighted sum method is used to build a comprehensive index of environmental regulation intensity (ers_jt) based on the proportions of industrial SO2 and soot removed. First, we standardize er_jt and erb_jt (Equations (8) and (9)):
pr^s_jk = (pr_jk − min(pr_k)) / (max(pr_k) − min(pr_k)),
where pr_jk represents the proportion of pollutant k removed in city j, max(pr_k) and min(pr_k) represent the maximum and minimum of pr_jk, respectively, and pr^s_jk denotes the standardized value of pr_jk. Second, as the shares of industrial SO2 and soot emissions vary across cities, and the emission levels of different pollutants differ even within a city, it is necessary to assign different weights to each pollution emission index of each city so as to obtain a precise description of the change in regulatory intensity of each city. Here, we set an adjustment coefficient (A_jk) as the weight:
A_jk = (pe_jk / ∑_j pe_jk) / (gdp_j / ∑_j gdp_j), (10)
i.e., the ratio between the proportion of the emission volume of pollutant k to the total emissions in China and the ratio of the GDP of city j to national GDP. When this coefficient is added to our indexes, the intensity of environmental regulation of a city will be stronger than that of other cities with the same er_t or erb_t if its emission volume of SO2 or soot is relatively larger. Since the proportion of removed SO2 or soot is an absolute value and GDP reflects the scale and level of economic development of a city, the combination of both can reflect the regulatory intensity of different cities more objectively. ers_jt is then expressed as the weighted linear sum of the standardized removal proportions, using A_jk as the weights (Equation (11)). Columns (3) and (6) in Table A1 report the results of the probit model with ers_jt as the regulatory intensity; column (6) reports results with the regression variables lagged by one year. They all show that there still exists a "U" type relationship between environmental regulations and an individual's probability of being employed, further confirming the robustness of our conclusion.
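A minimal sketch of how such a comprehensive index could be assembled from city-by-pollutant panel data is given below; the file and column names are hypothetical, the standardization is taken over all years within each pollutant, and the final weighted sum is an assumed form of Equation (11) rather than a verbatim reproduction of it:

```python
import pandas as pd

# Hypothetical long-format data: one row per city, year and pollutant (SO2 or soot).
# pr  = proportion of the pollutant removed, pe = emission volume, gdp = city GDP.
df = pd.read_csv("city_pollutant_panel.csv")

# Min-max standardization of the removal proportion within each pollutant (Eqs. (8)-(9))
g = df.groupby("pollutant")["pr"]
df["pr_s"] = (df["pr"] - g.transform("min")) / (g.transform("max") - g.transform("min"))

# Adjustment coefficient A_jk (Eq. (10)): city share of national emissions of pollutant k
# relative to the city share of national GDP, computed year by year.
df["emis_share"] = df["pe"] / df.groupby(["year", "pollutant"])["pe"].transform("sum")
df["gdp_share"] = df["gdp"] / df.groupby(["year", "pollutant"])["gdp"].transform("sum")
df["A"] = df["emis_share"] / df["gdp_share"]

# Comprehensive regulation index ers_jt: weighted linear sum over the two pollutants
ers = (df["A"] * df["pr_s"]).groupby([df["city"], df["year"]]).sum().rename("ers")
print(ers.head())
```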
Is There Heterogeneity in the Regulatory Effects across Regions?
This subsection explores whether the regulatory effects vary across regions. As the level of economic development is highly imbalanced among the three regions, it is necessary to consider local situations and whether different cities are in the downward or upward phase of the U-shaped curve. Table 2 presents the differences in regional regulatory intensity in 2013.
Table 2 suggests that the eastern and central regions are already in the upward phase of the U-shaped curve (the value of the knee point is approximately 0.515), but the western region is near the knee point. The double dividend effect, a positive relationship between productivity growth and employment growth, has still not appeared in the western area, which is consistent with Lu (2011) [59]. Therefore, the tradeoff between employment and the environment should be weighed carefully in different regions.
Classifying high-pollution industries following Li (2016), the value of industrial production of pollution-intensive industries (including coal mining and dressing; ferrous metals mining and dressing; nonferrous metals mining and dressing; farm product processing; the textile industry; paper making and paper products; petroleum processing, coking and nuclear fuel processing; raw chemical materials and chemical products; chemical fibers; nonmetal mineral products; smelting and pressing of ferrous metals; smelting and pressing of non-ferrous metals; and electric power, heat power production and supply) in the eastern region was approximately 2.8 and 5.3 times greater than in the central and western regions, respectively, during 2007-2013 [30]. Therefore, the average regulation in the eastern region is more stringent. However, the standard deviation is larger in the central region, which indicates vast internal diversity within that region, and this difference between regions is not accidental. When different colors are used to represent the values of er_jt of each city on the map (Figure 5), it can be seen that the difference in the regulatory intensity index among central cities is larger, and that the overall intensity of environmental regulations increased from 2006 to 2014. Over this period, we can also see that the environmental regulations of eastern cities are more stringent.
Table A2 (Appendix A) displays the principal results of the effects of environmental regulations on an individual's probability of being employed across regions.
The results suggest that the "U" type relationship between the intensity of environmental regulations and an individual's probability of being employed exists in the eastern and central regions, although the estimate for the central region is not significant. For the western region, the employment probability follows an insignificant inverted U-shaped pattern with respect to the intensity of environmental regulations.
This result indirectly reflects the imbalance in regional economic development. Whether measured by per capita industrial value added or by the proportion of employment, the eastern region ranks higher than the central and western regions. Moreover, government executive capacity in the eastern region is higher, so new regulatory policies can be carried out effectively, whereas economic growth trumps green priorities in the western and central regions; there, such policies do not function well or totally lose their regulatory function. The probability of being employed in the eastern region is thus more sensitive to changes in regulatory intensity.
With respect to the western region, the industrial structure is not diverse enough, the proportion of high-pollution, high energy-consuming industries is relatively lower, and the pollution intensity is therefore weaker. As environmental regulations become increasingly stringent, more eastern and central enterprises tend to transfer their factories to the west, which may lead to a "Pollution Haven". However, when the intensity of regulations reaches a given threshold level, the cost savings from weaker regulatory intensity can no longer cover the extra expenses of producing in the west; enterprises will then relocate to eastern or central cities, resulting in a decrease in employment opportunities in the west. The estimation (Table A2) shows that the value of the knee point of the western region is approximately 1.45, far above the average regulatory intensity; thus, more stringent regulations will benefit the employment of western cities.
The difference in environmental regulatory intensity among cities poses new challenges to balanced development among regions. More low-value-added and traditional industries will be forced to transfer to the western region when they cannot adjust to the regulatory intensity in the eastern and central regions. Divergent regulatory policies aimed at different regional situations should be adopted to obtain a double dividend effect. Meanwhile, local governments should take a long-term perspective on the effect of environmental regulations on employment. This trade-off between employment and environmental protection must be given great emphasis.
Is There Heterogeneity in the Regulatory Effects across Industries?
Do the regulatory effects vary by industry due to differences in resource utilization, pollution emissions, technological level, bargaining power, and other factors among industries? This subsection explores this heterogeneity with reference to the industrial classification standard used by CHIP2013. As the estimated results of the probit model are in accordance with the Linear Probability Model (LPM), and for the convenience of comparing coefficients and identifying the average treatment effect [60], we use the LPM to conduct the regression analysis for these 19 industries separately (the sample from the International Organizations industry is so small that we eliminate it from our model). Each row in Table A3 (Appendix A) pertains to an industry. This allows us to compare the differences in the regulatory effects across the relevant industries.
The "U" type relationship between environmental regulations and an individual's probability of being employed still exists in most industries.The effects of regulations are particularly significant in industries like manufacturing, the wholesale and retail trade, construction, the management of water conservancy, environmental and public facilities, and education.However, the employment probability follows an inverted U-shaped pattern related to environmental regulations in service industries like finance and insurance, real estate, the management of water conservancy, environmental and public facilities, health, and social work.This is consistent with our expectation.As most service industries do not produce emissions and are expanding in China, at the same time, most workers are likely to transfer to these industries from those strongly affected by environmental regulations.There exists positive short run effects of environmental regulations on these service industries until the point where the level of environmental regulations meet the knee point of each industry.While the results also show that with the increase of environmental regulation intensity, an individual's probability of being employed in mining and quarrying industry will increase instead of a U-shaped pattern.The reason may be that traditional pollution-intensive industries are regarded as a major regulating object during the early stages of pollution regulation, which in turn contributes to them becoming pioneer in industrial upgrading.It can be found that China's environmental protection laws and regulations were aimed at industries like electricity, coal, steel, cement, non-ferrous metals, coke, and other pollution-intensive industries during the early stages, such as Guidance Directory of Industrial Restructuring of 2005, 2007, and 2011 and The Temporary Provision Rules for Promotion of Adjustment of Industrial Structure which was deliberated on and passed by the executive meeting of the State Council in March 2008, etc.On one hand, those regulations placed great stress on the involved industries' sustainability, on the other hand, they also made them gain first-mover advantage in industrial transformation and upgrading, the elasticity coefficient of employment in this industry to the change of environmental regulations is relatively smaller, and even positive under certain conditions.When these 19 industries are divided into three categories according to the "Regulation on Classification of Three Sectors (GB/T 4754-2011)" promulgated by NBS in 2012 (the specific classification details can be found in Table S2), the results (Table 3) support the existence of a "U" type relationship in three categories, but it is not significant in primary industry.Since primary industry produces relatively little pollution, the impacts of environmental regulations on employment in these industries are quite limited.Therefore, after removing samples related to primary industry, the significant "U" type relationship in secondary and tertiary industry further confirms our conclusion.
The heterogeneity among industries illustrates the mobility of labor between industries. Generally, industries such as the production and distribution of electricity, heating power, gas, and water; the management of water conservancy, environmental, and public facilities; health; and social work are controlled by governmental organizations or public institutions that hold a degree of market dominance. Their anti-risk capacity helps shield them from policy shocks, so the effects of regulations are limited. In addition, these industries have a close relationship with the government, and their bargaining power enables them to negotiate the regulatory conditions. All these factors make them more attractive to workers, since they offer steadier jobs than other industries.
1 The numbers in parentheses are robust standard errors. * significant at 10% level; ** significant at 5% level; *** significant at 1% level. 2 The reported results represent the marginal effects of the variables, estimated by the probit model; other control variables are not presented here because of space considerations.
Competitive industries such as manufacturing, construction, and wholesale and retail trade, by contrast, are exposed to the impacts of regulations. In the short run, increasing regulatory stringency raises the operating costs of these industries, which in turn forces them to downsize production or exit the market, and workers in these industries transfer to others facing less stringent regulations. Therefore, more labor flows to service industries, government organizations, and public institutions. In the long term, once the various traditional industries finish upgrading their industrial structure and adjust to the limits of regulation, emerging industries transformed from the traditional ones will create new jobs, and a backflow of labor into these industries will occur. That is why we see an inverted U-shaped curve between the intensity of environmental regulations and the probability of being employed in finance and insurance, real estate, and other service industries, governmental organizations, and public institutions. Most rural migrant workers engage in manufacturing and construction [61], in which the effect of environmental regulations on employment is very significant. It is therefore crucial to analyze the heterogeneity in the regulatory effects across groups of labor, especially rural migrant workers.
Is There Heterogeneity in the Regulatory Effects across Hukou of Labor?
The household registration system, the separated labor market resulting from the urban-rural dual structure, and differences in social insurance, information acquisition, and employment opportunities across hukou types lead to a noticeable gap in available job resources, career stability, and anti-risk capacity. Urban-hukou employees are usually relatively highly skilled, possess employment security, and have longer average years of education, which strengthen their anti-risk capacity. The effect of regulations is therefore weaker on them than on rural-hukou employees, whose jobs are often unstable. The risk of non-agricultural employment is principally relieved by the rationality of farmer households: owing to the constraints imposed by the urban-rural dual structure and the land equalization system as the basic property relationship, the endogenous rationality of agricultural households and rural communities can internalize external risks. This means that hundreds of millions of rural migrant workers bear this risk with a risk-free asset (land distributed equally through non-market means to each family unit) [62].
Table 4 provides estimated results for different hukou types of labor using Equation (5). Columns (1)-(2) and (3)-(4) display the changes in employment probability caused by different regulatory intensities for farmers and rural migrant workers. The coefficients indicate that the "U" type relationship between environmental regulations and an individual's employment probability still exists. Columns (5)-(6) present the results for urban workers, which show a significant "U" type relationship. Specifically, the effects of environmental regulations on the employment probability of rural workers, regardless of gender, are not significant. This can be explained by the fact that most rural workers are employed in primary industries that do not produce SO2, soot, dust, or other pollutants, so increasing the intensity of environmental regulations has little influence on employment in these industries. However, the results indicate that migrant female workers experience a stronger impact from environmental regulations than males, which shows that, to a certain extent, gender inequality still exists in the job market. Moreover, most migrant workers are temporary workers who do not have labor contracts with enterprises. According to data from the annual Investigation Report on Rural Migrant Workers in China [61], the proportion of migrant workers without a labor contract was 56.1%, 58.7%, 62%, 63.8%, and 64.9% in 2012, 2013, 2014, 2015, and 2016, respectively, showing an upward tendency. When environmental regulations become more stringent, employers tend to improve production technology or downsize their factories to save costs, both of which have negative effects on employment. Owing to their disadvantages in labor productivity and social status, migrant female workers are more likely to be fired. Urban workers usually do not own any land with which to internalize external risks, and the majority of their occupations are in manufacturing and construction (more than 40%), which are the primary targets of environmental regulations. It is therefore no surprise to find a significant influence of environmental regulations on urban workers' employment. These differences are consistent with our expectations, indicating that urban workers are more vulnerable to environmental regulations. This result also warns us of the differing effects of environmental regulations on different hukou types of labor; pertinent regulatory policies aimed at urban and rural areas should be taken into account.
Discussion and Conclusions
Environmental protection is becoming more important to individuals and, to some degree, has become an indispensable component of improving quality of life. The Chinese government has moved away from the notion of attending only to economic growth while ignoring the costs of environmental destruction, and regulatory environmental management is becoming a new indicator in the government's performance appraisals. However, we should note that investment in the treatment of environmental pollution will increase as pollution levels rise, and that the double dividend effect will not occur until regulatory intensity reaches the knee point of the U-shaped curve between regulatory intensity and an individual's employment probability. In the short run, the negative effects of regulations on employment may act as a constraint on policymakers at all levels. This paper estimates the effects of environmental regulations on an individual's employment probability with worker-level data from CHIP2007, CHIP2008, and CHIP2013. The principal finding is that the employment probability follows a U-shaped pattern in relation to environmental regulatory intensity in China. When accounting for heterogeneity in the regulation effects across regions, industries, and groups of labor, we find that the effects of regulations on an individual's employment probability are particularly harsh in the eastern region, in secondary and tertiary industry, and among urban workers.
Our study of environmental regulations in different industries suggests that the "U" type relationship between environmental regulations and an individual's employment probability exists in most industries. For service industries such as finance and insurance, real estate, the management of water conservancy, environmental and public facilities, health, and social work, the "U" type relationship turns into an inverted U-shaped one. When the 19 industries are divided into three categories, the results suggest that the "U" type relationship exists in all of them, but it is not significant in primary industry.
Finally, our results suggest that the regulatory effects differ across the three hukou types of workers; generally, the effects of regulations on urban and migrant female workers are stronger.
The management of environmental regulations should focus on environmental improvement while weighing factors such as the economy, the environment, and people's livelihoods. This paper stresses the importance of coordinating the relationship between environmental regulations and employment with developmental foresight.
Subsidy policies and other incentives may be alternatives for actively guiding the more affected industries (such as manufacturing and construction) to improve their production technology. The heterogeneity in the regulation effects across regions demands region-specific regulatory policies. Environmental regulations will contribute to industrial restructuring and upgrading when they are properly used.
Author Contributions: Analyzed the data: FL MZ. Contributed to the writing of the manuscript: FL KX.
Figure 1. Emissions of industrial SO2 and Soot (Dust) from 2000 to 2015 (10,000 ton). Source: China Statistical Yearbook on Environment. Note: Industrial SO2 emissions refer to the volume of SO2 discharged in industrial production processes. Soot emissions refer to the volume of particulates in smoke emitted in the process of fuel burning by industrial activity. Dust emissions refer to the volume of particulates emitted by industrial production processes and suspended in the air for a given period of time. As the statistics for industrial Soot and Dust were merged into one index after 2010, they are reported as a sum before 2011 for the sake of consistency.
Figure 2. Changes in SO2 in China between 2005 and 2014. Note: (a) Average SO2 for 2005-2007 from the OMI instrument on the Aura satellite, expressed in Dobson Units (1 DU = 2.69 × 10^16 molecules cm^-2). (b) Same as (a) but for 2011-2014. The maps in the figure were generated by Joshua Stevens and Jesse Allen, and the original image is available from: https://www.earthobservatory.nasa.gov/IOTD/view.php?id=87154.
Figure 3. Pollution intensity (ton per million yuan of value-added of industry) for SO2 and Soot (Dust) from 2000 to 2015. Source: China Statistical Yearbook on Environment. Note: As the NBS of China has released only the growth rate of industrial value added, not its level, after 2006, we calculate the value-added of industry from its accumulated growth rate from 2007 onwards and deflate it by the PPI with 2000 as the base year.
Figure 4. Environmental regulation intensity measured by the SO2 and soot removed proportion. Source: China Statistical Yearbook on Environment and authors' calculation.
Figure 5. The dynamic change of the environmental regulation index from 2006 to 2015. Note: This index is measured by er_jt, calculated by the method explained in Section 3. The maps in the figure are generated using ArcGIS (version 10.2); each color represents a range of the removed proportion. A higher numerical value means a stronger intensity of environmental regulations.
Table 1. Data sources and brief descriptive statistics 1.
Table 1 shows that most observations have a job and that their education is above junior middle school. Openness varies greatly across the sample: the smallest values are 0.25, 0.19, and 0.12 percent of the size of the largest ones in 2007, 2008, and 2013, respectively. Most individuals are married and of Han nationality. We also find that China's economic level varies across regions: the highest city real GDP is approximately 83.76, 78.73, and 75.77 times that of the lowest in 2007, 2008, and 2013, respectively.
Table 2. Regional intensity of environmental regulation statistics in 2013.
Table 3. Heterogeneity in the regulatory effects by three strata of industry 1,2.
Table 4. Heterogeneity in the regulatory effects across different hukous of labor 1,2. | 2018-12-25T22:00:29.736Z | 2018-07-09T00:00:00.000 | {
"year": 2018,
"sha1": "d6ebd1335633573d656cbe1b9926a6e750d797e1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/10/7/2373/pdf?version=1531102416",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d6ebd1335633573d656cbe1b9926a6e750d797e1",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
119474015 | pes2o/s2orc | v3-fos-license | Colossal Magnetoresistance using the Small Polaron Picture with Finite Bandwidth Effects
We present a small polaron picture and show that finite bandwidth effects are important to understand colossal magnetoresistance. Besides the polaron size parameter, we show that there is another parameter (adiabaticity parameter) that is relevant to studying magnetoresistance. We find that for fixed values of the polaron size parameter an increase in the adiabaticity parameter increases the magnetoresistance. The magnetic transition is studied within a mean field approach. We point out important oversights in the literature. We find that for the reported values of the bandwidth (based on band structure calculations) and for experimentally determined values of activation energy and Debye frequency, the calculated values of the magnetoresistance compare favorably with experimental ones. We calculate the optical conductivity too and find that there is reasonable agreement with experiment.
I. INTRODUCTION
Studying perovskite manganites of the form A 1−δ B δ MnO 3 (A=La, Pr, Nd, etc.; B=Sr, Ca, Ba, etc.) as a function of doping δ has lead to a variety of rich phenomena. [1][2][3] Of these La 1−δ Ca δ MnO 3 is perhaps the simplest one because the ionic size difference between La 3+ and Ca 2+ is less than 3%. In La 1−δ Ca δ MnO 3 , at low doping, as temperature is lowered the system undergoes orbital ordering and at even lower temperatures a layered antiferromagnetism is observed. [4] At intermediate doping (δ ∼ 0.2 − 0.4), simultaneous metal-insulator (MI) and paramagnetic-ferromagnetic transitions occur in this compound as the temperature is decreased. At even higher doping (i.e., greater than δ ∼ 0.5) charge ordering is realized while at δ ∼ 1 antiferromagnetic order results at low temperatures. [1] To explain the magnetic ordering at low doping, de Gennes [5] some time ago had proposed double exchange mechanism wherein, on account of strong Hund's coupling between the spin of a mobile hole and the spin of the localized electrons, the hopping integral of the itinerant hole is reduced by half of the cosine of the angle between the 3/2 spins of the localized electrons on neighboring sites. Furukawa pioneered in demonstrating the usefulness of the dynamical mean field theory in understanding the properties of double exchange systems. [6] However it was recognized by Millis et al. [7] that double exchange mechanism itself is not sufficient to explain colossal magnetoresistance (CMR). Millis and co-workers have proposed a model [8] which uses Jahn-Teller coupling between electrons and nuclei. However this model treats phonons classically and does not seem to yield satisfactory results away from half-filling. Furthermore, these authors have studied the phenomenon using only the polaron size parameter (the ratio between the hopping integral and the binding energy). They have not studied the effect of another dimensionless parameter -the ratio between the hopping integral and the Debye frequency (adiabaticity parameter). Röder et al. have also stressed the importance of Jahn-Teller coupling in understanding these manganites. [9] Lee and Min too have studied polaron transport in manganites. [10] However these authors do not take into account the renormalization of the electron-phonon interaction due to finite bandwidth effects. Jaime et al. [11] and Worledge et al. [12] have demonstrated that their high temperature resistivity data fits well to an adiabatic small polaron model. All in all there is growing evidence for a small polaron picture to explain CMR.
In this paper we study CMR phenomena in perovskite manganites by considering the carriers as small polarons whose high temperature behavior is hopping type and low temperature behavior is metal-like. [13] Our model includes effects due to electron-phonon coupling and on-site Hund's coupling between itinerant holes and localized electrons. To understand MI transition we simplify the hamiltonian by accounting for the Hund's coupling through the double exchange hopping term. [5] By including finite bandwidth effects, and using a suggestion by Toyozawa [14] we obtain an expression for the small polaronic wavefunction. Using this nearly small polaronic wavefunction we obtain the dynamic conductivity. We find that in the presence of a magnetic field both double exchange and finite bandwidth effects lower the resistivity and shift its peak to higher temperatures and thus can lead to CMR. In our picture the main reason for the CMR is due to the renormalization of the electron-phonon interaction (or the lattice distortion) due to finite bandwidth effects. One of our important conclusions is that for a fixed value of the polaron size parameter the magnetoresistance increases as the adiabaticity parameter increases.
Within a mean-field approach, we calculate the magnetization (M) of the localized spins. The magnetization M is a result of the effective magnetic field generated by the band-like motion of the electrons; in other words, the itinerant electrons, due to the strong Hund's coupling, polarize the localized spins. We have studied the magnetization both with and without external magnetic fields and found that our M values are qualitatively in agreement with experimental results. [15] Furthermore our magnetoresistance values also compare favorably with experimental ones. [15] We also calculate the optical conductivity above the metal-insulator transition temperature. We find that the optical conductivity scaled by the DC conductivity depends only on the renormalized electron-phonon coupling and the Debye frequency. We calculate the optical conductivity as a function of frequency at various temperatures and find reasonable agreement with experiments. [16]
II. BACKGROUND
Earlier on Zener proposed a double exchange model where conduction from a Mn^{3+} to a Mn^{4+} via an O^{2-} can be understood in terms of the simultaneous transfer of two electrons: one electron from a Mn^{3+} to a neighboring O^{2-} and another from the O^{2-} to a neighboring Mn^{4+}. De Gennes further realized that since the Hund's coupling is much larger than the hopping integral t, the hopping term gets modified to be t cos(θ/2) where θ is the angle between neighboring spins that are treated classically. More recently it was pointed out that there is a Berry phase factor that enters the hopping term. [17] However, we feel that the geometric phase will not significantly alter the basic physics behind CMR. Then, to understand the propagation of holes of doping level δ along a chain of spins (all with spin S) coupled antiferromagnetically through the coupling parameter J, one needs to minimize the energy E = -2tδ cos(θ/2) + |J|S^2 cos(θ) to obtain the ground-state canting angle cos(θ/2) = tδ/(2|J|S^2).
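The canting-angle result follows from a one-line minimization of the energy expression just quoted; a worked check (using only that expression and the half-angle identity sin θ = 2 sin(θ/2) cos(θ/2)) is:

```latex
\frac{dE}{d\theta} = t\delta\,\sin(\theta/2) - |J|S^2\sin\theta
                   = \sin(\theta/2)\left[\,t\delta - 2|J|S^2\cos(\theta/2)\,\right] = 0
\quad\Longrightarrow\quad \cos(\theta/2) = \frac{t\delta}{2|J|S^2}.
```

The trivial solution sin(θ/2) = 0 corresponds to the uncanted antiferromagnetic chain.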
Within a mean-field approach one can approximate cos(θ) by ⟨S_i/S · S_j/S⟩ = M^2/M_S^2, where M/M_S is the scaled magnetization. Thus the hopping term gets modified to t √[(1 + M^2/M_S^2)/2]. However, this double exchange modification of the hopping integral by itself does not explain the observed metal-insulator transition in the intermediate doping regime.
To understand the colossal magnetoresistance phenomenon we will now use small polaronic picture. First we will provide the motivation for this approach. One of the striking features of a single small polaron is that the shape of the inverse of the mobility as a function of temperature is quite similar to that of the resistivity observed for the manganite systems that display colossal magnetoresistance. To understand how this comes about we will present our understanding of a small polaron first by ignoring spin effects. In systems like the transition metal oxides the electron couples to the vibrational modes of the host molecule (say the breathing mode). Due to strong electron-phonon coupling the molecular equilibrium configuration gets distorted. The electron gets bound in the distortion to form a polaron. This composite entity, i.e., the electron plus distortion, is the polaron. When the distorted region is less than a lattice spacing (i.e., 1/2 bandwidth < binding energy) we have a small polaron. The polaron propagates just like an electron. However, the hopping integral gets modified because one has to take into account the wave functions of the host molecules which correspond to displaced simple harmonic oscillators. The overlap between the simple harmonic oscillator wavefunctions between adjacent sites decreases with increasing temperature because as temperature increases higher eigenfunctions with more nodes come into play. The band energy thus assumes the form ǫ k ∼ −2t(cos(k x a) + cos(k y a) + ..) exp[−g 2 0 coth(ω 0 /2k B T )] with g 0 being proportional to molecular distortion and ω 0 being the Debye energy. As can be seen the band energy at high temperatures decreases exponentially with temperature.
We are dealing with narrow band systems that are non-degenerate and hence the mobility is given by µ = eD/(k_BT) where D is the diffusivity. At higher temperatures band narrowing occurs and, when coherent motion is no longer possible, the electron becomes localized and propagates by hopping. The diffusivity is then given by D ∼ a^2/τ where the lifetime τ, for adiabatic transport, is given by the classical jump rate (see Holstein's article [18] for details). On the other hand, at low temperatures band-like conduction is possible and the diffusivity is given by D ∝ v^2 τ with v being the velocity and τ still given by the above formula. The crossover from band-like motion to hopping conductivity takes place when the uncertainty in energy (h/τ) is of the order of half the bandwidth. Now, it is important to point out that the total mobility is the sum of the band and hopping mobilities (see Friedman's paper [19] for details). The total mobility is then given by an expression in which q_e is the electronic charge and β = 1/(k_BT). When we include Hund's coupling, t gets modified to t √[(1 + M^2/M_S^2)/2]. Now when a magnetic field is switched on, the value of the effective hopping integral increases, the lattice gets less distorted (the effective g_0 decreases), the resistivity decreases, and band-like motion persists longer (i.e., the peak position of the resistivity shifts to higher temperatures). Thus the system can have a large drop in resistivity at T_C when a magnetic field is applied.
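This argument can be made concrete with a few lines of numerics. The sketch below evaluates the polaronic band-narrowing factor exp[-g_0^2 coth(ω_0/2k_BT)] quoted above and the double-exchange hopping t_DE = t √[(1 + M^2/M_S^2)/2]; the parameter values (t, g_0^2, T_D) are illustrative guesses chosen to be consistent with the ranges quoted later in the paper, not fitted values, and the script only shows qualitatively why raising the magnetization raises the effective hopping while the narrowing grows with temperature.

```python
# Illustrative numbers only (not fitted values): band-narrowing factor and
# double-exchange hopping for an assumed small-polaron parameter set.
import numpy as np

kB = 8.617e-5              # Boltzmann constant in eV/K
w0 = kB * 500.0            # Debye energy for T_D ~ 500 K
t = 0.25                   # bare hopping integral in eV (assumed order of magnitude)
g0_sq = 4.0                # assumed coupling: activation energy g0^2*w0/2 ~ 1000 K * kB

def narrowing_factor(T):
    """Polaronic band-narrowing factor exp[-g0^2 * coth(w0 / (2 kB T))]."""
    return np.exp(-g0_sq / np.tanh(w0 / (2.0 * kB * T)))

def t_de(m, t0=t):
    """Double-exchange hopping t*sqrt((1 + m^2)/2) for scaled magnetization m = M/M_S."""
    return t0 * np.sqrt((1.0 + m**2) / 2.0)

for T in (100.0, 200.0, 300.0):
    print(T, narrowing_factor(T))   # the effective bandwidth collapses as T grows
print(t_de(0.0), t_de(0.8))         # magnetizing the sample raises the effective hopping
```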
III. ADIABATIC TRANSPORT OF SMALL POLARONS
To study transport we use double exchange modification to include effects due to on-site Hund's coupling between itinerant holes and localized electrons and take the total hamiltonian to be where c j (a q ) is the hole (phonon) destruction operator, ij corresponds to nearest neighbors, ω q is the optical phonon frequency (h = 1), g q ω q is the hole-phonon coupling, and t DE = t (1 + M 2 /M 2 S )/2, M the magnetization, M S the saturated magnetization, and N is the number of sites. Here it should be mentioned that by allowing only one electron per site the restriction on two electrons of opposite spin to occupy the same site is equivalent to Pauli blocking and can be more severe than hard core repulsion (i.e., it can lead to higher energies). Furthermore the above hamiltonian corresponds to assuming a single orbital per site which on account of Jahn-Teller splitting may be justified. Actually, we feel that only one orbital ought to be involved in the transport process as tunneling between two similar potential wells is more likely than between two dissimilar potential wells (i.e., tunneling between same orbitals is more likely because of resonant tunneling). Then the electron lowers its energy better through enhanced hopping or lower kinetic energy. Furthermore the experiments of A. Lanzara et al. [20] are in agreement with our claims. We will elaborate on this after we present our polaronic picture.
We will first outline the procedure for obtaining the resistivity for the case where the hopping term is small compared to the binding energy (i.e., the true small polaron case) and then proceed to incorporate finite bandwidth effects. We will now perform the so-called Lang-Firsov transformation [21]H = e S He −S to diagonalize the hamiltonian with The resulting hamiltonian is given bỹ where and q g 2 q ω q N is the binding energy. Here it should be pointed out that to obtain the above transformed hamiltonian we have used the following approximation j,k, q The above relationship is exact when g 2 q ω q is independent of q. The single small polaron eigen state of the hamiltonian in Eq. (2) is given by where |i is the molecular orbital eigenstate at site i and |..n q .. is the product of the eigenstates of the molecules at various sites executing simple harmonic motion with phononic occupation number n q . The above wavefunction is exact in the limit of the ratio of the hopping term to the binding energy being vanishingly small. In Eq. (4) the first term involving the hopping term is the small parameter. If the dominant transport mechanism corresponds to diagonal processes (i.e., number of phonons in each state q remains unchanged) phase coherence is maintained when the particle propagates. In fact then the particle moves like a Bloch electron and forms energy bands with energy being given by the thermal average of the first term in Eq. (4) (see Appendix A for details).
where n k = c † k c k and ... corresponds to thermal average. We will now calculate the conductivity for localized states, i.e., the hopping conductivity. The polarization operator is given by Then the current operator is given by Using the above form of the current operator and the many-body states obtained from the single particle states |i |..n q .. and taking ω q = ω 0 and g q = g 0 , we can obtain the conductivity to be (see Appendix B) where n c is the density of carriers. Furthermore, it should be noted that we need 2g 2 0 csch(βω 0 /2) >> 1 for Eq. (10) to be valid. Now the mobility, for a system of non-degenerate electrons, is given by the Einstein relation µ = q e Dβ where D is the diffusivity. In the region of interest, i.e., around the metal-insulator transition, we expect band narrowing to be sufficiently strong so that the Fermi energy is not much larger than the thermal energy. Since the hopping-regime diffusivity is given by D hop = a 2 /(6τ ), we readily obtain the scattering time τ to be The above expression for τ corresponds to the non-adiabatic regime (or t << ω 0 ). As for the adiabatic case, τ is given by which for high temperatures reduces to the classical case with g 2 0 ω 0 /2 corresponding to the activation energy. The diffusivity for band conduction is given by where d is the dimension of the system and τ is given by the same expression as in the hopping case.
[22] Then based on Friedman's work [19] we take the total mobility (µ T ) to be the sum of the band mobility and the hopping mobility and hence the total resistivity (1/ρ = n c q e µ T ) to be given by where θ = g 2 0 . Here we note that Friedman's analysis also accounts for how an electron may seem localized yet it can have translation invariance symmetry intact -the reason being that the mobility is the sum of the hopping and band-like mobilities and hence will always have a component, no matter how small, that is metallic.
IV. SMALL POLARON PICTURE WITH FINITE BANDWIDTH EFFECTS
We will now include finite bandwidth effects in our small polaronic picture. However we will still treat the ratio tDE 2g 2 0 ω0 as small. The polaronic wave function now spreads and barely extends to the nearest neighboring sites. Then the eigenstate of the nearly small polaron is given by (see Toyozawa's paper also [14]) whereX i is given byX In the above equation B(j) = 0 for | R j | > a. Furthermore B(j) and α q are to be determined by minimizing the single small polaronic energy. Upon taking the expectation value of the hamiltonian with respect to a small polaron state of momentum k given by |Ψ k = i e k· Ri |Ψ i one gets and where we have neglected the small valued vibrational overlap factors ..n ′ q ..|X † iX j |..n q .. for i = j. Upon minimizing the polaron energy with respect to α q one obtains For small values of tDE 2g 2 0 ω0 , on using the constraint that j B 2 (j) = 1, one obtains from the above equations B(0) ≈ 1 and for | R j | = a. The above results are similar to those obtained by Gosar. [23] Using the above eigenstate |Ψ i for the small polaronic state and again using the procedure outlined in Appendix B for deriving the conductivity in the localized regime, one obtains (on neglecting B(j) for j = 0 due to its small contribution) the same expression for conductivity but with the lattice distortion g 2 0 renormalized withλ q = e i q· Rj 1 − e i q· δ α q . Here it should be pointed out that the authors of Ref. [10] did not take into account the renormalization of λ q due to the finite bandwidth effects. They also failed to recognize thatγβ >> |ǫ k − ǫ p | while evaluating the integral with respect to time in order to obtain the expression for conductivity. Now, upon taking ω q = ω 0 and g q = g 0 , one obtains the same expression for the total resistivity given by Eq. (15) where θ is expressed as follows with z being the coordination number. [24] Here, it should be noted that we need 2θcsch(βω 0 /2) >> 1 for Eq. (15) to be valid. Furthermore, the optical conductivity above the metal-insulator transition (where only the conductivity due to localized carriers dominates) is given by whereγ 0 = θω 2 0 cosech(βω 0 /2). We note, from Eq. (27), that the optical conductivity scaled by the DC conductivity depends only on the parameterγ 0 .
We will now study the magnetic transition within a mean field approximation. The magnetization ratio M/M_S is not very sensitive to the type of approximation (see Ref. [25] for a weak Hund's coupling treatment of the magnetization problem). The magnetization ratio is given by a mean-field expression with H_eff = λ M/M_S. On using the condition that M/M_S → 0 as T → T_C, we get for S = 3/2 the relation λgµ_B = 1.2 k_B T_C. Estimating the transition temperature T_C accurately is difficult, and we will only give an order-of-magnitude estimate for it and provide a qualitative feel for its dependence on the various physical parameters of the system. Above the transition point the electrons are localized and form small ferromagnetic domains (or magnetic polarons) so as to minimize the free energy. At the transition point the magnetic polarons align to give a ferromagnetic phase whose size is of the order of the size of the system. At this point the decrease in kinetic energy due to electronic delocalization equals the increase in the entropic contribution to the free energy, as given below. [26]
-6Nδ t_DE e^{-θ coth(βω_0/2)} ≈ -N k_B T_C ln 4. (29)
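A minimal numerical sketch of this mean-field step is given below. The Brillouin-function form of the self-consistency equation is our assumption (the displayed equation did not survive extraction), but it reproduces the quoted relation λgµ_B = 1.2 k_B T_C for S = 3/2; the field value used in the example is purely illustrative.

```python
# Minimal sketch: solve the S = 3/2 mean-field magnetization self-consistently,
# assuming the standard Brillouin-function form m = B_S([1.2*S*m + h]/t_red),
# which reproduces lam*g*muB = 1.2*kB*T_C as m -> 0 at T = T_C.
import numpy as np

S = 1.5  # localized core spin, S = 3/2

def brillouin(S, x):
    """Brillouin function B_S(x)."""
    a = (2.0 * S + 1.0) / (2.0 * S)
    b = 1.0 / (2.0 * S)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def magnetization(t_red, h=0.0, n_iter=200):
    """Scaled magnetization m = M/M_S at reduced temperature t_red = T/T_C.

    h is a dimensionless applied field g*muB*S*H/(kB*T_C), illustrative only."""
    m = 0.9  # initial guess, iterated to the self-consistent value
    for _ in range(n_iter):
        x = (1.2 * S * m + h) / t_red
        if x == 0.0:
            return 0.0
        m = brillouin(S, x)
    return m

print(magnetization(0.5))         # well below T_C: m near saturation
print(magnetization(1.05))        # above T_C with H = 0: m ~ 0
print(magnetization(1.05, 0.05))  # a small field induces a finite m above T_C
```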
V. RESULTS AND DISCUSSION
In the doping regime where the manganites are insulating (i.e., δ ∼ 0 or δ > 0.5), although there is orbital order and both orbitals enter the hamiltonian, in the doping regime 0.2 < δ < 0.4 where CMR is observed only one orbital need be considered. As described in a recent interesting paper by Khomskii [27], the manganite system tries to lower its overall energy by entering into a ferromagnetic orbitally ordered state with the same orbital being occupied at each site. The situation is similar to what is encountered in Nagaoka type of ferromagnetism in spin systems. When doped with a few holes, just as an antiferromagnetic spin state can become ferromagnetic so will an orbitally aniferromagnetic state become an orbitally ferromagnetic one. In Ref. [27], it is also pointed out that at sizeable doping a state with d x 2 −y 2 or d z 2 ordering may have lower energy than the proposed state where only d z 2 ± id x 2 −y 2 orbitals are occupied.
Based on the experimental results of Lanzara et al. [20] we will now try to justify that only one of the orbitals d z 2 or d x 2 −y 2 is occupied and that the orbital ordering temperature is higher than the magnetic transition temperature T C (which is possible because there is no reason to expect coupling between the order parameters for the magnetic transition and the orbital ordering transition). In Ref. [20], in Fig. 4 we see that only one type of distortion of the octahedron (the so-called Q 3 normal mode) seems to be relevant both above and below T C . However, there are two distortions of this same type (at sites A and B) above T C but their degree of distortion is different. We think that it indicates that the lattice distortion is less in the ferromagnetic domains (site A) that exist even above T C and is similar to the distortion (again of the same Q 3 type) in the ferromagnetic region below T C at T < 200 K. Moreover, in the paramagnetic regime (site B) the distortion is expected to be more in our picture because the effective hopping integral t DE is smaller in this regime. Furthermore, from Fig. 2 of Ref. [20] we see that even at T = 300 K (> T C = 240 K) only Q 3 mode exists which prompts us to conclude that the orbital ordering probably occurs at a fairly higher temperature than T C . It is also of interest to note from Fig. 3 of Ref. [20] that EXAFS probes instantaneous and local distortions that are larger than the ones observed from diffraction experiments. This may explain why other probes (like neutron scattering) do not show noticeable JT distortion at low temperatures.
Our magnetization curves M/M S as a function of the reduced temperature T /T C [see Fig. 1] are independent of the values of the various parameters of the system like t, δ, ω 0 , and g 0 because of the mean-field nature of the approximation. The qualitative behavior of the experimental curves is mimicked by our calculations but the experimental values of M/M S rise faster with T /T C (see Ref. [15]).
The peak in the resistivity occurs when the system goes from insulating behavior to a metallic behavior (however the peak need not occur exactly when the hopping mobility becomes equal to the band mobility). When the system becomes metallic the system also becomes ferromagnetic because the itinerant electrons polarize the localized spins. Thus we can take the metal-insulator transition point as also the magnetic transition point -a fact borne out by experiments (see Ref. [15]).
From the expression for the resistivity (see Eq. (15)) it follows that for a given value of t/ω 0 , the ratio k B T C /ω 0 is fixed and one need not treat ω 0 as a variable when studying resistance dependence on various parameters.
We will now discuss the resistivity given by Eq. (15). The conduction goes from a hopping type at high temperatures to a band type at low temperatures. In Fig. 2 we have shown the dependence of resistivity ρ on temperature at various magnetic fields. The values of the hopping integral t are taken such that the bandwidth lies in the range 1 eV − 3.5 eV which is a realistic range based on band structure calculations. The values of g 2 0 are taken from the experimentally obtainable activation energy (θω 0 /2) values corresponding to temperatures in the range 1000 K − 2500 K (see Refs. [1,12]) while the chosen value of the Debye temperature T D = 500 K is realistic too (see Ref. [1]). The general trend of the resistivity including the drop at the MI transition at H = 0 T is similar to the experimental results. [15] On introducing a magnetic field the system gets magnetized at temperatures higher than T C and thus the value of θ is smaller (see Eq. (26)). Consequently the resistivity is smaller and T ρ max (the temperature at which resistivity becomes maximum) increases.
For T ≥ T C , when D band /D hop >> 1 the magnetoresistance ∆ρ(H) ≡ (ρ(0) − ρ(H))/ρ(0) is given by [see Eqs. (15) and (26)] and when D band /D hop << 1 it is given by For βω 0 /2 < 1, on taking csch(βω 0 /2) ≈ 2/(βω 0 ) and tanh(βω 0 /4) ≈ βω 0 /4, if θβω 0 /2 > 1 the following can be shown: (i) T ρ max M increases as g 2 0 decreases (or t 2 /ω 2 0 increases); and (ii) for fixed values of g 2 0 and t 2 /ω 2 0 and for large enough (z+1) , T ρ max increases as M increases. The above observations are borne out by the numerical results reported in Table I where the empty boxes correspond to cases where our approximation may not be good. We further note that for the same value of the polaron size parameter t/(ω 0 g 2 0 ) the magnetoresistance ∆ρ(H) increases as the adiabaticity parameter t/ω 0 increases. In Fig. 3, we plot the scaled optical conductivity Reσ(ω)/σ(0) [given by Eq. (27)] as a function of the frequency at different temperatures. The maximum of the optical conductivity occurs at ω ≈γ 0 β as expected from the formula in Eq. (27). We note that as the value of the renormalized electron-phonon coupling parameter θ increases, the optical conductivity curve spreads out more. We also find that, as the temperature increases the value of the scaled optical conductivity decreases. Furthermore, the calculated scaled curves are in qualitative agreement with experiments. In the experimental situation there are two pieces to the conductivity -one coming from transitions with electrons parallel to the core spins (ω ∼ t) and another at higher energy (ω ∼ twice the Hund's coupling energy) involving transitions to states where the electron spins are antiparallel to the core spins. However, since we do not allow for double occupancy at any site, the second piece of the optical conductivity does not appear in our calculated curves.
In conclusion, we say that we showed the importance of finite bandwidth effects in understanding CMR within a small polaron picture. In addition to the polaron size parameter studied by other authors [8], we have also identified another dimensionless parameter (the adiabaticity parameter) and demonstrated its importance. The values of magnetoresistance calculated by us compare favorably with the experimentally reported ones.
ACKNOWLEDGMENTS
The author would like to thank C. S. Ting, T. V. Ramakrishnan, H. R. Krishnamurthy, V. Pai, D. D. Sarma, Ram Seshadri, Jinwu Ye, G. F. Giuliani, and A. N. Das for discussions. The author would also like to acknowledge useful discussions with D. Khomskii regarding the validity of considering only one orbital in the doping regime where CMR occurs. This work was partially supported by Texas Center for Superconductivity and a grant from Texas ARP (ARP-003652-0241-1999).
APPENDIX A:
In this Appendix we will derive the band narrowing due to phonons. The number of phonons in each state q remains unchanged in a diagonal transition. If this is the dominant mechanism, then phase coherence is maintained and the electron propagates as a band-like particle. Using the decoupling scheme c † i c j X † i X j = c † i c j X † i X j one obtains the single particle energy to be where | R δ | = a. We will now proceed to derive the above expression. Now where λ q = e i q· Rj 1 − e i q· δ g q N 1/2 . Then the thermal average is given by On noting that with a l |n = n! (n − l)!
we get the following relationship where L n (x) is the Laguerre polynomial. Since the following identity holds we obtain from Eqs. (A3) -(A7) with N q being the Bose-Einstein distribution function. Then for ω q = ω 0 and g q = g 0 we obtain Eq. (A1).
APPENDIX B:
In this Appendix we will calculate the conductivity within the small polaron picture in the hopping regime.
In the above equation, the dominant contribution is obtained when j = j ′ and δ = δ ′ . The first correlation function in the above equation can be approximated by where f p is the Fermi-Dirac distribution function. Now X † j (t)X j+δ (t)X † j+δ X j = Tr e −βH e iHt X † j X j+δ e −iHt X † j+δ X j Tr e −βH = Π q ∞ n q =0 n q |e −βn q ω q e −|λ q | 2 /2 e λ ⋆ q a † q e iω q t e −λ q a q e −iω q t e −|λ q | 2 /2 e −λ ⋆ q a † q e λ q a q |n q ∞ n q =0 n q |e −βn q ω q |n q = Π q (1 − e −βω q )e −|λ q | 2 (1−e −iω q t ) ∞ n q =0 e −βn q ω q n q |e λ ⋆ q a † q (e iω q t −1) e −λ q a q (e −iω q t −1) |n q = Π q e −|λ q | 2 [(N q +1)(1−e −iω q t )+N q (1−e iω q t )] , where, to obtain the last line, use has been made of the fact that e −βn q ω q n q |e u ⋆ a † e −ua |n q = e −|u| 2 N q . | 2019-04-14T01:56:25.849Z | 2000-10-14T00:00:00.000 | {
"year": 2000,
"sha1": "fce30f299f2b6ee6733427a1d9293264d95f1ff6",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "7a4ac24cfe520f4d19836327c07146974ddc65fd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237975433 | pes2o/s2orc | v3-fos-license | Analysis of the representation of key topics of sustainable development in subject educational standards of Kyrgyzstan
Education for Sustainable Development is an area of activity that aims at fostering responsible members of society. The very process of personality formation within the educational environment cannot be divorced from the use of standardized technologies, procedures, techniques and teaching methods, the designation of significant areas of knowledge and the developed methods for assessing the results of the development of competencies. Subject standards, as part of the country’s educational standard, are designed to coordinate the process of assimilating meaningful information, developing skills and acquiring skills to meet the needs of representatives of the current generation, without putting at risk the capacity of generations to come in meeting their own requirements. The study analyzes the current educational standards of the Kyrgyz Republic from the point of view of comparing the approaches of standard environmental education and Education for Sustainable Development. The features of the implementation of an interdisciplinary approach in the study of economic, social and environmental aspects are revealed. Steps are proposed to improve subject standards in order to ensure both information sufficiency and the formation of behavioral practices consistent with the key accents reflected in the conventions on climate change, combating desertification and biodiversity.
Introduction
The need to implement a sustainable development program is no longer challenged in the world. Sustainability presupposes, on the one hand, the sufficiency of the material, human, ecological, and social resources needed to allow humans to meet basic needs and to support continued development and, on the other hand, the fairness of access to those resources for both current and future generations. But for these postulates to be implemented, it is necessary to adopt the most effective forms of creating, accumulating, exchanging, and teaching information on sustainable development. The main questions facing the entire education system are why Education for Sustainable Development (ESD) is necessary, what methods and forms of its implementation exist, and who will implement it and how, taking into account age and cognitive characteristics. SDG 4.7 has identified the need to ensure by 2030 that all learners acquire the knowledge and skills necessary to promote sustainable development, using a balanced and integrated approach to its economic, social and environmental dimensions [1]. On the one hand, SDG 4.7 is focused primarily on children and adolescents in formal education, and the proposed global key topics are mainly addressed to them. On the other hand, this goal is closely related to the concept of lifelong learning, since it does not mention specific levels of education or age groups to which it applies.
From this point of view, education systems in different countries understand the need for an integrated interdisciplinary approach to mastering the key topics of the SDG, as well as to finding the most effective teaching practices. The curricula of the disciplines, from this point of view, are the main way of transferring knowledge and skills necessary to promote sustainable development and education of global citizenship, and the analysis of various practices of ESD implementation allows to justify the most effective solutions and adapt the most effective practices in the country context.
For example, an analysis of how far the ideals of SDG 4.7 are embodied in policies and curricula across 22 Asian countries provides compelling evidence that we must redefine the purposes of schooling, addressing the fundamental challenges to efforts to promote peace, sustainability and global citizenship through education [2,3].
Of considerable interest is the experience of Greece and Italy, where the implementation of energy saving education in the school environment was based on the use of gamification, competitions and educational events based on the Internet of Things [4]. Another approach, based on the ClimACT methodology, has been applied in pilot schools in Portugal, Spain, France and Gibraltar. At the heart of this approach, on the one hand, is the environmental measurement benefits based on the calculation of the school sustainability index (based on the comparison of measurable indicators in the areas of transport, procurement, energy and water conservation, waste reduction, air quality and greening in school premises), on the other hand, encouraging students, teachers and families towards an energy-efficient and low-carbon pathway through a structural procedure based on roles, activities and progress evaluation [5].
It is difficult to overestimate the importance of project activities in improving environmental awareness. Green projects implemented in schools in Indonesia introduce sustainable development practices in order to develop creativity and flexibility of thinking and to increase the effectiveness of learning, since they are based on real work such as products that utilize waste, articles, art training and so on [6]. In the same country, a comprehensive study of real-life environmental knowledge application practices made it possible to establish a cross-correlation between active project-based EE learning and both the reduction of waste and the growth of greening in the area supporting the project. The results even allow the researchers to recommend that knowledge aimed at environmental education and support for sustainable development be singled out as a separate discipline with a compulsory examination at the end of school education [7].
Of no less interest to researchers is the possibility of a comparative juxtaposition of the application of different forms of implementation of education for sustainable development. Methods and techniques for the implementation of ESD, ranging from the use of educational games [8], game forms of waste processing [8], the maintenance of an ecological or school garden [10] and many others [11,12].
Another area of research reveals a significant problem -the inevitability of a collision of schools striving to introduce frameworks of education for sustainable development with contradictions between their own aspirations and external demands [13,14,15].
All of the above allows us to formulate an understanding not only about the teaching areas and key topics within the framework of the ESD goals, but also made it possible to substantiate the need to revise the existing educational standards of subject education in schools of the Kyrgyz Republic for relevance and compliance with objective 4.7. SDG and general sustainability issues.
Results and Discussion
An essential factor in identifying the conditions for the transition to sustainable development is the spatio-temporal heterogeneity of the territory of the state, which depends on the structure of the surface and the geographic location. Geomorphologically, Kyrgyzstan is a country with a complex, rugged, highly dissected mountainous relief that occupies more than 60% of its territory. About 40% of the territory is practically unsuitable for economic activity: glaciers, eternal snow, rocks, talus, high-mountain rubble deserts, deserts and mountain steppes that stretch for several tens of kilometers, etc. The mountainous relief predetermines the formation of climatic zones with vertical zonation, which in turn determines the diversity and fragility of the mountain ecosystems formed in these zones. Due to the high complexity and contrast of the geographic environment, there is a very high concentration of species in the country: per unit area there are roughly twice as many species as the average for the planet and for Central Asia. It should be understood that the rational and sustainable use of the peculiarities of its territory and geographic location accelerates the achievement of relative independence of the country's welfare from the geographic environment. Greater or lesser isolation from the natural environment is achievable only in a limited space and for a limited time. The environment of the Kyrgyz Republic is determined not only by its climate, relief and subsoil but, first of all, by the living natural communities that maintain ecological balance. Kyrgyzstan is home to about 2% of the world's flora and more than 3% of the world's fauna [20]. Awareness that the preservation of biological diversity is a common task of humanity, and that the extinction of species and the disappearance of natural ecosystems are a serious threat to well-being and to the existence of life on Earth, prompted the governments of many countries of the world to conclude the Convention on Biological Diversity in Rio de Janeiro (Brazil) in 1992 [19].
Desertification poses another danger: it is related both to changing climatic conditions and to human activities. Excessive and haphazard use of pasture resources (which make up more than 85% of the total area of agricultural land), predominantly for natural haymaking and grazing, has led to the degradation of the soil and vegetation cover of pasture zones. More than 50% of pasture land (4.5 million ha) is classified as degraded, and pasture yields have halved over 15-20 years. In addition, as a result of a low culture of irrigation and the absence of the necessary drainage systems, salinization of irrigated land is occurring. The salinization problem affects 20% of the cultivated land in the lower reaches of the Chuy and Fergana valleys.
Finally, climate change is a global problem linked both to natural factors and to human activities, in particular to emissions of greenhouse gases.
To carry out the reform, including in the field of education in the Kyrgyz Republic, the following documents were adopted: the National Development Strategy of the Kyrgyz Republic for 2018-2040 [21], the Concept for the Development of Education until 2020 and the Strategy for the Development of Education for 2012-2020 [22], and the Strategy for the Development of Education in the Kyrgyz Republic for 2021-2040 (the draft has been submitted for public discussion) [23], as well as action plans for the implementation of the Strategy for the Development of Education.
The goal of the first action plan was stated as "Establishing the foundations for quality results-oriented education". The fulfillment of this goal in terms of the reform of the educational content implied the development and adoption of new generation standards (curricula), which would make it possible to clearly formulate the expected results of education and, thus, reliably measure the quality of education. The national curriculum presupposes: the transfer of the entire system to the principles of results-oriented education, expressed in a set of student's competencies, ensuring the readiness of the individual for successful activities in the social and professional spheres; "End-to-end" implementation of the stated goals at all levels of the system; a gradual transition to the integration of subjects within the selected 7 educational areas and between them, which will contribute to the formation of a holistic picture of the world among students and will ensure a decrease in the load on students without losing the quality of knowledge. The educational areas are the same for all levels of study and each of them is implemented throughout all levels of study.
To develop new educational outcomes, taking into account Education for Sustainable Development (ESD), educational standards and programs of the relevant subject areas were analyzed for the presence of sustainable development issues and Global Environmental Conventions (including Biodiversity Conservation and Climate Change). Desertification issues are not mentioned in the standards and programs.
An analysis of the state standards of subject education in schools of the Kyrgyz Republic [16,24] shows that ecological issues and individual aspects of sustainable development are partially presented within the following school disciplines:
• Homeland Studies (grades 1-4) - 1
• Physics (grades 7-11) - 2
• Geography (grades 6-11) - 3
• Natural Science (grade 5) - 2
• Chemistry (grades 8-11) - 5
• Biology (grades 6-11), including "Botany", "Zoology" and "General Biology" (grades 10-11), where ecology is given 8-10 hours - 6
• Labor Training (grades 5-9) - 7
• History of Kyrgyzstan and World History (grades 10-11) - 8
• Human and Society (grades 10-11) - 9
An analysis of how the key themes of sustainable development are represented in these school disciplines gives the following picture (Table 1). Nevertheless, it should be noted that only a small part of the issues related to climate change or biodiversity conservation is presented in the curricula of schools of the Kyrgyz Republic. In addition, these issues are analyzed not comprehensively but in a fragmented way, in separate topics, since the traditional approach to teaching continues to dominate in schools, which also affects the quality of student preparation.
An example is the geography of Kyrgyzstan (grades 8-9), which includes the topic "Rational use and protection of natural resources. Environmental problems in the territory of Kyrgyzstan. Reserves, sanctuaries, national parks, biosphere reserves, unique places and complexes. Strategic directions for nature protection of the Kyrgyz Republic".
Also, these questions were studied in the course of biology. In the course "Man and Society" they are reflected in the topic "Global problems of our time." However, at present it is necessary not only to include new content lines of education for sustainable development in the educational programs of schools, but also to revise the content of topics already integrated into the standards and programs at the previous stage.
In order to take the study of these problems to a new level, it is necessary to consider them as part of education for sustainable development. A comparison of the approaches of traditional and ESD education is given below ( Table 2).
Table 2. Ecological basic education versus Education for Sustainable Development:
• Ecological basic education: the cause of the ecological crisis is declared to be a certain abstract planetary man who experiences insufficient "love" for nature or destroys it due to a lack of ecological knowledge. ESD: reveals and discusses the specific social, political, economic and other reasons leading to the destruction of nature and the destabilization of society.
• Ecological basic education: usually focuses on the environmental impact on society of pollution, wastewater, emissions from cars, enterprises, etc., on their causes, consequences and opportunities for their reduction, as well as on caring for and protecting nature. ESD: more often focuses on the efficient use of natural resources and the importance of their renewability (sustainability); the negative environmental impact is primarily seen as a consequence of the irrational use of resources.
• Ecological basic education: sets the vector of concern for "future generations" in general and rather at a theoretical level. ESD: speaking about future generations, it goes further and raises specific and practical questions - in what form, and what exactly, will we leave to them?
• Ecological basic education: focuses exclusively on environmental issues and, to solve them, offers mainly "technical" methods - pollution control, improvement of laws, reduction of resource consumption. ESD: considers environmental problems in their close connection with economic and social issues, seeking to offer comprehensive ways to solve them; in conditions of war, poverty, hunger, etc., environmental sustainability cannot be achieved.
• Ecological basic education: motivates students to preserve nature by saying that otherwise "things will only get worse", and basically puts an emphasis on prohibitions - it indicates "what is not allowed" but says very little about "what is allowed". ESD: while pointing out risks, it offers a positive perspective - sustainable development is not only a matter of survival, but also a way to improve our quality of life.
• Ecological basic education: the entire "burden" of implementation falls on the teacher who teaches the subject of ecology. ESD: responsibility for implementation is shared among the entire school community - everyone should contribute.
Therefore, in the action plan for 2016-2017 on the implementation of the strategy for the development of education, the goal was specified as "Creation of the foundations for quality education, result-oriented and a gradual transition to education for sustainable development." In the new subject standards approved by the collegium of the Ministry of Education and Science of the Kyrgyz Republic on September 15, 2015 [24], an attempt is made to consider these issues in a comprehensive manner.
So, in the standard of the course "Natural Science" (grade 5), such questions for study are indicated as "Human needs and the environment. Changes in wildlife due to human economic activities. Environmental protection. The role of nature reserves, botanical gardens, etc. in the protection of flora and fauna ".
Issues related to the conservation of biodiversity and the problem of climate change are also included in the standards for the courses "Geography", "Biology", "Man and Society", "Physics", etc.
For example, in the new standard for geography, one of the content lines is "The Nature of the Earth". In the 8th grade, within this content line, the topic "Anthropogenic changes in landscapes" is studied; in the 9th grade, "Human adaptation to the natural environment. Protection of the population from natural hazards. Rules of conduct in extreme conditions".
Another content line, "Ecological, economic and social development of the modern world", includes such questions as, in the 6th grade, "Earth climate change. Stable climate and unstable climate"; in the 7th grade, "The main climatic feature of recent decades" and "The impact of climate change on human health, nutrition and living conditions"; in the 8th grade, "Preservation and rational use of mountain ecosystems and biodiversity of Kyrgyzstan", "Problems of desertification", "Changes in the Earth's climate and the country's climate", "Greenhouse effect: the rise in temperature in the world and in Kyrgyzstan" and "The problem of the growth of hydrometeorological phenomena"; in the 9th grade, the problem "The impact of climate change on the economy" is also analyzed. Thus, an attempt has been made to carry the issues regulated by the above conventions through the entire basic school (grades 5-9).
In recent years, the problem of climate change has become one of the most pressing topics in ESD. Experts around the world are studying the causes and consequences of greenhouse gas emissions, which are a major factor in global warming, along with a decrease in the ability of natural ecosystems to regulate the Earth's climate. The international community is taking measures against the increase in greenhouse gas emissions from transport, industrial enterprises and the domestic sector.
Therefore, the inclusion of these issues in the standards of all classes can be considered a significant step forward.
In the standard for biology within the content line "Man and his environment", for example, in the 7th grade, the problem "Preservation of diversity, protection of ecosystems" is indicated.
However, since programs and textbooks (educational and methodological complexes) have not yet been developed, it is not possible to analyze in terms of content those issues that are stated in the standards.
However, an analysis of the standards shows that the consideration of convention-related issues in geography and biology is inconsistent. The same can be said about other subjects.
In addition, as noted above, in modern textbooks the concept of diversity and its importance for the development of the world is not always spelled out in terms of sustainable development. Thus, the presentation of the biology course is still based mainly on a description of species diversity, while it is necessary to move on to an ecosystem principle of presentation. It is necessary, first of all, to convey the idea that it is biodiversity that is the "Fabric of Life", of which we are an integral part and on which we are completely dependent. It is the ecosystem, and not a separate living organism or even a species, that is a real, time-stable unit of Life, since no species can exist outside of its connection with other species and its habitat. And only this whole set, that is, the natural community, can produce and sustain Life as such. The works of the Russian scientist and ecologist V. Gorshkov and his theory of biotic regulation of the environment can serve as substantive pillars here. Particular attention in this theory is given to the concept of "natural ecosystems", since they have the maximum potential for biotic regulation and maintenance of the conditions for the existence of Life on the planet. This approach will also provide an opportunity to harmonize convention-related topics across all subjects and to teach them in an integrated manner, either through integrated topics or at least by making full use of interdisciplinary connections. It is the most important resource and also a way to learn about convention issues. It is also important to note that, for the first time in school education, the problems of all three global environmental conventions should, according to the standards, be presented. Therefore, it is important to pay attention to the development of programs and textbooks so that they reflect the issues specified in the standard.
The content lines of ESD that need to be included in school curricula, taking into account the specifics of Kyrgyzstan, also include the following topics:
• Conservation and rational use of mountain ecosystems (including pastures, agroecosystems, etc.);
• Biodiversity of Kyrgyzstan;
• Environmental human rights (taking into account the specifics of the environmental legislation of the Kyrgyz Republic);
• Prevention of natural disasters and reduction of the risk of natural disasters.
Education for sustainable development is a change in approaches to education: for the teacher, from the transfer of knowledge to the creation of conditions for active learning and practical experience for children; for students, from passive assimilation of information to an active search for it, critical reflection, use in practice, communication and activity; for the school, a change in the policy of managing the team of adults and children, using resources, and organizing interaction with the local community. An important content line of liberal arts education is seen as the student's acquisition of the skills of self-determination and self-identification, so that at the end of the humanitarian cycle at school a person has specific "tolerance-oriented" ways of interacting both within his own culture and with other cultures.
In general, it should be noted that the integration of these content lines into school curricula is mandatory for the further development of education in Kyrgyzstan and its expansion to the framework of Education for Sustainable Development.
Conclusion
1. The substantive lines of Education for Sustainable Development will make it possible to produce a graduate who adapts effectively and is functionally independent in modern conditions of globalization and climate change. Education for SD, expressed through the content of education and the methods of its organization, helps students to accept values and develop knowledge and skills that will give them the opportunity in the future to choose individual and collective decisions (local and global) that improve the quality of life without threatening the future of the planet.
2. Kyrgyzstan has a regulatory, legal and methodological framework for the implementation of education for sustainable development in school education. However, the materials that would make it possible to implement the issues of sustainable development and the Rio Environmental Conventions in the education system are clearly insufficient, and there are no educational and methodological complexes in which materials for studying the conventions are set out from a modern standpoint and in sufficient volume.
3. The content of the themes of the three global Rio Conventions is extremely poorly reflected in the secondary education system of the Kyrgyz Republic, and the consideration of problems related to the conventions is inconsistent. Although the issues of climate change and biodiversity are raised, the focus is shifted towards taxonomy and species rather than an ecosystem approach, and topics are presented without attention to the practices of rational nature management, ecosystem conservation and community adaptation.
4. The programs of the secondary school in the natural science field contain no explicit references to the themes of the UN Global Environmental Conventions; nevertheless, in a number of cases topics that are similar in content are considered and described more broadly, which gives teachers the opportunity to integrate the themes of the Rio Conventions into the curriculum. Despite the fairly large opportunities for integrating this content within the framework of educational programs, these topics are covered extremely poorly in educational materials.
5. Materials on the Environmental Conventions and sustainable development have been developed and published; however, to ensure their effective use it is necessary to: systematize the available materials and create electronic access to a library of materials on sustainable development; define the list of questions not covered by the existing materials and develop guidelines and visual materials on these issues for both teachers and students; use the results of the above analysis for the development and adoption of new-generation subject standards; develop indicators and a methodology for assessing the representation and adequacy of information on the teaching scopes in educational materials within the framework of the goals of Education for Sustainable Development; and develop a methodology for assessing the effectiveness of the teaching materials used and the forms of educational activity.
6. The analysis of the standards shows that the ultimate goals of teaching school disciplines have not yet been determined, and no requirements have been formulated that would allow the competencies formed by these means to be described and measured. It is important to pay attention to interdisciplinary connections, the socio-economic dimension postulated in the conventions, conservation practices and safety mechanisms (health-climate, health-biodiversity links). The 2014 standards did not solve the problem of integrating school disciplines into the subject/educational area, which does not contribute to the promotion of the ideology of sustainable development as a generalizing concept of human activity.
7. Until now, a system for assessing student achievement in terms of education for sustainable development has not been developed at the country level. | 2021-08-27T17:09:43.201Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "71ece0f77a68c799b112ef79c48388c802995180",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/72/e3sconf_esmgt2021_08015.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dccb32d6349b9c48ad3cfe4b698a7a4f89664d9c",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
67065387 | pes2o/s2orc | v3-fos-license | Modeling and Simulation of Microgrid Modeling and Simulation of Microgrid
Complex computer systems and electric power grids share many properties of how they behave and how they are structured. A microgrid is a smaller electric grid that contains several homes, energy storage units
Introduction
The current electric power grid suffers from aging in both the developing and the developed world. The results of an aging grid and old infrastructure become more pronounced as the number of power outages increases. Old equipment is prone to failures, and old engineering planning and operation methods are ineffective in tackling current challenges. To better deal with current challenges, a paradigm shift is needed. Recently, concepts from systems engineering have been adapted to upgrade the electric power grid. That is, the electric grid can be treated as a complex system. A complex system is a large collection of interacting elements that act together to perform an overall nonlinear activity or task. A complex system is not centralized but distributed and self-organized. This paper investigates various models of microgrid components and treats them as a complex system.
System of Systems (SoSs) Definition
A system of systems is a relatively new concept in systems engineering and is becoming a hot topic for researchers in different fields. Despite the fact that this concept is in its early stages, it has achieved widespread use, such as in real-time systems and hardware-in-the-loop simulations [1]. It was once restricted to two main domains, defense and information technology, but nowadays it has entered a wide variety of different domains. Although there are different definitions of SoSs, the most general one is that SoSs are large-scale integrated systems that are diverse and autonomous but work together to achieve a common goal [2]. The main reason for introducing this concept is to improve either economy or performance. SoSs consist of employable heterogeneous subsystems. The subsystems can work independently, and no subsystem has power over another. However, subsystems are connected to communicate, transmit tasks, and achieve an overall mission. Some characteristics distinguish SoSs from a complex monolithic system; they are listed in Table 1.
Evolutionary development: the overall system is not monolithic; instead, it is flexible enough to add new subsystems.
Emergent behavior: all subsystems work as a collective unit to accomplish a larger task.
Geographic distribution: the subsystems are sequentially distributed to facilitate the flow of information.
Managerial independence: the subsystems are in control of their own operation.
Microgrid as SoSs
Figure 1 shows an example of a microgrid containing renewable energy sources. The renewable energy sources are integrated into a dc bus through power electronic interfaces [3][4][5][6]. One of the most important goals of a microgrid is to be able to work with various types of renewable sources and meet the load demand in the case of outages. The subsystems can communicate with each other to achieve the desired goal [7].
Microgrid System Modeling
A complex system can be any system that contains a large number of elements and has distinguishing features such as a large number of interacting agents, self-organizing collective behavior, decentralization, openness, and nonlinearity between input and output. The central properties of complex systems are the elements and their number, the interactions and their strength, the time scale of operations, variability, the environment and its demands, and the activities and their objectives. Attributes of complex systems are interdependence, independence, distribution, cooperation, competition, and adaptation. Many large-scale systems are built from components, such as internet networks, global satellite networks, enterprise information systems, and the electric power grid. There are many similarities between electric grids and complex computer systems, so microgrids can be treated as SoSs.
Battery System modeling
A storage system is a vital element in the microgrid. It operates in the case of an electricity blackout, and it mitigates the variability of renewable energy sources. Therefore, it is usually placed between the renewable sources and the load to help the generation match the load demand at any moment, and in doing so the stability of the system is assured. The size of the battery storage is important, and detailed calculations should be made to meet the demand when power from the electric grid is not available. The required battery capacity is determined by E load, the load that needs to be supplied during the unavailability of power in ampere hours; Days off, the storage days (the days that power from the electric grid is unavailable); DoD max, the maximum depth of discharge of the battery; and η temp, the temperature correction factor. At a high penetration rate, fluctuating sources such as wind generation can cause a problem with balancing the system. These sources are not dispatchable and cannot be seen as a negative consumer. Control in this case might become very challenging. Knowing the charge quantity and setting it as a control input can mitigate the challenges associated with renewable energy integration. The charge quantity of the storage system is expressed in terms of ζ, η inv and η batt, the hourly self-discharge factor, the efficiency of the inverter, and the efficiency of the battery, respectively; E B (t) and E B (t-1), the charge quantities of the storage system at times t and t-1; and E GA and E L, the renewable energy power and the load demand, respectively. The charge quantity is constrained by the maximum and minimum charge quantities E Bmax and E Bmin [8].
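The two storage relations described above can be illustrated with a short sketch. This is a hedged, minimal Python illustration rather than the paper's own equations (which did not survive extraction): the sizing expression, the self-discharge factor, the efficiencies, and every numeric default are assumptions chosen only to show the bookkeeping.

def battery_capacity_ah(e_load_ah, days_off, dod_max, eta_temp):
    # Required capacity grows with the load and the autonomy days, and shrinks
    # with the allowed depth of discharge and the temperature correction factor.
    return e_load_ah * days_off / (dod_max * eta_temp)

def update_charge(e_b_prev, e_ga, e_load, zeta=0.0002, eta_inv=0.95,
                  eta_batt=0.90, e_bmin=0.0, e_bmax=1000.0):
    # E_B(t) from E_B(t-1): apply hourly self-discharge, then add the net energy
    # left after supplying the load through the inverter, weighted by battery efficiency.
    surplus = e_ga - e_load / eta_inv
    e_b = e_b_prev * (1.0 - zeta) + surplus * eta_batt
    return min(max(e_b, e_bmin), e_bmax)  # enforce E_Bmin <= E_B(t) <= E_Bmax

print(battery_capacity_ah(e_load_ah=400.0, days_off=2, dod_max=0.8, eta_temp=0.95))
print(update_charge(e_b_prev=500.0, e_ga=120.0, e_load=80.0))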
Load Modeling
Modeling electric load is a very challenging task. The behavior of the electric load depends on the energy consumption of various devices that are turned on and off either automatically, like air-conditioning devices, or manually, like a hair dryer. Often, the electric load is modeled using a constant electric impedance for the sake of simplicity. However, the load can be modeled using machine learning algorithms or artificial neural networks if more accuracy is required.
There are usually demand peaks at various times of the day. The peaks are sharp during weekdays because of air conditioning and other high-power devices. Modeling the load demand can be simplified as active and reactive power. The values of the apparent power components P and Q are usually pre-set for the sake of simplicity. In this paper, the load was modeled and considered stochastic. Therefore, load profiles were generated using a feedforward neural network, as explained in section 5.
Modeling of Photovoltaic System
The physical model of solar irradiance on an inclined surface is expressed in terms of I b, the normal solar irradiance; I d, the diffuse solar irradiance; and the parameters R d and R r, the tilt factors for the diffuse and reflected parts of the solar irradiance. The sun's position in the sky is the main factor that total solar irradiance depends on [9]. There are many models for hourly output PV power, expressed in terms of η r, the reference efficiency of the module; η pc, the efficiency of power smoothing and conditioning; γ, the cell density factor of the module (also called the packing factor); κ, the temperature coefficient of the array; A PV, the photovoltaic area; T a, the instantaneous ambient temperature; T r, the reference temperature; T c, the monthly temperature; and NOCT, the normal operating cell temperature, for which T a,NOCT is 20°C and the irradiance is 800 W/m2 at a wind speed of 1 m/s. The equivalent circuit of the solar panel is shown in Fig. 2. Solar irradiance can be forecasted by statistical methods such as the autoregressive moving average, or by machine learning algorithms such as the support vector machine. In this paper, solar irradiance was modeled using a feedforward neural network.
Fig. 2. PV equivalent electric circuit
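As a rough illustration of the kind of hourly PV model referred to above (the paper's own expressions were lost in extraction), the sketch below uses a widely cited formulation built from the same quantities: a cell temperature derived from NOCT and an output power scaled by the module, conditioning and packing efficiencies with a linear temperature derating. All constants are illustrative assumptions, not values from this study.

def cell_temperature(t_ambient, irradiance, noct=45.0):
    # NOCT is specified at 20 degC ambient and 800 W/m^2 irradiance.
    return t_ambient + (noct - 20.0) / 800.0 * irradiance

def pv_power(irradiance, t_ambient, area_m2, eta_r=0.17, eta_pc=0.90,
             packing=0.90, kappa=0.004, t_ref=25.0):
    # Efficiency falls linearly as the cell heats above the reference temperature.
    t_cell = cell_temperature(t_ambient, irradiance)
    eta = eta_r * eta_pc * packing * (1.0 - kappa * (t_cell - t_ref))
    return eta * area_m2 * irradiance  # output in watts

print(pv_power(irradiance=800.0, t_ambient=20.0, area_m2=10.0))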
Modeling of Wind Energy system
Wind is a renewable source of energy. Using aerodynamic techniques, one can design a rotor that converts wind into electric power. Although wind power has some advantages, such as energy density and an excellent return on investment, it has some disadvantages, such as the required periodic maintenance, the difficulty of installation, and intermittency. That is, it has a significantly variable output that is difficult to predict and might cause instability in grid operation. To model such a source, we need to model the electric behavior of the wind turbine generator as well as characterize the wind speed to capture its fluctuation.
a-Wind turbine model
Ref [4] gives the model of the power output of a wind turbine. The height and speed characterization of the wind turbine are the main factors determining the output power. The relationship is given by the power-law equation relating V z, V i, Z, and Z i, which are the wind speed at the hub, the wind speed at the reference height, the hub height, and the reference height, respectively. The output power of the turbine generator is determined by V r, the rated speed at which the wind turbine generates maximum power; V ci, the cut-in speed at which the wind turbine generates minimum power; V co, the cut-out speed; and P r, the rated power. Fig. 3 shows the wind turbine characterization. The actual output power of the wind turbine, after considering the losses and the total swept area, depends on η, the efficiency of the wind turbine, and A w, the total swept area [10].
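A minimal sketch of the wind model outlined above: a power-law correction from the reference height to the hub height and a piecewise power curve between the cut-in, rated and cut-out speeds, scaled by an efficiency factor. The shear exponent, the linear ramp below rated speed, and all numbers are assumptions for illustration only, not the paper's parameters.

def hub_speed(v_ref, hub_height, ref_height, alpha=1.0 / 7.0):
    # Power-law profile: wind speed at hub height from the reference measurement.
    return v_ref * (hub_height / ref_height) ** alpha

def turbine_power(v, p_rated, v_ci=3.0, v_r=12.0, v_co=25.0):
    # Zero below cut-in and above cut-out, rated power above rated speed,
    # and a simple ramp in between.
    if v < v_ci or v > v_co:
        return 0.0
    if v >= v_r:
        return p_rated
    return p_rated * (v - v_ci) / (v_r - v_ci)

def actual_power(v, p_rated, efficiency=0.9):
    # Losses are lumped into a single efficiency factor.
    return efficiency * turbine_power(v, p_rated)

print(actual_power(hub_speed(6.0, 50.0, 10.0), p_rated=2.0e6))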
b-Wind speed model
There are many wind speed modeling tools, such as the autoregressive moving average (ARMA), hidden Markov models (HMM), the support vector machine (SVM), and many more. In this paper, a feedforward neural network is used to model the wind speed. More details about the implementation can be found in section 5.
General Modelling Using Neural Networks
Figure 4 shows the overall system modeled using time-series neural networks. Each neural network was trained with labeled data before being plugged into the system. All networks are feedforward networks trained using the backpropagation algorithm. The solar power was modeled using a feedforward neural network with 12 input neurons, 21 hidden neurons, and 1 output, which corresponds to the solar power. The input variables are the solar irradiance, the corresponding time, the humidity, and the temperature. Previous values of solar irradiance were taken into account. The best window size was chosen by trial and error and is three. The model was trained using the Levenberg-Marquardt backpropagation algorithm to update the weight vector. The wind power was modeled using a similar neural network with 15 input neurons, 18 hidden neurons, and 1 output neuron, which is the wind power. The inputs to this network are the wind speed and direction and the corresponding time. Previous values of wind speed and direction were taken into account to predict the next value of wind power. The wind power model was trained using the Levenberg-Marquardt backpropagation algorithm, as in the solar power modeling. Modeling the load demand was the most difficult task because the load demand reflects the human behavior of turning devices on and off, which is highly unpredictable, and feature extraction can be tedious. However, a feedforward neural network with more neurons can represent the data. The inputs of this neural network are the power demand, the weather, and the corresponding time. The network architecture has 15 input neurons, 25 hidden neurons, and one output neuron, which is the load demand. The best window size is also 3. The storage elements were modeled using a neural network with 6 input neurons, 12 hidden neurons, and an output neuron, which is the state of charge of the storage element. The best window size is 2. All of the previous models were trained using labeled data that were preprocessed and cleaned. The data were then divided as follows: 70% of the dataset for training and the remaining 30% split between testing and validation (15% each). All of the hidden neurons use a hyperbolic tangent activation function, which takes values between -1 and 1. This is helpful because some variables have two directions, such as the battery current, where a negative current means the battery is charging and a positive current means it is discharging. The activation function of the output layer is a linear function. Training was stopped early to prevent overfitting and to give the model better generalization. Simulating the microgrid with neural networks allows it to be treated as an SoS, where each source is independent and the system is capable of adding extra sources. All sources perform the overall task, which is the power balance between generation and load demand. Other tasks, such as increasing the economic benefits, can also be achieved by controlling these renewable sources.
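The sliding-window feedforward setup described above can be sketched as follows. This is a hedged illustration using scikit-learn's MLPRegressor rather than the Levenberg-Marquardt training used in the paper; the window size, tanh activation, 25-neuron hidden layer and 15% validation fraction mirror the description, while the synthetic data and every other choice are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, exog, window=3):
    # Inputs: the last `window` values of the series plus exogenous features
    # (time of day, weather); target: the next value of the series.
    X, y = [], []
    for t in range(window, len(series)):
        X.append(np.concatenate([series[t - window:t], exog[t]]))
        y.append(series[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
load = rng.random(500)            # placeholder load series
exog = rng.random((500, 2))       # placeholder time/weather features
X, y = make_windows(load, exog, window=3)

model = MLPRegressor(hidden_layer_sizes=(25,), activation="tanh", max_iter=2000,
                     early_stopping=True, validation_fraction=0.15, random_state=0)
model.fit(X[:350], y[:350])       # roughly 70% of the samples for training
print(model.score(X[350:], y[350:]))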
Figure 4: Microgrid modeling and simulation using neural networks
Control
Several different types of controllers can be found in the literature, as shown in Fig. 5. These control techniques are suitable for working in SoSs. Hierarchical control uses different layers to control the grid. Typically, it consists of three layers: the primary layer, the secondary layer, and the tertiary layer. The primary layer is responsible for load sharing, where droop control is used; it is also in charge of stabilizing the voltage and frequency. The secondary layer is in charge of correcting the primary control errors. The tertiary layer monitors the flow from the utility to the grid and vice versa [7]. In this paper, a primary control was implemented to balance the power between generation and load demand.
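The primary (droop) layer mentioned above can be illustrated with a conventional P-f / Q-V droop rule; the nominal values and droop gains below are arbitrary assumptions for illustration, not settings from this paper.

def droop_setpoints(p_out, q_out, f_nom=60.0, v_nom=120.0, m_p=1e-5, n_q=1e-4):
    # The frequency reference droops with active power and the voltage reference
    # with reactive power, so parallel units share load without communication.
    f_ref = f_nom - m_p * p_out
    v_ref = v_nom - n_q * q_out
    return f_ref, v_ref

print(droop_setpoints(p_out=50_000.0, q_out=10_000.0))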
Case Study
This section illustrates a practical example of a microgrid. The Missouri S&T solar village is a small-scale microgrid which consists of four solar homes tied to the grid. These homes are also backed up by a storage system of two 960 V batteries and a fuel cell. The main aim of this project is to better utilize the energy locally rather than sending it to the main grid. The overall microgrid is owned by Missouri S&T, and the energy supplier is Rolla Municipal Utilities (RMU). Currently, Missouri S&T is responsible for paying for the electricity usage, and the village is occupied by tenants. More information about the smart grid components is listed in Table 2 [11][12]. These components include a natural gas fuel cell, solar generation, a lithium-ion battery, and automated smart switchgear [13]. Figure 6 shows a picture of the microgrid, and Fig. 7 shows the simulation diagram. The solar village is simulated using Simulink with the same parameters listed in Table 2. Different PV data sources were obtained from [14][15][16]. The RMU was considered an ideal source at 13.8 kV and 60 Hz. The transmission line was simulated using a PI-section line with non-ideal components. The rest of the system was considered a constant load with active and reactive power equal to 1 MW and 1 kVAR, respectively. The solar village is connected to the transmission line using a distribution transformer. The distribution transformer is connected to phase A of the transmission line, and it is rated for 50 kVA. The secondary side of the transformer is center-tapped, so there are two phases at the secondary side with 120 V magnitude and opposite polarity. The load profile is the output of a neural network that was trained using data from [17]. The time resolution of the load profile is one minute.
Simulation Results
This section presents the Missouri S&T microgrid simulation. Figure 8 shows the power consumption of each house, the solar power, and the generation from RMU. The usual goal is to control the battery and maximize the performance of the system. However, the battery in this simulation was eliminated, so the system is grid-connected without battery storage. The goal, then, is to reduce the consumption from the power grid by using the available power from the solar panels. The simulation setup in the previous section was run for 24 hours. The output waveforms after completion are shown in Fig. 9. The voltages of phase A and phase B are 120 V, equal in magnitude and opposite in polarity. The figure also shows the currents passing through the distribution transformer, where I A, I B, and I N are the phase A current, the phase B current, and the neutral current, respectively. The power consumption seen by the grid at each house is also shown in Fig. 9, as well as the power loss in the transformer, P transformer. From the waveforms, one can see that the system is balanced and each house utilizes the renewable energy effectively. The voltages are constant and steady, and the currents are within the transformer limits.
Conclusion
This paper presented the modeling and simulation of a microgrid. The microgrid was treated as an SoS and controlled so as to utilize different energy sources. A practical example from Missouri S&T was implemented and simulated. The results show that the microgrid utilized the renewable energy coming from the solar panels and optimally distributed it between the homes. Neural networks were used to model the output power of the microgrid components. Each component was treated as an autonomous system. These autonomous components collaborate to achieve the overall goal, which is supplying the electric load. The Simulink model and results are discussed for a grid-tied microgrid with no storage element. Future work includes simulating Missouri S&T with the battery storage elements and implementing a battery control algorithm.
Table 1: Characteristics of SoSs (Characteristic - Definition). Independence: all subsystems work independently and have no interference with other subsystems.
Table 2: Parameters of the S&T microgrid | 2019-02-17T14:16:43.757Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "eea7a7ca8cf770962af917ed4af44d9a8f956b42",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.procs.2017.09.053",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c4361bb6bb5bfa49bc848633557b0a8fd95b82a2",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
227065178 | pes2o/s2orc | v3-fos-license | Prognostic significance of pre-treatment serum Cyfra21-1 as a tumor marker in patients with oropharyngeal squamous cell carcinoma treated with concurrent chemoradiotherapy
Background Oropharyngeal squamous cell carcinoma (OPSCC) is a kind of squamous cell carcinoma of head and neck, and its incidence is on the rise in recent years. A variety of prognostic markers for OPSCC have been reported in many studies, but they are expensive or difficult to obtain. So, we retrospectively studied the prognostic significance of cytokeratin 19 soluble fragment (Cyfra21-1) in patients with OPSCC, in order to provide theoretical basis for accurate prognosis assessment. Methods A retrospective analysis of the clinicopathological data of 85 OPSCC patients with concurrent radiotherapy and chemotherapy (CRT) admitted from January 2010 to June 2017. Serum Cyfra21-1 levels were measured before treatment. Analyze the relationship between Cyfra21-1 and clinical pathological characteristics of patients. The receiver operating characteristic (ROC) curve was used to calculate the cut-off value of Cyfra21-1. The Cox proportional hazard model was used to conduct univariate and multivariate analysis of related prognostic factors, and to determine the factors related to overall survival (OS) and progression-free survival (PFS). Results The cutoff value for Cyfra21-1 was 2.93 ng/mL. The baseline data of patients in different Cyfra21-1 groups were balanced and comparable. In the univariate and multivariate analyses, it was found that Cyfra21-1 was associated with OS and PFS. A measurement of Cyfra21-1 ≥2.93 ng/mL indicated poor OS (P<0.001) and PFS (P=0.001). After adjusting for age and disease stage, Cyfra21-1 can independently affect the OS (HR =3.57, 95% CI: 1.60–7.99, P=0.002) and PFS (HR =2.89, 95% CI: 1.41–5.91, P=0.004) of patients with OPSCC treated with CRT. Conclusions Pre-treatment Cyfra21-1 can be used as a prognostic marker for patients with OPSCC treated with CRT, which has important clinical significance.
Introduction
Oropharyngeal squamous cell carcinoma (OPSCC) accounts for about 2% of all malignant tumors (1), and the overall prognosis is poor. The 5-year overall survival (OS) rate of patients is only about 50% (2,3); the 5-year survival rate of advanced-stage patients is <40% (1), while that of early-stage patients can exceed 80% (1,4). Therefore, early detection and intervention are important to improve the survival rate and quality of life of patients with OPSCC (1,5). OPSCC patients are usually treated according to the 8th edition of the AJCC staging, with comprehensive treatment based on surgery and CRT.
It is reported that many indicators play an important role in the diagnosis and treatment of OPSCC, including the circulating neutrophil count (CNC), p16 protein, and so on. Recently, the study of serum tumor markers for the early detection of malignant tumors has attracted increased attention (4,6). A serum tumor marker is an abnormal substance produced by the expression of tumor-related genes or by the body's response to tumors during the occurrence and proliferation of malignant carcinomas (7). The cytokeratin 19 soluble fragment (Cyfra21-1) is a serum fragment of cytokeratin 19, detected with newly established monoclonal antibodies that react specifically with cytokeratin 19 (8). A recent meta-analysis revealed that Cyfra21-1 has a relatively high diagnostic value for detecting non-small cell lung cancer (NSCLC), especially in patients with squamous cell carcinoma (9). Recently, many studies have explored the correlation between Cyfra21-1 and the clinical outcomes of NSCLC patients (10)(11)(12).
It has also been proven that Cyfra21-1 can independently be used as a prognostic indicator of a variety of malignant tumors. Numerous studies have shown that it has important prognostic significance for intermediate and advanced cancers, such as esophageal cancer, bladder cancer, epithelial ovarian cancer, and intrahepatic cholangiocarcinoma (13)(14)(15)(16). Lei et al. found that Cyfra21-1 could be used to predict the early recurrence of OPSCC by comparing groups, and that the concentration of Cyfra21-1 was related to the clinicopathological stage of OPSCC. At present, there is no relevant literature on the application of Cyfra21-1 for survival analysis of OPSCC patients. Therefore, we performed survival analysis and multivariate regression analysis of the prognostic significance of Cyfra21-1 in patients with OPSCC treated with concurrent chemoradiotherapy (CRT), in order to provide a theoretical basis for accurate prognosis assessment. We present the following article in accordance with the REMARK reporting checklist (available at http://dx.doi.org/10.21037/atm-20-6124).
Patients
The clinical data of 85 patients diagnosed as OPSCC who received CRT treatment in the Oncology radiotherapy Department of Tianjin Medical University Cancer Hospital from January 2010 to June 2017 were retrospectively analyzed. The study was approved by the ethics committee of the Tianjin Medical University Cancer Institute and Hospital (No. bc2020110). A signed written informed consent agreement was provided by all patients before participation in the study. All procedures performed in this study involving human participants were in accordance with the Declaration of Helsinki (as revised in 2013). Inclusion criteria for this study: (I) primary OPSCC as confirmed by biopsy; (II) complete clinical data; (III) Cyfra21-1 was tested during the week prior to initiation of treatment; (IV) receive CRT treatment. Exclusion criteria include: (I) combination with other systemic malignancies; (II) incomplete clinical data; (III) co-existing with other respiratory diseases.
Treatment
All patients received concurrent chemoradiotherapy in accordance with the 8th edition of the National Comprehensive Cancer Network guidelines. The plan specified that all patients be treated with intensity-modulated radiotherapy (IMRT) once daily, 5 days per week. The prescription dose for the primary tumor focus in the oropharynx was 69.96 Gy/2.12 Gy/33 f, that for the high-risk lymphatic drainage area was 60.06 Gy/1.82 Gy/33 f, and that for the low-risk lymphatic drainage area was 50.96 Gy/1.82 Gy/28 f. All patients were treated with concurrent chemotherapy, cisplatin 40 mg/m2 weekly for 4 cycles.
Blood tests
Blood samples were taken from all patients who met the inclusion criteria one week prior to treatment. The blood samples were kept in test tubes treated with ethylenediaminetetraacetic acid. Then, Cyfra21-1 was measured with an automatic chemiluminescence immunoassay analyzer (Roche cobas e801, Germany).
Quantified patient serum and the horseradish peroxidase (HRP) conjugate were added to a solid-phase white opaque microplate coated with antibodies, and the molecules in the serum specifically bound to the HRP conjugate and to the antibodies on the solid-phase carrier. The unreacted free components were separated and washed away. Then, a luminol-based luminescent substrate solution was added; the free energy released by the chemical reaction excited the intermediate from the ground state to the excited state, and the energy was released in the form of photons. The microplate was then placed in the analyzer, and the number of photons in each well was read out by the photon counter through the three-dimensional transmission system inside the instrument. The molecular concentration in the sample was quantitatively analyzed according to the mathematical model established with the calibration standards. Finally, the data report was printed.
Follow-up
After the end of CRT treatment, the patients were followed up using an Internet medical app. All patients underwent a physical examination at the hospital every 3 months and imaging examinations every 6 months. If abnormalities were detected, it was determined whether further treatment was needed and, if so, appropriate treatment was given according to the treatment guidelines. All patients were followed for at least 3 years, or until death, disease progression, or the study deadline (June 2020).
Statistical analysis
OPSCC patients were staged according to the 8th edition of the American Joint Committee on Cancer staging system (patients with unknown p16 status were classified as p16-negative). According to the disease stage, all patients were divided into the early group (stage I and II) and the advanced group (stage III and IV).
The study endpoints were overall survival (OS) and progression-free survival (PFS). Death from any cause was defined as the OS event, and local progression, distant metastasis, or death due to disease progression was defined as the PFS event.
Candidate variables include sex, age, smoking, alcohol consumption, primary tumor, disease stage, p16 status and comorbidity.
Cyfra21-1 was used as the predictive variable, and OS and PFS as the outcome variables. The receiver operating characteristic (ROC) curve was drawn, the optimal cutoff point was determined based on the Youden index, and Cyfra21-1 was then converted into a categorical variable. The survival of the patients was analyzed by the Kaplan-Meier method, and survival between groups was compared by the log-rank test. We used univariate and multivariate Cox proportional hazards models to analyze the factors related to OS and PFS and to test the significance of Cyfra21-1 for the prognosis of patients. A two-tailed P value <0.05 was considered statistically significant. All statistical analyses were carried out using SPSS version 26.0 (IBM Corporation, Armonk, NY, USA).
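A minimal sketch of the cutoff selection described here, using scikit-learn on synthetic placeholder data (the study itself used SPSS): build a ROC curve for the binary outcome and pick the threshold maximising Youden's index. The distributions and labels below are assumptions for illustration only.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
cyfra = np.concatenate([rng.normal(2.2, 0.6, 50), rng.normal(3.4, 0.8, 35)])
event = np.concatenate([np.zeros(50), np.ones(35)])  # 1 = death or progression

fpr, tpr, thresholds = roc_curve(event, cyfra)
youden = tpr - fpr                                   # sensitivity + specificity - 1
best_cutoff = thresholds[np.argmax(youden)]
print(f"optimal Cyfra21-1 cutoff ~ {best_cutoff:.2f} ng/mL")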
Patient characteristics
A total of 85 patients with OPSCC were included in this study. Of these, 12.9% were female, and 61.2% were <60 years old. Of the patients, 71.8% had smoked and 57.6% had drunk alcohol. By anatomical site, the tumors were tonsil cancer (58.8%), soft palate cancer (22.4%), and root of tongue cancer (18.8%). Early-stage disease (stage I or II) was present in 16.5% of patients, and other systemic diseases were present in 40% of the participants. Among these patients, 22.4% tested positive for p16 and 11.8% tested negative; the p16 status of the remaining patients was unknown.
The best cutoff point for Cyfra21-1 was determined for both OS (Figure 1) and PFS (Figure 2) according to Youden's index (cutoff value =2.93 ng/mL), and Cyfra21-1 was then transformed into a categorical variable. Table 1 shows the patients' baseline clinical characteristics according to Cyfra21-1. The baseline data of patients in the different Cyfra21-1 groups were balanced and comparable, and there was no significant difference in clinical characteristics between the 2 groups.
Kaplan-Meier curve analysis revealed that the OS of patients with high Cyfra21-1 (≥2.93) was significantly shorter than that of patients with low Cyfra21-1 (<2.93) (log-rank P<0.001) (Figure 3). The 3-year OS probability of patients in the low Cyfra21-1 group was 79.3%, compared with 40% in the high Cyfra21-1 group. Patients with high Cyfra21-1 also had significantly shorter PFS than patients with low Cyfra21-1 (log-rank P=0.001) (Figure 4). The 3-year PFS rate was 69.0% and 35.1% in the low and high Cyfra21-1 groups, respectively. Univariate and multivariate Cox proportional hazards models were used to analyze the indicators that independently affect OS and PFS.
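A hedged sketch of the kind of multivariate Cox model reported above, using the lifelines package on synthetic data; the column names, the simulated survival times and the covariate set (dichotomized Cyfra21-1, age, stage) are assumptions for illustration, not the study data.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 80
cyfra_high = rng.integers(0, 2, n)           # 1 if Cyfra21-1 >= 2.93 ng/mL
age = rng.integers(45, 75, n)
late_stage = rng.integers(0, 2, n)
# synthetic follow-up: shorter survival, on average, when Cyfra21-1 is high
os_months = rng.exponential(36.0 / (1.0 + 1.5 * cyfra_high))
death = (os_months < 36.0).astype(int)

df = pd.DataFrame({"os_months": os_months, "death": death,
                   "cyfra_high": cyfra_high, "age": age, "late_stage": late_stage})
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% confidence intervals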
Discussion
In our study, we performed a retrospective analysis of clinical data from 85 patients with OPSCC to determine the prognostic significance of pre-treatment Cyfra21-1 in patients receiving CRT for OPSCC. We found that in OPSCC patients treated with CRT, pre-treatment Cyfra21-1 independently affected the patient's OS and PFS.
Cytokeratin is the main component of the intermediate filaments of epithelial cells, and each epithelial cell expresses a differentiation-dependent combination of two or more cytokeratins. Cytokeratins 8, 18, and 19 are expressed in monolayers of the epithelium, including the bronchial epithelium, and in malignant tumors derived from these cells (17). After proteolysis and degradation, insoluble cytokeratin filaments are released into the blood from dead or dying cells in the form of soluble fragments. Therefore, they can be used as serum tumor markers (18). The release of Cyfra21-1 has been closely related to apoptosis during tumor growth (19). An elevated level of serum Cyfra21-1 indicates that CK-19 is degraded into soluble fragments in the blood, which increases the level of serum Cyfra21-1 and suggests the possibility of abnormal epithelial cells differentiating into carcinoma. The increase in the Cyfra21-1 level is closely related to the metastatic site and histological grade of lung cancer. In lung cancer patients, Cyfra21-1 is an independent predictor of poor prognosis (20). High expression of Cyfra21-1 is an important factor affecting the survival of patients with advanced NSCLC (21). The preoperative serum Cyfra21-1 level was an independent predictor of lymph node metastasis in patients with esophageal squamous cell cancer (13). Rapid and quantitative detection of urinary Cyfra21-1 is a non-invasive approach for the diagnosis and prognosis of bladder cancer (14). In epithelial ovarian cancer, Cyfra21-1 was a poor prognostic factor associated with OS (15).
In several studies, Cyfra21-1 has shown important clinical significance for the diagnosis and prognosis of patients with OPSCC. For instance, Rao et al. reported that increased serum Cyfra21-1 in patients with laryngeal cancer indicates an increased likelihood of lymph node metastasis and recurrent metastasis (22). Our research had some limitations, including its retrospective nature. A large number of patients may have been excluded due to incomplete clinical data, resulting in a certain degree of selection bias. In addition, our study did not consider other factors that might potentially affect the expression of Cyfra21-1. Moreover, the sample size was not very large, which may be the reason why the area under the ROC curve is relatively small. In order to understand in more detail the guiding significance of Cyfra21-1 for the survival and progression of patients with OPSCC, we will design better prospective studies with a sufficient sample size in the future.
Conclusions
Our results show that the increase of pre-treatment serum Cyfra21-1 was independently associated with the deterioration of OS and PFS in OPSCC patients treated with CRT. As an economical pre-treatment serum biomarker, pre-treatment Cyfra21-1 can be used as a prognostic marker for patients with OPSCC treated with CRT, which has important clinical significance. | 2020-11-05T09:09:42.247Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "3c1e0698a1cf1446be3adb3f20d3ecc1f71e5f0b",
"oa_license": "CCBYNCND",
"oa_url": "https://atm.amegroups.com/article/viewFile/54398/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f6fb235e0165c7a68a5e97291e990b377324e6f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270999382 | pes2o/s2orc | v3-fos-license | Efficacy of estimations of Hartmann–Shack sensors in small pupil sizes
PURPOSE: The purpose of this study was to evaluate the in vivo efficacy of the estimations of wavefront analyzers using Hartmann–Shack technology to measure optical aberrations when the pupil size is smaller than the evaluated pupil area. METHODS: Patients implanted with the monofocal ZCB00 intraocular lens (Johnson and Johnson) were examined with the KR-1W Wavefront Analyzer (Topcon) without pharmacological mydriasis and with it afterward. Optical aberrations were analyzed considering a 4-mm pupil and a 6-mm pupil for both examinations. RESULTS: Sixty-six eyes of 33 patients with a mean axial length of 23.35 ± 0.91 mm were assessed. The mean pupil diameter at the baseline examination was 5.05 ± 0.88 mm and under pharmacological mydriasis, it was 6.29 ± 0.84 mm. Outcomes were similar with and without dilation in the 4-mm comparison. However, there was a great disparity in the 6-mm comparison. Most of the values obtained under mydriasis were statistically lower than at baseline (P < 0.05). CONCLUSION: The iris interferes with measurements of wavefront aberrations, and therefore, real pupil size should always be checked before evaluating optical aberrations with Hartman–Shack sensors. When pupil size is smaller than the analyzed diameter, ocular, and internal, and sometimes, corneal aberrations are estimated far more positive than real values.
Introduction
Visual acuity (VA) has traditionally been the main measurement used to evaluate an optical system such as the human eye. However, good VA might not be associated with good vision in some patients, which is why optical quality is becoming increasingly important. It encompasses aspects such as contrast sensitivity, color discrimination, halo perception, and optical aberrations.
Optical aberrations are the result of disorders in the light wavefront along its way to its focus on the retina.Total optical aberrations are divided into corneal and internal, depending on their origin.Most of them are included in the Zernike polynomials, which are a mathematical classification according to the radial and azimuthal degrees of the aberrated wavefront.Some others were excluded from this classification such as the distortion or the Petzval field curvature.Thus, those included in that classification can be divided into low-and high-order aberrations (HOAs).The former include defocus, astigmatism, and tilt.They represent the biggest part of total aberrations in human eyes and are usually easily rectifiable with optical means.They are called low-order aberrations because they represent the first and second orders in the Zernike polynomials.On the other hand, the latter represent a low percentage of the total in human eyes; they are not easily rectifiable with optical means and are composed of a larger number of aberrations ranging from the third order onward.Therefore, third-order aberrations include trefoil and coma; fourth-order aberrations include tetrafoil, secondary astigmatism, and spherical aberration; fifth-order ones include pentafoil, secondary trefoil, and secondary coma.The higher the order is, the lower impact they have on vision.
Optical aberrations vary with pupil diameter, [1,2] increasing with larger pupil sizes. This fact, together with modified lighting conditions, is used on optical benches to assess how visual quality in a human eye might be affected. Despite optical benches not always being optically perfect, they are indeed much more so than human eyes. It should be taken into account that there is also great variation in biometric features and pupil sizes between patients.
Wavefront aberrations are becoming increasingly important, especially in refractive surgery, [3][4][5] and HOAs such as spherical aberration have an impact on vision which can be partially compensated with the implantation of an appropriate intraocular lens (IOL). [6] Although their quantity is relatively small, HOAs may lead to a negative result in visual quality which cannot be modified with spectacles or contact lenses.
Wavefront analyzers are optical devices whose main purpose is to evaluate optical aberrations in patients. There are different types of analyzers depending on the method they use, such as Hartmann-Shack, ray tracing, or Tscherning. In daily practice, aberrations are usually measured under scotopic conditions, but patients are not always examined under pharmacological mydriasis. As far as we know, no previous studies have evaluated the efficacy of Hartmann-Shack sensors in small pupil sizes in real patients rather than on optical benches. Our aim in this study is to evaluate the in vivo efficacy of the estimations of wavefront analyzers using Hartmann-Shack technology when pupils do not reach the required diameter.
Methods
This study was approved by the local Ethics Committee and followed the Declaration of Helsinki. Inclusion criteria were bilateral implantation of the monofocal ZCB00 IOL (Johnson and Johnson) 2 to 3 months earlier and age below 75 years. Exclusion criteria were corneal astigmatism higher than 1 diopter, a spherical refractive error higher than 3 diopters prior to cataract surgery, amblyopia, and any ophthalmological pathology.
The ZCB00 IOL belongs to the TECNIS platform. It is a biconvex, hydrophobic acrylic, one-piece IOL with a 6-mm optic, an ultraviolet filter, and an aspheric anterior surface. According to the manufacturer, it has a spherical aberration of -0.27 μm to compensate for its positive corneal counterpart.
Instruments
Axial length (AL) was measured with the IOLMaster 500 (Carl Zeiss, Jena, Germany). Patients were examined with the KR-1W Wavefront Analyzer (Topcon Medical Systems, Tokyo, Japan), which integrates Hartmann-Shack technology. The Hartmann-Shack sensor consists of an array of lenses of the same focal length, which are crossed by the light beams coming from inside the eye.
Each lens focuses the light onto a photon sensor; the deviation between the focused light beam and its ideal position is then analyzed, and this is how the optical aberrations of the wavefront are calculated.
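A minimal numerical sketch of that principle is given below. The focal length and spot positions are hypothetical values, not specifications of the KR-1W, and a real device additionally fits Zernike polynomial derivatives to these slopes to reconstruct the wavefront.

```python
import numpy as np

def local_slopes(spot_xy, ref_xy, focal_length_mm):
    """Hartmann-Shack principle: each lenslet focuses light to a spot; the displacement
    of that spot from its ideal (reference) position, divided by the lenslet focal
    length, approximates the local wavefront slope over that lenslet aperture."""
    displacement = np.asarray(spot_xy) - np.asarray(ref_xy)  # spot shifts in mm
    return displacement / focal_length_mm                    # approx. dW/dx, dW/dy

# Hypothetical data for three lenslets with a 5-mm focal length
reference_spots = [[0.000, 0.000], [0.200, 0.000], [0.000, 0.200]]
measured_spots  = [[0.001, 0.000], [0.201, -0.002], [0.000, 0.203]]
print(local_slopes(measured_spots, reference_spots, focal_length_mm=5.0))
```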
Patients were examined under scotopic conditions. Afterward, another examination was performed under mydriasis, which was achieved with the topical instillation of tropicamide. The optical aberrations considered for this study were the ones obtained within the central 4- and 6-mm diameters, and all of them are expressed as root mean square data in μm. We used the following parameters to define the aberrations: total HOAs, third-order aberrations, fourth-order aberrations, trefoil, coma, tetrafoil, secondary astigmatism, and spherical aberration.
Statistical analysis
Statistics were calculated with the SPSS software for Windows (SPSS Inc., Chicago, IL, USA). First, it was verified whether the sample was normally distributed; as it was not, the Wilcoxon test for paired data was used. Linear regression analysis was performed afterward. Means and standard deviations were calculated for every variable. Figures were created with Microsoft Excel (Microsoft).
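As a rough illustration of this workflow (the arrays below are placeholders, not the study data), the same sequence of tests can be reproduced with standard statistical libraries; a non-parametric paired test is selected only when the normality check fails, mirroring the description above.

```python
import numpy as np
from scipy import stats

# Placeholder paired measurements for the same eyes (e.g. total HOA RMS in um)
baseline = np.array([0.42, 0.38, 0.51, 0.47, 0.35, 0.44])   # scotopic examination
mydriasis = np.array([0.35, 0.33, 0.44, 0.40, 0.30, 0.37])  # after tropicamide

# 1) Normality of the paired differences decides which paired test to use
_, p_norm = stats.shapiro(baseline - mydriasis)
if p_norm < 0.05:
    _, p_paired = stats.wilcoxon(baseline, mydriasis)        # non-parametric, paired
else:
    _, p_paired = stats.ttest_rel(baseline, mydriasis)       # parametric, paired

# 2) Linear regression, e.g. change in HOA against baseline pupil diameter (mm)
pupil = np.array([4.6, 4.9, 5.2, 5.4, 4.3, 5.8])
fit = stats.linregress(pupil, baseline - mydriasis)
print(p_paired, fit.slope, fit.intercept, fit.rvalue)
```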
Results
Sixty-six eyes of 33 patients implanted with the ZCB00 IOL were included in this study. Twelve patients were male and 21 were female. The mean age was 62.58 ± 6.97 years, the mean AL was 23.35 ± 0.91 mm, and the mean power of the implanted IOL was 22.00 ± 2.62 D. At the baseline examination, the mean pupil diameter was 5.05 ± 0.88 mm, and under pharmacological mydriasis it was 6.29 ± 0.84 mm. Table 1 displays the aberrometric outcomes considering a 4-mm pupil and Table 2 those considering a 6-mm pupil. Statistically significant differences are highlighted in gray. Few differences were found in the 4-mm comparison, but most of the aberrations differed in the 6-mm comparison. The values obtained under mydriasis were always lower or more negative than the ones at baseline. Figures 1 and 2 show these differences between both examinations. Linear regression analyses were performed between mean pupil size and the change between the baseline and mydriatic examinations within the 6-mm analysis. No statistically significant result was obtained with corneal astigmatism or with corneal coma. As for the change in ocular total HOA, a moderate correlation was found (0.62, P = 0.00), with the regression line ocular total HOA change = 3.14 - 0.55 × pupil size shown in Figure 3. As for the change in internal total HOA, a moderate correlation was found (0.60, P = 0.00), with the regression line internal total HOA change = 3.18 - 0.57 × pupil size shown in Figure 4.
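Expressed as code, the two fitted regression lines reported above can be used directly to estimate how much higher the baseline reading is expected to be at a given natural pupil; this only restates the fitted equations, with the change assumed to be in μm RMS and the pupil diameter in mm.

```python
def predicted_hoa_change(pupil_mm):
    """Fitted 6-mm regression lines from the Results: change between the baseline and
    mydriatic examinations as a function of baseline pupil diameter (mm)."""
    ocular_change = 3.14 - 0.55 * pupil_mm     # ocular total HOA (um RMS)
    internal_change = 3.18 - 0.57 * pupil_mm   # internal total HOA (um RMS)
    return ocular_change, internal_change

# At the mean baseline pupil of 5.05 mm, the model predicts roughly 0.36 um (ocular)
# and 0.30 um (internal) higher values at baseline than under mydriasis.
print(predicted_hoa_change(5.05))
```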
Discussion
Pupil diameter is a key factor in the quantitative evaluation of optical aberrations. [1,2] The larger the pupil, the more the aberrations alter vision. This becomes quite relevant at night, which is why vision may decline more than expected in some cases. [7] In this study, as expected, optical aberrations increased with a larger analyzed pupil diameter: the outcomes for a 6-mm pupil were higher than for a 4-mm one.
As not all patients may reach the pupil diameter established in the software analysis, it is important to know what happens in these cases. In addition, pharmacological mydriasis is not always an option, because of the duration of the drug's effect or because of a narrow anterior chamber. Our study tries to assess the efficacy of the Hartmann-Shack technology in these cases. Including only patients implanted with the same monofocal IOL avoids any bias due to cataracts [8] or to the combination of a diffractive IOL and a device using infrared light. [9] The KR-1W Wavefront Analyzer is an optical aberrometer whose repeatability and reproducibility have been proven, [10][11][12] and its outcomes have been compared with other devices. [13] It provides reliable measurements, but they should not be considered interchangeable between devices. [10,11,14] Its automated refractor is reliable despite cataracts or corneal refractive surgery. [15] When considering a 4-mm pupil, aberrations did not differ much from baseline to pharmacological mydriasis, although corneal tetrafoil and internal spherical aberration had significantly more negative values after dilation. Far more remarkable are the outcomes for the 6-mm pupil, in which nearly all aberrations decreased significantly. This might be explained by a loss of accuracy of the software when it estimates aberrations in a pupil area covered by the iris. Nevertheless, corneal aberrations are also affected, although to a much lesser extent. If we examine these results in detail, we can see that differences were found in ocular third-order, fourth-order, trefoil, and secondary astigmatism aberrations but not in their corneal or respective internal values. A possible explanation would be that, despite lying beneath the cornea, the iris may affect all aberrometric measurements, and the way the internal software estimates those aberrations always tends to higher values than the real ones. Another explanation would be that all the points covered by the iris are taken as highly aberrated, without any estimation being made. These theories are supported by the positive correlation between pupil diameter and the difference in total ocular and internal HOA between the examinations at 4 and 6 mm, which is shown in Figures 3 and 4.
The difference between 6 mm and the mean pupil diameter at the baseline examination is 0.95 mm, whereas the mean pupil size under tropicamide is larger than 6 mm. As for the evaluation at 4 mm, the pupil diameter at both examinations was always larger than 4 mm. Thus, it is clear that the iris is responsible for these results: hardly any differences were found in the 4-mm evaluation because the mean pupil diameter was always larger, whereas a great number of differences were found in the 6-mm evaluation because there was a difference of 0.95 mm between the baseline and the mydriatic examinations.
As far as we know, this is the first publication reporting this phenomenon in any wavefront analyzer. The pupillary shift between photopic and scotopic conditions may be an underlying factor contributing to the differences we observed. [16] This would explain the differences found in the 4-mm analysis, but the great disparity in the 6-mm analysis remains unclear. Hao et al. reported higher internal HOA with the KR-1W than with the iTrace, which did not happen when evaluating corneal outcomes. [13] They attributed this either to a difference in the algorithm locating the chief ray or to the different optical principles used. This chief ray is placed on the retina and all aberrations are calculated through the corneal center. Wu et al. investigated the relationship between the measured area of wavefront aberrations with this technology and ablation parameters in myopic laser refractive surgery. [17] They found that the measured pupil area was always smaller than the real pupil area, and therefore aberrations were underestimated. Our investigation is just the opposite: when the pupil size does not reach the analyzed size, it can be noticed that smaller or
Conclusion
In daily practice, this implies that the real pupil size should always be checked before evaluating optical aberrations with Hartmann-Shack sensors. Larger pupil sizes may imply higher aberrations than measured, but in the case of smaller pupil sizes, the real ocular, internal, and sometimes corneal aberrations are more negative than measured.
Further research should be performed to assess whether this fact also occurs with other wavefront analyzers.
Financial support and sponsorship
Nil.
Figure 3: Regression line within the 6-mm analysis between pupil diameter and the change in ocular total high-order aberration. RMS: Root mean square
Figure 4: Regression line within the 6-mm analysis between pupil diameter and the change in internal total high-order aberration. HOA: High-order aberration, RMS: Root mean square | 2024-07-07T15:53:26.944Z | 2023-08-08T00:00:00.000 | {
"year": 2023,
"sha1": "11fd7c8365c168d5177265c0a93836cb859e8c9e",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3d649acfeaa191fefec78ba8b31af91e8fa8aa5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247839840 | pes2o/s2orc | v3-fos-license | Singularly Perturbed Averaging with Application to Bio-Inspired 3D Source Seeking
We analyze a class of singularly perturbed high-amplitude, high-frequency oscillatory systems that arises in extremum seeking applications. We provide explicit formulas for averaging and establish the convergence of the trajectories of this class of systems to the trajectories of a suitably averaged reduced order system by combining the higher order averaging theorem with singular perturbation techniques. Finally, we propose a novel bio-inspired 3D source seeking algorithm and establish its singular practical stability.
I. INTRODUCTION
Averaging techniques have been widely used in the construction and the stability analysis of solutions to time-varying differential equations [1], [2], [3], [4], [5], [6]. The rigorous application of the method of averaging begins by writing the system in the form (1), possibly via coordinate changes and time scaling, where the vector field f_1 is periodic in τ and ε is a small parameter. When f_0 = 0, the system is said to be in the averaging canonical form and the averaging theorem can be directly applied [2, Chapter 2]. If f_0 ≠ 0, the Variation of Constants (VOC) formula may be used to force the system into the averaging canonical form [2, Section 1.6], [7, Section 9.1]. However, even when the vector field f_0 is linear and time invariant, i.e. f_0(ζ, τ) = Aζ for some matrix A, the VOC formula is practically useful only when the eigenvalues of the matrix A are purely imaginary. This is due to the fact that the pullback of the vector field f_1 under the flow of f_0 will contain exponentially growing terms (see the discussion in [2, Section 1.7]).
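For reference, the form referred to as (1) in this paragraph is the standard one used in the averaging literature; the sketch below is consistent with the description above but is not necessarily identical to the paper's displayed equation:

```latex
\frac{d\zeta}{d\tau} \;=\; f_0(\zeta,\tau) \;+\; \varepsilon\, f_1(\zeta,\tau),
\qquad f_1(\zeta,\tau+T) = f_1(\zeta,\tau), \quad 0 < \varepsilon \ll 1,
```

with the averaging canonical form corresponding to the special case f_0 ≡ 0, in which trajectories are approximated by those of the τ-averaged system obtained from ε f_1.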
In this manuscript, we analyze a class of singularly perturbed high-frequency, high-amplitude oscillatory systems, described by equation (2), which naturally arises in extremum seeking applications [8], [9] and can be put in the form (1). Yet, the VOC formula is not useful in analyzing this class of systems for the reasons outlined in the previous paragraph. Moreover, recent results in the literature, such as the singularly perturbed Lie Bracket Approximation framework [10], [11], do not capture the stability properties, as we illustrate below.
To analyze the behavior of this class of systems, we combine the higher order averaging theorem [2] with singular perturbation techniques [12] in a way that accounts for the interaction between the fast periodic time scale and the singularly perturbed part of the system. Furthermore, we propose a novel 3D source seeking algorithm for rigid bodies with a non-collocated sensor. The proposed algorithm is inspired by the chemotactic strategy of sea urchin sperm cells for seeking the egg in 3D [13], [14], and it utilizes the special structure of the matrix group SO(3). We prove the practical stability of the proposed algorithm using the singularly perturbed averaging results we state here.
II. SINGULARLY PERTURBED SECOND ORDER AVERAGING
In this section, we state the main theorem of our work. Consider the interconnection of systems of the form (2), where x, x_0 ∈ R^n, y, y_0 ∈ R^m, t, t_0 ∈ R, ω ∈ (0, ∞), and the maps f, g are given by the corresponding expansions.

We adopt the following assumptions on the regularity of the right-hand side of equation (2): the maps are periodic in their last argument, i.e. g_i(·, ·, τ + T) = g_i(·, ·, τ), ∀τ ∈ R; 4) f_3 and g_3 are locally Lipschitz continuous in x, y and jointly continuous in all of their arguments; 5) the time-varying vector fields f̄_i are defined by the corresponding averaged expressions, where Id is the identity matrix.

In a companion paper [15], see also [16], we showed that the higher order averaging theorem may be applied to the reduced order system (3) to obtain the reduced order averaged system (9), where the vector field f̄(·) is the corresponding average. Under Assumption 2.1, we have the following theorem concerning the relation between the stability of the system (2) and the reduced order averaged system (9):

Theorem 2.1: Let Assumption 2.1 be satisfied, and suppose that a compact subset S ⊂ R^n is globally uniformly asymptotically stable for the reduced order averaged system (9). Then, S is singularly semi-globally practically uniformly asymptotically stable for the original system (2).
Remark 2: We note that our definition of singular semi-global practical uniform asymptotic stability, which can be found in Appendix A, is slightly different from that in [10]. The proof of Theorem 2.1 proceeds by establishing each part of Definition A.1 similarly to [17], [10], relying on Proposition B.1, which can be found along with a proof sketch in Appendix B. We omit the proof of the theorem from this manuscript; it will be included in the extended version.
III. 3D SOURCE SEEKING
Source seeking is the problem of locating a target that emits a scalar measurable signal, typically without global positioning information [18], [19]. Interestingly, microorganisms are routinely faced with the source seeking problem. In particular, sea urchin sperm cells seek the egg by swimming up the gradient of the concentration field of a chemical secreted by the egg [14], [13]. The sperm cells do so by swimming in helical paths that dynamically align with the gradient. In this section, we propose a bio-inspired 3D source seeking algorithm for rigid bodies with a non-collocated signal strength sensor that partially mimics the strategy of sperm cells for seeking the egg.
The kinematics of a rigid body in 3D space are given by the standard equations, where p denotes the position of a designated point on the body with respect to a fixed frame of reference, R relates the body frame to the fixed frame, and v and Ω are the linear and angular velocities in body coordinates, respectively. The hat map (·)^ : R^3 → R^{3×3} takes a vector Ω = [Ω_1, Ω_2, Ω_3] ∈ R^3 to the corresponding skew-symmetric matrix.
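A minimal computational sketch of these kinematics is given below. The displayed equations are not reproduced in the text above, so the update used here assumes the standard left-invariant rigid-body form p_dot = R v, R_dot = R hat(Omega), which is consistent with the surrounding description; only the hat map is fully specified by the text.

```python
import numpy as np

def hat(w):
    """Hat map R^3 -> R^{3x3}: returns the skew-symmetric matrix such that
    hat(w) @ x == np.cross(w, x) for any x in R^3."""
    w1, w2, w3 = w
    return np.array([[0.0, -w3,  w2],
                     [ w3, 0.0, -w1],
                     [-w2,  w1, 0.0]])

def euler_step(p, R, v, Omega, dt):
    """One explicit Euler step of the assumed kinematics p_dot = R v, R_dot = R hat(Omega),
    with v and Omega expressed in body coordinates."""
    p_next = p + dt * (R @ v)
    R_next = R @ (np.eye(3) + dt * hat(Omega))  # first-order update; re-orthogonalise in practice
    return p_next, R_next

p, R = np.zeros(3), np.eye(3)
p, R = euler_step(p, R, v=np.array([1.0, 0.0, 0.0]), Omega=np.array([0.0, 0.0, 0.5]), dt=0.01)
print(p, R)
```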
We assume a vehicle model in which the linear and angular velocity vectors are given in body coordinates in terms of the standard unit vectors e_i, i ∈ {1, 2, 3}. Remark 3: This model is a natural extension of the unicycle model to the 3D setting. It is well known that this system is controllable using depth-one Lie brackets [20].
Let c : R^3 → R be the signal strength field emitted by the source, and consider the case of a non-collocated signal strength sensor mounted at p_s.

Assumption 3.1: Suppose that the signal strength field c ∈ C^3(R^3; R) is radially unbounded, that ∃! p* ∈ R^3 such that ∇c(p) = 0 ⇐⇒ p = p*, and that it satisfies c(p*) − c(p) ≤ κ‖∇c(p)‖², ∀p ∈ R^3 and some κ > 0.

Now, consider the following control law, where y ∈ R^2 and A, B, C are the corresponding matrices. The static part of this controller, i.e. equation (14), is a 1D extremum seeking control law [21]; note that other choices of this control law are possible [22]. The dynamic part of this controller, i.e. equations (15)-(16), is a narrow band-pass filter centered around the frequency ω. The motivation for this setup is that, in the presence of noise, a narrow band-pass filter centered around the dither frequency ω optimally extracts the gradient information in the measured output while attenuating noise. In addition, assume that the distance r specifying the offset of the sensor from the center of the frame is such that r = 1/√ω. This assumption may seem artificial at first glance, though its implication is clear: we require that, as the frequency of oscillation ω tends to ∞, the distance r from the center of the vehicle is small enough so as not to amplify unwanted nonlinearities in the signal strength field. Alternatively, one may consider this assumption as a "distinguished limit" [23] for the perturbation calculation in the presence of the two parameters ω and r. Under these assumptions, we have the following proposition:

Proposition 3.1: Let Assumption 3.1 be satisfied, and let r = 1/√ω. Then, the compact subset S = {p*} × SO(3) is singularly semi-globally practically uniformly asymptotically stable for the system defined by equations (11)-(17).
Proof: Let R_0 = exp(ωt ê_1), Q = RR_0, and compute the corresponding dynamics. Let Λ(y, ωt) = C y R_0 e_3 and observe the resulting identity. To simplify the presentation, we embed SO(3) into R^9 by partitioning the matrix Q = [q_1, q_2, q_3] and defining the state vector q = [q_1, q_2, q_3]. Restrict the initial conditions for q to lie on the compact submanifold M, where δ_ij is the Kronecker symbol and ε_ijk is the Levi-Civita symbol. On R^3 × M × R^2, the system is governed by equations (22)-(24). For more details on this embedding, see the companion paper [15]. The signal strength field can be expanded as a series in 1/√ω using Taylor's theorem, where the remainder ρ is Lipschitz continuous in all of its arguments and ∇²c(p)[w] = wᵀ∇²c(p)w. Now, observe that the system governed by equations (22)-(24) belongs to the class of systems described by (2). Hence, we may employ Theorem 2.1 in analyzing the stability of the system. In order to proceed, the reduced order averaged system needs to be computed. Due to space constraints, we leave the computations as an exercise for the interested reader in the light of equations (4)-(8), and we provide only the end result of the computation, which can be written equivalently as the system (25)-(26), where the average angular velocity vector Λ̄ is given by the corresponding expression.

We claim that the compact subset S is globally uniformly asymptotically stable for the reduced order averaged system (25)-(26). To prove this claim, we use the negative of the signal strength field as a Lyapunov function, V_c(p) = c(p*) − c(p). Observe that the system (25)-(26) is autonomous, and so the function V_c is indeed a Lyapunov function for the compact subset S due to Assumption 3.1 [12]. We proceed to compute the derivative of V_c, and observe that S ⊂ N and that S is an invariant subset of the reduced order averaged system (25)-(26). Suppose that a trajectory (p(t), Q(t)) of the system (25)-(26) exists such that (p(t), Q(t)) ∈ N\S, ∀t ∈ I, where I is the maximal interval of existence and uniqueness of the trajectory. Such a trajectory must satisfy the corresponding identity. The differentiability of the trajectories allows us to compute the derivative of this identity which, recalling equation (27), shows that equation (31) necessitates a condition that is clearly in contradiction with Assumption 3.1. Accordingly, it follows from LaSalle's invariance principle [12, Corollary 4.2 to Theorem 4.4] that the compact subset S is globally uniformly asymptotically stable for the system (25)-(26). Hence, we conclude by Theorem 2.1 that the subset S is singularly semi-globally practically uniformly asymptotically stable for the original system defined by (11)-(17).

Remark 4: If we attempt to apply the framework of singularly perturbed Lie Bracket Approximation introduced in [10] to the system (22)-(24), then the quasi-steady state of the system will be y = [c(p_s), c(p_s)]ᵀ. Hence, according to [10], the reduced order system yields the corresponding Lie Bracket system. It is clear that the compact subset S is not asymptotically stable for the Lie Bracket system, and so the framework in [10] does not capture the stability of this system.
IV. NUMERICAL SIMULATIONS
We demonstrate our results by providing a numerical example. Consider the signal strength field given by c(p) = −log(1 + pᵀp/2), which represents a stationary source located at the origin. We take the initial conditions as p(0) = [6, 2, −2]ᵀ and R(0) = I_{3×3}, and the frequency as ω = 4π. The numerical simulations are shown in Fig. 1. Observe that the behavior near the source is nontrivial, i.e. there is a limit cycle. However, in the limit ω → ∞ and r = O(1/√ω), this complex behavior does not appear in the reduced order averaged system.
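A small check of the example's signal field and its gradient is sketched below; the full closed loop of equations (11)-(17) is not reproduced here, since those displays are not available in the text, and the analytical gradient follows directly from the stated expression for c.

```python
import numpy as np

def c(p):
    """Example signal strength field: c(p) = -log(1 + p^T p / 2), maximised at the origin."""
    p = np.asarray(p, dtype=float)
    return -np.log(1.0 + p @ p / 2.0)

def grad_c(p):
    """Analytical gradient: grad c(p) = -p / (1 + p^T p / 2)."""
    p = np.asarray(p, dtype=float)
    return -p / (1.0 + p @ p / 2.0)

p0 = np.array([6.0, 2.0, -2.0])   # initial position used in the example
print(c(p0), grad_c(p0))           # the gradient points from p0 back toward the source
```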
V. CONCLUSION
In this manuscript, we analyzed a class of singularly perturbed high-amplitude, high-frequency oscillatory systems that naturally arises in extremum seeking applications and stabilization by oscillatory controls. We combined singular perturbation with the higher order averaging theorem in order to capture the stability properties of this class of systems. As an application, we proposed a novel 3D source seeking algorithm for rigid bodies with a non-collocated sensor inspired by the chemotaxis of sea urchin sperm cells.
ACKNOWLEDGEMENT
The authors would like to acknowledge the support of the NSF Grant CMMI-1846308.
APPENDIX B TRAJECTORY APPROXIMATION
For the purpose of brevity, we state here some notation that may enhance the readability of the proof. Whenever a Lipschitz property of a map f over a subset K is employed, the corresponding Lipschitz constant is labelled L_{f,K}. Similarly, when a uniform bound is employed, it is labelled B_{f,K}. Sometimes we use M_{f,K} as a generic constant when a mix of the two properties is used. Finally, we may omit mentioning the map in the constant label when it is too long or when it is clear from the context. Under Assumption 2.1, we have a trajectory approximation result between the original system (2) and the reduced order averaged system (9):

Proposition B.1: Let Assumption 2.1 be satisfied, and suppose that a compact subset S ⊂ R^n is globally uniformly asymptotically stable for the averaged reduced order system (9). Then, there exist constants λ > 0 and γ > 0 such that for every bounded subset B_x × B_z ⊂ R^n × R^m, ∀t_f ∈ (0, ∞), and ∀D ∈ (0, ∞), there exists ω* ∈ (0, ∞) such that ∀ω ∈ (ω*, ∞), ∀t_0 ∈ R, ∀(x_0, y_0 − ϕ_0(x_0)) ∈ B_x × B_z, and ∀t ∈ [t_0, t_0 + t_f], unique trajectories of the system (2) exist and satisfy the approximation bounds.

Proof: We apply the time scaling τ = ω(t − t_0), and we let ε = 1/√ω. In contrast to the standard singular perturbation analysis, which starts with a coordinate shift for the singularly perturbed part of the system from y to y − ϕ_0(x) (e.g. [12, Chapter 11], [10, Section I]), we augment the standard coordinate shift with a near-identity part, where the maps ϕ_i(x, τ) for i ∈ {1, 2} are yet to be determined. This coordinate shift is inspired by the standard near-identity transform common in the higher order averaging literature [2, Section 2.8]. Observe that under this coordinate and time scale change, the system takes a form in which the vector fields v_i and h_i for i ∈ {1, 2} appear. Now, we let ϕ_1(x, τ) and ϕ_2(x, τ) be the solutions of the linear non-homogeneous two-point boundary value problems (45)-(46). The following lemma is a simple consequence of Assumption 2.1 and standard linear systems theory:

Lemma B.1: Let Assumption 2.1 be satisfied. Then, the non-homogeneous BVPs (45)-(46) have unique solutions ϕ_i ∈ C^{3−i}(R^n; R^m), defined by expressions in which Id is the identity map on R^m.
Proof: The result can be verified by direct substitution, and the regularity of the solutions follows from Assumption 2.1.
With this choice of the maps ϕ_i, the origin z = 0 is an equilibrium point for the boundary layer model. Moreover, it can be shown that the vector fields h_i for i ∈ {1, 2} are Lipschitz continuous and bounded on every compact subset K ⊂ R^n × R^m, uniformly in τ, for some Lipschitz constants L_{h_i,K} > 0 and bounds B_{h_i,K} > 0, and that the remainder terms h_3 and v_3 are continuous and bounded on any compact subset K ⊂ R^n × R^m uniformly in τ ∈ R and ε ∈ [0, ε_0] for some ε_0 > 0. Next, we have the following lemma:

Lemma B.2: Let Assumption 2.1 be satisfied, and suppose that a compact subset S ⊂ R^n is globally uniformly asymptotically stable for the averaged reduced order system (9). Then, there exist constants λ > 0 and γ > 0 such that for every bounded subset B_x × B_z ⊂ R^n × R^m, ∀t_f ∈ (0, ∞), and ∀D ∈ (0, ∞), there exists ε* ∈ (0, ε_0) such that ∀ε ∈ (0, ε*), ∀(x_0, z_0) ∈ B_x × B_z, and ∀τ ∈ [0, t_f/ε²], unique trajectories of the system (40) exist and satisfy the stated bounds.

The full proof of this lemma is rather long, so we include it here for review purposes, but it will be replaced by a sketch in the final manuscript to conform with the page limit. The full proof will appear in a journal version of the current manuscript. The proof combines ideas from [10, Lemma 1] and [2, Lemma 2.8.2].
Next, we define an ε-dependent time τ_ε by requiring that the inequality (58) is satisfied, and observe that this is always possible for ε > 0.
In fact, it can be shown that τ_ε = max{(3/(2λ)) log((γ c/α_2)/(α ε)), 0} satisfies the inequality (58). Now, we show that ∃ε_4 ∈ (0, ε_3) such that τ_ε < τ_D, ∀ε ∈ (0, ε_4). To obtain a contradiction, suppose that there exists a bounded subset B_x × B_z ⊂ R^n × R^m, and a D ∈ (0, ∞), such that ∀ε_4 ∈ (0, ε_3), ∃ε ∈ (0, ε_4) such that τ_ε ≥ τ_D. We estimate the difference, where B_{v+f,K} is a uniform upper bound on the norm of the integrand inside the compact subset K, whose existence is guaranteed by Assumption 2.1. Now, observe that lim_{ε→0} τ_ε ε = 0, and so ∀D ∈ (0, ∞), ∃ε_4 ∈ (0, ε_3) such that B_{v+f,K} τ_ε ε ≤ D/2, ∀ε ∈ (0, ε_4). Hence, we have that ∀ε ∈ (0, ε_4), ‖x(τ_D) − x̄(τ_D)‖ ≤ D/2, which contradicts the definition of τ_D. Accordingly, the claim holds for all bounded subsets. Next, we show that ∃ε_5 ∈ (0, ε_4) such that t_f/ε² < τ_D, ∀ε ∈ (0, ε_5). To obtain a contradiction, suppose that there exists a bounded subset B_x × B_z ⊂ R^n × R^m, a t_f ∈ (0, ∞), and a D ∈ (0, ∞), such that the opposite holds. Since lim_{ε→0} τ_ε ε = 0, we conclude that when τ < τ_ε, the difference ‖x(τ) − x̄(τ)‖ can be made arbitrarily small by choosing ε small enough. Second, for τ > τ_ε, we have an estimate with a term B_{v+f,K} τ_D ε³, which leads to the corresponding bound. We proceed to estimate I_1, where I_i for i ∈ {2, 3, 4, 5} are given by the corresponding integrals (64). Observe that v_i(x, 0, s) = f̄_i(x, s), and so the stated identity holds. We estimate each of the integrals above, starting with I_2, I_4 and I_5, where L_{v_1,K}, L_{v_2,K}, L_{f_2,K} > 0 are Lipschitz constants. Next, we estimate I_3. We proceed by dividing the interval I = [τ_ε, τ] into sub-intervals of length T and a leftover piece, where T_i = τ_ε + iT and k(ε) is the unique integer such that k(ε)T ≤ τ < k(ε)T + T. Then, we split I_3 into a sum of sub-integrals. The part of the integral on the leftover piece can be bounded independently of ε. Next, we employ Hadamard's lemma, where the matrix-valued map F_1 is given accordingly. Through adding and subtracting a term, we may write the sum in terms of ∆_i[F_1]. Next, since the matrix-valued map F_1 is periodic with zero average over its third argument when the other arguments are fixed, the corresponding identity holds for any fixed w. The fundamental theorem of calculus and integration by parts then yield the required expressions. The boundary term coming out of the integration by parts vanishes because the right factor vanishes at s = T_{i−1} and the left factor vanishes at s = T_i, leaving only the integral terms. Using Lipschitz continuity and boundedness on compact subsets, it is not hard to see that the integrands are bounded for some Lipschitz constant L_K, and consequently the integral term I_3 satisfies the corresponding bound. Combining (59), (61), (68), (70), (69), and (91), in addition to the fact that τ_ε < τ_D, ∀ε ∈ (0, ε_4), we can show that the claimed estimate holds. Now, remember that in order to obtain a contradiction we assumed that τ_D ≤ t_f/ε², and so we will have (94), where the function δ(ε) is given by δ(ε) = M_{K,1} ε + M_{K,2} τ_ε ε + M_{K,3} t_f ε + M_{K,4} t_f ε^{1/2} | 2022-04-01T01:15:35.048Z | 2022-03-31T00:00:00.000 | {
"year": 2022,
"sha1": "2381288ade62ab3c775a002ddddc7c9205309518",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2381288ade62ab3c775a002ddddc7c9205309518",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
270418741 | pes2o/s2orc | v3-fos-license | Analysis of e-Learning in Undergraduate Medical Education During Pandemic: Future Perspectives For Post-Covid Era
Background: The global Covid-19 pandemic has had a profound effect on medical education. Social distancing led to the implementation and development of e-learning, especially in the medical education field. E-learning is an innovative learning method that uses technology to let students learn flexibly, because learning materials are visualized in various formats that students can easily access. Aims: This article aims to help researchers map and visualize research data related to the use of e-learning for effective medical education in the future. Methods: This article uses the bibliometric analysis method, a quantitative method for analyzing bibliographic data of published articles from reputable sources using the VOSviewer app. We analyzed 949 high-quality research articles from the Scopus database on August 14, 2022. Results: Publications related to the implementation of e-learning in medical education during the Covid-19 pandemic increased from 2020 to 2021 and decreased in 2022. Research that focuses on developing e-learning methods for undergraduate medical students needs to be conducted in the future as part of academic development for the medical education system. Conclusion: The medical community needs to continue paying attention to research on the implementation of e-learning in medical education, as related research began to decline in 2022. E-learning is also beneficial for students in terms of knowledge, skills, and attitudes. Even after the pandemic, e-learning is expected to remain a beneficial mode of study.
PRACTICE POINTS
• The medical community needs to continue paying attention to research on the implementation of e-learning in medical education, as related research began to decline in 2022.
• Even after the pandemic, e-learning is expected to remain a beneficial mode of study.
• Based on our bibliometric analysis, researchers in the medical education field need to focus on how to maximize the benefits of e-learning methods in the post-Covid era, because e-learning has the potential to support better medical education in the future.
• E-learning is beneficial for students in terms of knowledge, skills, and attitudes.
INTRODUCTION
The Covid-19 pandemic that has hit the whole world has hampered the education sector.1,2 The direct impact on the medical education sector is distance learning for medical students.3 This condition raises public concerns that medical science might not be conveyed properly to medical students, because medical learning requires not only materials and books but also direct guidance in examination skills from doctors and lecturers in the medical faculty.4 Medical students must have sufficient skills and knowledge to become competent in the medical field.5 Thus, the medical learning system during the Covid-19 pandemic has become a widely studied topic over the last three years.
Several studies from journals, reviews, and articles show that the effectiveness of medical education affects the national health level.6,7 Furthermore, doctors are on the frontline of national health services, so medical students with good medical knowledge will become expert doctors in the future.8 Due to that, we must ensure that learning methods in medicine during a pandemic can deliver medical knowledge to students with quality unchanged compared to before the pandemic, so that the pandemic does not hinder the course of medical education. Therefore, a support system is needed to facilitate medical education during the Covid-19 pandemic.9
20One of the e-learning technologies that is currently developing is virtual reality.This technology has begun to be widely used since the pandemic.In terms of knowledge and analyse large amounts of scientific data.This article discusses the trend research, journal publication, potential topics in the future of e-learning in medical education journal publications.
In addition, the article aims to help researchers map and visualize research data related to the use of e-learning for effective medical education in the future. Mapping and visualising the research data is important: data visualization helps to tell stories by curating data into a form that is easier to understand, highlighting the trends and outliers. A good visualization tells a story, removing the noise from the data and highlighting useful information. It is a technique that helps to communicate insights from data by presenting them in a way that is easy to understand, highlighting the important trends and patterns while removing irrelevant information from the research data. Bibliometric analysis can tell a story by presenting the data clearly and concisely, making it easier for the audience to understand the message behind the research data.31
METHODS
Initially, we searched the Scopus database, and the assessment of the obtained documents was divided into three phases. Based on Figure 1, in the first phase we determined the search criteria to identify records in the Scopus database. In the second phase, documents were exported to VOSviewer software for bibliometric analysis of journals, authors, keywords, and countries. In the third phase, we analyzed the data to identify the main themes discussed in the research on e-learning in medical education during the Covid-19 pandemic.

In terms of knowledge and skills, virtual reality (VR) is a part of e-learning that has the potential to revolutionize the way students learn.21 VR technology allows students to enter computer-generated 3D worlds and interact with them using special devices, including VR headsets. This immersive technology is increasingly popular and has a lot of potential to increase learning effectiveness by creating interactive learning modules that combine 3D graphics with real-world experiences.22 Virtual learning has become an important part of medical education, especially since the pandemic.23 By using VR, anatomy teaching can be done with virtual cadavers.24 In addition, virtual reality can be used by students to practice cardiopulmonary resuscitation skills even though they are far from the hospital, and to get input from instructors.25 Research shows that VR simulations can be a useful additional learning tool for medical students.26 In terms of attitudes, studies have shown that student motivation and outcomes are significantly affected by e-learning systems.27 However, learning using e-learning has some positive impacts, such as accessibility, flexibility, and easier feedback.28 Some benefits of e-learning in medical education are shortening learning time, making medical students' study costs more economical, and facilitating independent interaction between students and materials; hence, students have more control over their learning period.29 In the context of medical education, learning with e-learning can help students to improve their medical knowledge and skills, as well as prepare them for future challenges.
However, we recognise the need for more research on e-learning in medical education.30 Due to that, we conducted a bibliometric analysis to explore and analyse large amounts of scientific data.
The Visualization of the Keywords Related to E-learning in Medical Education from Scopus Database Between 2020-2022
VOSviewer shows the network visualisation of keywords related to e-learning in medical education. Each topic is symbolised as a circle; the bigger the circle, the more important the keyword. The frequency of occurrence determines the size of the keywords. Technically, the keyword "Covid-19" was the top research topic, since it has the highest frequency of occurrence, followed by the keyword "e-learning". Each circle has a different colour belonging to its cluster. Based on VOSviewer, there are 6 clusters of research topic trends related to e-learning in the medical education field. The author keywords of the papers are clustered into six groups. Of the 1586 keywords, 89 keywords met the threshold. The keyword "Covid-19" appeared most often, with 381 (20%) co-occurrences, followed by e-learning (181, 9.7%), medical education (181, 7.8%), education (90, 4.8%), and pandemic (68, 3.6%).
Overlay Visualization of the Keywords using VOSviewer
The overlay visualization illustrates the novelty of research on e-learning in medical education. VOSviewer maps show that publications in Scopus related to the use of e-learning in medical education began to grow rapidly in 2020. VOSviewer depicts data that appeared at the beginning of 2020 in dark green, whereas data that appeared in 2021 is bright yellow. Only a small amount of data is in the light colour. This shows that research related to the potential of e-learning in medical education is being conducted less and less as the Covid-19 pandemic ends. In fact, research related to e-learning must be developed continuously for the development of medical education.
Bibliometrics is an analytical method used to identify trends in scientific publications and to encourage the creation of systematic research on a topic. Bibliometric analysis provides new perspectives for future research and helps guarantee the quality of the information produced. Bibliometric analysis is an important tool for researchers and educators in relevant fields. It can help identify the most impactful papers within a specific field of research, and explore the impact of a set of researchers or of a particular paper. We collected the documents in this study from the Scopus data platform.
Scopus is an ideal database for bibliometric analysis, covering information published in indexed journals in several fields of knowledge, and it has been widely used in bibliometric analyses. We conducted this search on August 14, 2022, by inputting the keywords "e-learning" AND "medical" AND "covid". All collected data were exported as a "Scopus delimiting" file containing "Full cited references", used for co-authorship and co-occurrence analysis.
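The co-occurrence counting that underlies such keyword maps can be sketched as follows; the keyword lists below are placeholders rather than the exported Scopus records, and VOSviewer applies its own normalisation and clustering on top of raw counts like these.

```python
from collections import Counter
from itertools import combinations

# Each record is the keyword list of one publication (placeholder data)
records = [
    ["covid-19", "e-learning", "medical education"],
    ["covid-19", "medical education", "pandemic"],
    ["e-learning", "education", "covid-19"],
]

occurrence = Counter(kw for rec in records for kw in set(rec))
co_occurrence = Counter()
for rec in records:
    for a, b in combinations(sorted(set(rec)), 2):
        co_occurrence[(a, b)] += 1

# Keep only keywords whose occurrence count meets a minimum threshold
threshold = 2
kept = {kw for kw, n in occurrence.items() if n >= threshold}
print(kept)
print(co_occurrence.most_common(3))
```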
Thus, generating a network map of authors, countries, and keywords is possible. In addition, a network map of scientific journals is generated from the citation analysis. VOSviewer software (version 1.6.18, Leiden University, Leiden, Netherlands) is used for mapping and visualising bibliometric networks. The output results are displayed as interlocking circles to show the relationship between the bibliometric data. The distance between two or more circles indicates the strength of the relationship between the terms represented. Different colours represent different term groups. In addition, the size of the circle indicates the frequency of occurrence of the term.
The Trend Research of E-Learning in Medical Education during The Covid-19 Pandemic
Generally, research on e-learning in medical education during the Covid-19 pandemic in the Scopus database first began to grow in 2020 (a total of 260 publications), and the trend continued to increase in the number of publications until 2021 (460 publications). The increase in journal publications during 2020-2021 was 76.9%, and the decrease in publications from 2021 to 2022 was 52%. The medical community needs to continue paying attention to research on the implementation of e-learning in medical education as related research begins to decline.
Countries with Most Publications in Scopus Related to E-learning in Medical Education
Over the past three years, distance learning in clinical medical education has been carried out due to the COVID-19 pandemic.32,33,34 Medical education is important because it teaches knowledge in the medical field and patient examination skills. Distance learning in medical education raises concerns about whether medical science can be absorbed and understood well by students or not.35 Furthermore, there are numerous difficulties that both students and teachers encounter when participating in online medical education during the Covid-19 pandemic.36,37,38 In response to this problem, e-learning has emerged as a solution for medical learning.39 Electronic learning (e-learning) is a type of learning process that uses internet technology to facilitate, deliver, maximise, and enable distance learning.40 E-learning helps to form a long-lasting digital transformation in medical education that fosters openness and communication between medical students and their instructors.41,42 Over time, there has been considerable growth in research related to e-learning in medical education, including an increased variety of articles, reviews, and journals. In response to that, we conducted a bibliometric analysis related to e-learning in medical education during the Covid-19 pandemic, because the implementation of e-learning during the pandemic can be a source of evaluation for the medical learning system and of the development of better medical education in the future.43,44
This can be attributed to the decrease in the medical community's attention to the importance of electronic learning as the support system for the development of medical education.In fact, research related to e-learning must be developed continuously for the development of medical education.However, we still get some research hotspots through the cooccurrence clustering of keywords and gain knowledge through network visualisation maps from VOSviewer.We found that the keywords Covid-19, e-learning, and education were visualised with large circle sizes and different colours.This suggests that all three topics are important and correlated with our bibliometric analysis topic and the size of the circle displayed indicates the frequency of occurrence.The topics belong to different clusters because they have different colours.
To deepen our knowledge of journals in the field of e-learning, we conducted a quantitative analysis on six keyword co-occurrence clusters. The keyword "Covid-19" appeared most often, with 20% of co-occurrences, followed by e-learning, medical education, education, and pandemic. The keyword "Covid-19" has become dominant because the disease has been present for the past three years and requires deeper research through journal publications. Furthermore, the keyword medical education also often appears in this topic, due to the importance of medical education in developing competent doctors.45 For future research, researchers are encouraged to prioritize topics drawn from keywords with a high frequency of co-occurrence, because the higher the frequency of keyword occurrence, the more that keyword is a main topic of discussion related to important current issues.

Teachers and students can enter the virtual world with a virtual anatomy cadaver.49 In terms of skills, the teacher can model the procedure for suturing a wound virtually, and the students then follow it. The use of e-learning technology in the form of virtual reality can help educators teach various skills using tools and mannequins in the virtual world.50 Apart from that, cardiopulmonary resuscitation skills can also be learned using virtual reality.25 Teachers can practice cardiopulmonary resuscitation methods using certain scenarios. Emergency incident scenarios can be created in the virtual world so that students are able to practice resuscitation skills as in the real world.48 In terms of attitudes, students who study via e-learning show a positive change in learning attitudes. Learning attitudes are the driving force of learning behavior and of the motives for seeking knowledge in students.
Student learning always involves their own cognitive abilities, as well as other abilities such as motivation, study habits, self-mastery and control, empathy, and several social skills.27 A student is said to have good study habits if the student has a good way of learning and a learning atmosphere that supports learning. E-learning is here to create a good and interesting learning atmosphere for medical students, thereby increasing student motivation to study medical material in more depth. The material contained in e-learning can be accessed anytime and anywhere, thereby increasing student activity in learning.27 This bibliometric research directs teachers and researchers to study e-learning more comprehensively, so that in the future learning with an electronic learning system can advance the medical learning system. However, there are consequences of implementing e-learning as a modality in medical learning. The Faculty of Medicine needs to conduct training on the use of online learning tools as well as student orientation.51 Spending money to purchase premium software packages is advised, since they might help users get around several restrictions. Medical schools are advised to purchase licenses for premium software to detect student cheating and plagiarism.52 For the post-Covid era, our bibliometric research reveals that the usage of e-learning for undergraduate medical education needs further research, since e-learning has the potential to improve and enrich the quality of the medical education system. E-learning is a learning method that utilizes online electronic devices. The emergency implementation of e-learning during a pandemic illustrates that e-learning can facilitate the learning process for medical students; we can even make use of it after the pandemic is over. E-learning provides a flexible learning experience for students. Students are able to listen to lectures repeatedly via recordings, enabling more comprehensive learning. Through the development of electronic learning modalities, for example "virtual reality" and the "metaverse", learning does not have to be done in the classroom but can be done remotely.46 Examinations do not have to use paper, and assessments are carried out quickly by an electronic system.47 Furthermore, e-learning offers several benefits, such as easy access to educational material, the ability to study in a preferred environment, and the ability to study while maintaining social distancing and continuing medical duty. E-learning can also be used to simulate emergency situations and provide direct experience to undergraduate medical students.48 This can help develop critical teamwork skills among them. E-learning is a promising tool for the future of medical education. It provides better access to learning resources, is beneficial for reaching dispersed audiences, and can be used to simulate emergency situations and provide direct experience to the medical students.
E-learning has a major impact on medical students in terms of knowledge, skills, and attitudes. One benefit of e-learning in the field of medical education is the use of virtual reality methods. Virtual reality is a part of e-learning. The virtual reality setup consists of a virtual reality headset and a sensor for the user's head position. This sensor can align the user's head position with the virtual world. This technology allows users to enter the virtual world and carry out medical training and education activities in cyberspace.49 From a scientific perspective, virtual reality technology can be used to deliver material on anatomy, surgery, and suturing techniques.
CONCLUSIONS
This article uses quantitative data to reinforce this point and to call on more researchers to pay attention to aspects of e-learning in medical education, as this is a field related to the development of future health systems and healthcare providers. It is important to recognize that medical learning methods are essential for producing competent graduates in the field of medicine.
Through bibliometric analysis and visualization methods, we came to the following conclusions: (a) the medical community needs to continue paying attention to research on the implementation of e-learning in medical education, as related research has been declining since 2022; (b) related research on the use of e-learning during the pandemic shows that it is also beneficial after the pandemic; (c) the United States of America still dominates this field of research, so other countries need to do more research on this topic; (d) there are still several potential topics for research related to the e-learning field.
RECOMMENDATIONS
Based on our bibliometric analysis, researchers in the medical education field need to focus on how to maximize the benefits of e-learning methods in medical education in the post-Covid era, because e-learning methods have the potential to support the development of better medical education in the future.
Figure 1. The Methodological Phase, Steps, and Analysis Criteria Applied to This Work.
(a) Cluster 1 (red) involved keywords related to medical education during the pandemic; (b) Cluster 2 (green) involved keywords related to methods of e-learning in medical education; (c) Cluster 3 (blue) involved keywords related to branches of medical education that were affected during the Covid-19 pandemic; (d) Cluster 4 (yellow) involved keywords related to the media used in e-learning during the Covid-19 pandemic; (e) Cluster
Figure 2. Network visualization of keywords in VOSviewer (A); overlay visualization of keywords using VOSviewer (B); publication trend of e-learning in medical education from the Scopus database (C); countries with most publications related to e-learning in medical education in the Scopus database between 2020-2022 (D).
The article "...Era" written by Mukhtar et al. (2020)53 has been cited 279 times, representing the most cited article overall in this field; it was published in 2020 by the Pakistan Journal of Medical Science. Meanwhile, the article entitled "Medical and Surgical Education Challenges and innovations in the COVID-19 era: A systematic review", published by the In Vivo journal, is in second place. The most cited articles were mostly published in 2020. | 2024-06-13T15:23:43.437Z | 2024-06-11T00:00:00.000 | {
"year": 2024,
"sha1": "27093a93a77c13c7631bea483cfdf813bacec8ad",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.22146/jpki.88670",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "498ab6a25bba9be30906d98aff1e4daf50d47962",
"s2fieldsofstudy": [
"Medicine",
"Education",
"Computer Science"
],
"extfieldsofstudy": []
} |
15521369 | pes2o/s2orc | v3-fos-license | Oxazin-5-Ones as a Novel Class of Penicillin Binding Protein Inhibitors: Design, Synthesis and Structure Activity Relationship
Penicillin binding proteins (PBPs) are normal constituents of bacterial cells which are absent in mammalian cells. The theoretical binding modes of known oxazin-5-ones toward the protein were used as a guide to synthesise new inhibitors. Structural studies of protein-ligand complexes revealed that conformational discrepancies of the derivatives in the protein's binding site gave rise to the variation in their inhibition constants, which ranged from 68.58 μM to 2.04 mM. Biological assay results further confirmed the antibiotic potencies of the studied compounds. Although the outcome of biological screening does not parallel the computational predictions, the results obtained from both methods suggest that the oxazin-5-one derivatives are potential PBP inhibitors, and hence interesting antibiotic lead agents.
Nowadays, computational methods are routinely employed in drug development processes due to their reliability, time-effectiveness, and cost-effectiveness [15][16][17]. These methods involve calculation of pharmacokinetic parameters of chemical compounds using molecular descriptors, pharmacophore screening, docking, and binding free energy calculations for a given interaction. Information derived from the binding mode of a docked compound has been employed as a guide in structural optimization processes [18][19].
In the present work, we used the binding modes predicted from docking calculations of two parent molecules to guide the synthesis of new oxazin-5-ones via palladium-catalyzed coupling. The docking results suggested that replacing the atom at the 6-position with lipophilic and/or extended hydrophilic moieties could lead to improved potency due to the presence of the TRP233 and SER62 residues.
Docking Calculations
To probe the C6 position, compounds 3 to 13 were synthesized (Figs 2 and 3) and docked into the PBP binding site. It was observed that compounds 4, 9, 7 and 12 inhibited the activity of the studied target (K i values ranging from 96.31 to 68.58 μM) (Table 1) more strongly than compounds 1 and 2, but their relatively poor ligand efficiencies (21-23 kcal/mol per non-H atom) could pose a challenge [23]. Activities were greatly reduced in the other derivatives, with compound 10 having a K i as poor as 2.04 mM.
Assessment of Oral Bioavailability Property
Criteria proposed by Lipinski in his popular "rule of five" (ro5), alongside the total polar surface area (TPSA), were used to assess the oral bioavailability potential of the newly synthesized oxazin-5-ones [24]. TPSA is frequently used in drug design as a surrogate property for cell permeability, with a rule of thumb that a molecule with a TPSA of less than 140 Å2 should be able to permeate the cell. TPSA has also been used as a surrogate for penetration of the blood-brain barrier (BBB). Van de et al. [25] demonstrated that for a drug molecule to reach the central nervous system, the TPSA cut-off should be 90 Å2. This implies that all the compounds can penetrate the blood-brain barrier and hence could be used in treating infections of brain cells.
According to Lipinski's ro5, derived from the 90th percentile of drug candidates that reached phase II clinical trials, to be drug-like a candidate should have lipophilicity (log P) ≤ 5, molecular weight (MW) ≤ 500, number of hydrogen bond acceptors (HBA) ≤ 10, and number of hydrogen bond donors (HBD) ≤ 5. The rule holds that a drug candidate which violates more than one of these properties will have bioavailability problems. Table 2 shows that all the compounds are drug-like with respect to the ro5. Veber et al. [26] observed that the number of rotatable bonds (NRB) experimentally influences bioavailability in rats; therefore, NRB ≤ 10 has been recommended for good oral bioavailability. Again, all the compounds satisfied the NRB criterion for drug-likeness.
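As a rough illustration of how such filters can be applied in practice (a hedged sketch, not the screening workflow used in this work), the snippet below computes the Lipinski, Veber and TPSA criteria with RDKit; the SMILES string is a generic placeholder rather than one of the synthesized oxazin-5-ones.

# Illustrative drug-likeness filter (assumes RDKit is installed); the SMILES
# below is a placeholder molecule, not one of the oxazin-5-one derivatives.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def drug_likeness_report(smiles: str) -> dict:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("Could not parse SMILES")
    props = {
        "MW": Descriptors.MolWt(mol),               # rule of five: <= 500
        "logP": Descriptors.MolLogP(mol),           # rule of five: <= 5
        "HBD": Lipinski.NumHDonors(mol),            # rule of five: <= 5
        "HBA": Lipinski.NumHAcceptors(mol),         # rule of five: <= 10
        "TPSA": Descriptors.TPSA(mol),              # < 140 A^2 permeation, < 90 A^2 BBB
        "NRB": Descriptors.NumRotatableBonds(mol),  # Veber: <= 10
    }
    violations = sum([
        props["MW"] > 500, props["logP"] > 5,
        props["HBD"] > 5, props["HBA"] > 10,
    ])
    props["ro5_violations"] = violations
    props["drug_like"] = violations <= 1 and props["NRB"] <= 10
    return props

if __name__ == "__main__":
    # Aspirin used only as a placeholder input for demonstration.
    print(drug_likeness_report("CC(=O)Oc1ccccc1C(=O)O"))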
Binding Mode Prediction
The docked poses of all the derivatives in the PBP binding site are shown in Part 4a in S1 Fig. It was observed that the compounds adopted varying preferential conformations within the PBP binding site. Their styryl and phenylethynyl moieties were accommodated within the protein groove surrounded by the PHE120, VAL302, GLN303, and LEU214 residues. Double bonds are longer than triple bonds; therefore, compounds 3 and 8 docked deeper into the PBP hydrophobic pocket and hence made stronger interactions (546.24 and 893.70 μM, respectively) than compounds 5 and 10 (931.82 μM and 2.04 mM, respectively). In fact, it appeared that the ability to dock deep into the PBP binding cavity is a necessary criterion for interaction with the protein, because the better inhibitory activities of compounds 6 and 11 relative to compounds 5 and 10 could be attributed to the length of the hexynyl substituent of compounds 6 and 11 (Part 4d in S1 Fig). In general, the derivatives demonstrated no significantly improved affinity for the studied target over the known oxazines (compounds 1 and 2).
Biological Screening
All the derivatives were screened in vitro against selected bacteria following the method of Bauer et al. [27], and the results are shown in Table 3. In general, the synthesized derivatives manifested appreciable activity, but not in agreement with the docking calculation results. Perhaps PBP was not the drug target inhibited by the derivatives in the whole-cell assays, hence the variation in the results. Apart from compounds 3 and 4, the synthesized derivatives exhibited activity against both Gram-positive and Gram-negative bacteria. With the exception of compounds 7, 12 and 13, the remaining compounds appeared to be generally more active against Gram-positive than Gram-negative bacteria. The compounds (8-13) derived from 6-chloro-5H-naphtho[2,1-b]pyrido[3,2-e][1,4]oxazin-5-one, whose structure contains a nitrogen hetero-atom at position 10 of the molecule, exhibited enhanced activity compared to those derived from the 6-chloro-5H-benzo[a]phenoxazine substrate, which has no N-hetero-atom at position 10. Compounds 12 and 13 in particular seem to have broad activity against Gram-positive and Gram-negative bacteria, and this was attributed partly to the heterocyclic thiophenyl and furanyl moieties contained in the molecules. In addition, the MICs of the compounds were higher than those of the reference drugs. However, the MICs of compounds 5 and 10 are close to that of tetracycline for B. cereus and S. aureus, respectively.
It can be observed in this study that the results of the biological assay and the in silico screening are not parallel. This is often the case when comparing the results of in silico screening, which focuses on a particular enzyme, with whole-organism in vitro testing. The reason could be that the enzyme used in the in silico study might not be involved in the in vitro mechanism of action of the drug candidates [28].
Experimental Section
General Information
All chemicals were purchased from Aldrich Chemical Company UK and were used without further purification. Unless otherwise stated, all compounds were synthesized and characterized in the School of Chemistry, Cardiff University, UK. Melting points were determined with a Fischer-Johns apparatus. 1H and 13C NMR data were recorded on Bruker DPX 400 MHz spectrometers with TMS as internal standard. All chemical shifts are reported in ppm (δ) and coupling constants (J) in Hz. Multiplicity is indicated using the following abbreviations: br, broad; s, singlet; d, doublet; t, triplet; dd, doublet of doublets; and m, multiplet.
Molecular Modeling
The X-ray crystal structure of PBP (DDTP) with its co-crystallized inhibitor was retrieved from the Protein Data Bank (PDB code 1CEF) [20]. The Molecular Operating Environment (MOE) was used to treat the complex dimers as described in our earlier work [29] and to generate the three-dimensional structures of the benzophenoxazines.
General Procedure I (Suzuki Cross-Coupling Reactions). To an oven-dried 10 mL RB flask containing 2 mL of CH3CN and 1 mL of water were added RX (1 mmol), RB(OH)2 (1.2 mmol) and K3PO4 (588 mg, 3 mmol), and the reaction mixture was gradually warmed to 40 °C while stirring under a nitrogen atmosphere. Pd(OAc)2 (8.92 mg, 4 mol%) and X-Phos (32.5 mg, 7 mol%) were added and the reaction vessel was corked with a rubber septum. The entire reaction mixture was heated at 80 °C for 5-8 h and then cooled to room temperature. The solvent was evaporated in vacuo and the crude product was extracted from water with DCM (10 mL x 4). The combined organic extracts were dried with MgSO4 and concentrated in vacuo. The crude product was purified by flash column chromatography on silica gel.
General Procedure II (Sonogashira Cross-Coupling Reactions). Acetonitrile (3 mL) was degassed for 0.5 h before injection into an oven-dried 10 mL RB flask fitted with a rubber septum and already charged with Pd(OAc)2 (8.9 mg, 4 mol%), X-Phos (32.5 mg, 7 mol%), RX (1 mmol) and K3PO4 (588 mg, 3 mmol), under an atmosphere of nitrogen. The reaction mixture was stirred and warmed to 50 °C, during which time the 1-alkyne (1.5 mmol) was gradually injected via syringe. The reaction temperature was maintained for 0.5 h before being increased to 80 °C. Stirring was continued for 5-8 h, then the mixture was cooled to room temperature after reaction completion as monitored by TLC. Water (10 mL) was added and the product was extracted with dichloromethane (4 x 10 mL). The combined organic extracts were dried (MgSO4) and concentrated in vacuo. The crude product was separated by flash chromatography on silica gel using petroleum ether-ethyl acetate mixtures.
General Procedure III (Stille Cross-Coupling Reactions). An oven-dried 10 mL RB flask was charged with Pd(OAc)2 (8.92 mg, 4 mol%) and X-Phos (32.5 mg, 7 mol%) and covered with a rubber septum. The vessel was evacuated and back-filled with N2 three times before CH3CN (2 mL) and H2O (1 mL) (both solvents degassed for 30 min) were injected, and the reaction mixture was warmed to 50 °C within 10 min. The rubber septum was quickly removed to add the chlorophenothiazine (1 mmol) and K3PO4 (318 mg, 1.5 mmol), and replaced before injecting tributylthienylstannane or tributylfuranylstannane (1.2 mmol). The temperature was gradually increased to and maintained at 80 °C. The reaction was terminated after 5 h and the crude product was extracted from water (10 mL) four times with DCM. The combined organic extracts were dried with MgSO4 and concentrated in vacuo. The crude product was purified by flash chromatography on silica gel using petroleum ether-ethyl acetate as eluent.
General Antimicrobial Sensitivity Testing of Compounds
A pure culture of human pathogenic microbes was obtained from the culture collection center, Bishop Shahanan Hospital, Nsukka, Enugu State. The agar cup diffusion method was applied to determine the sensitivity of the compounds against bacteria using Mueller Hinton Agar (MHA). The MHA plates were inoculated with a 1 x 10^4 CFU culture of the test organism. Cups were then made in each sector after dividing the plate into six labeled segments. Using a sterile pipette, each cup was filled with four drops of compound solution (0.1 mg/mL). A pre-diffusion time of 30 min was allowed before all the plates were incubated at 37 °C for 24 h for bacteria. After incubation, the resulting inhibition zone diameters (IZD) were measured and the results recorded after subtracting the diameter of the cork borer. The cork borer used to make the cups was 8 mm in diameter. The procedure was repeated for tetracycline (the standard for bacteria) and DMSO (solvent control).
Minimum Inhibitory Concentration (MIC) Testing
The method used to determine the MIC was the same as for the general sensitivity testing, except that serial dilutions of the 0.1 mg/mL DMSO solution of each sample were carried out to give 0.05, 0.025, 0.0125 and 0.00625 mg/mL solutions. Four drops of each dilution were added to the corresponding cup previously cut in the Mueller Hinton Agar (MHA) plate. The plates were incubated at 37 °C for 24 h for bacteria and 48 h for fungi. The diameter of the zone of inhibition was measured and the diameter of the borer subtracted to give the inhibition zone diameter (IZD). A graph of IZD^2 against the log of concentration was plotted for each plate containing a specific compound and microorganism. The antilog of the intercept on the x-axis gives the MIC. The procedure was repeated for tetracycline.
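The MIC extrapolation described above is a simple linear regression; the sketch below reproduces the calculation numerically, fitting IZD^2 against log10(concentration) and taking the antilog of the x-intercept. The inhibition-zone values are invented purely to show the arithmetic, not measured values from this study.

# Illustrative MIC estimation from agar-diffusion data: fit IZD^2 vs log10(c)
# and take the antilog of the x-intercept where IZD^2 extrapolates to zero.
import numpy as np

def estimate_mic(concentrations_mg_ml, izd_mm):
    """Return the MIC (mg/mL) from serial-dilution inhibition zone diameters."""
    x = np.log10(np.asarray(concentrations_mg_ml, dtype=float))
    y = np.asarray(izd_mm, dtype=float) ** 2
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line y = slope*x + intercept
    x_intercept = -intercept / slope         # where IZD^2 reaches zero
    return 10 ** x_intercept                 # antilog gives the MIC

if __name__ == "__main__":
    conc = [0.1, 0.05, 0.025, 0.0125, 0.00625]  # mg/mL serial dilutions, as in the protocol
    izd = [14.0, 11.5, 9.0, 6.5, 4.0]           # hypothetical zone diameters, mm
    print(f"Estimated MIC: {estimate_mic(conc, izd):.4f} mg/mL")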
Conclusion
The ease of accessing PBPs from the periplasm and their absence in mammalian cells make them targets of choice in the search for antibiotics. The history of oxazines as compounds with chemotherapeutic potential informed their use as parent molecules in the current study. The binding modes of two known oxazin-5-ones were used to guide the synthesis of derivatives. Evaluation of their SAR revealed that the analogues adopted a unique preferential configuration within the binding site cavity of the protein, different from that of their parent molecules. This may account for the variation observed in their degree of PBP inhibition. Four of the analogues exhibited improved potencies over the parent molecules and were also drug-like according to Lipinski's ro5. The biological assay results confirmed the antibiotic potencies of the derivatives, but were not in tandem with the computational predictions. Medicinal chemists could take advantage of the ligand interaction motifs identified in this study in rational optimization by chemical modification of the compounds.
"year": 2016,
"sha1": "1c3ac3467a2664edd47b75dd939c85c483bf665d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0163467&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c3ac3467a2664edd47b75dd939c85c483bf665d",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Rheological and Tribological Properties of Lithium Grease and Polyurea Grease with Different Consistencies
The rheological properties of lithium grease and polyurea grease at different temperatures and consistencies were determined with a rotary rheometer. The plateau moduli of the greases were calculated, and the mechanism by which consistency and temperature influence the rheological properties of the greases is explained. The tribological and wear properties of the two greases were measured with a high-temperature friction and wear tester, and the friction and wear mechanisms are discussed in relation to the rheological properties of the greases. The results show that the plateau modulus G N can be used to assess the structural strength of different greases, but it can only assess the degree of entanglement of the same grease. The higher the consistency of the grease, the larger the apparent viscosity, structural strength, and yield stress. The rheological properties of PAO-polyurea grease are greatly affected by temperature, but its structural strength is better than that of mineral oil-lithium grease. The consistency of mineral oil-lithium grease is expected to affect the friction coefficient and wear through its influence on the grease's structural strength and film-forming ability. For PAO-polyurea grease, the consistency within a certain range has little effect on the friction coefficient and wear resistance.
Introduction
The use of lubricating grease can reduce friction and wear on relatively moving surfaces. Consistency, a key parameter of any grease, is affected by thickener concentration, manufacturing processes and other factors. Grease is widely used in the lubrication of 90% of ball bearings due to its advantages of sealing gaps and preventing pollutants from entering equipment [1,2]. Lithium grease and polyurea grease are the two most widely used types [3], with good mechanical stability. Lubrication is critical for reducing frictional losses in electric motors and transmissions, and as the trend toward electric vehicles advances, the performance requirements for greases have become more complex [4,5]. However, the mechanisms by which consistency influences the rheological, friction and wear properties of lubricating greases are yet to be elucidated. It is therefore necessary to understand the rheological, tribological and wear properties of greases.
Lubricating grease is a colloidal dispersion system in which the thickener fiber traps the base oil [6][7][8]. Through studies of the rheological properties of lithium grease, many factors, such as additives and thickener type and concentration, were found to affect the rheological behavior of grease [7,[9][10][11][12]. On this basis, Mao et al. [13], Couronne et al. [14], and Sánchez et al. [15] studied the relationship between the rheological properties and the microstructure of lithium grease. Xu et al. [16] evaluated the effects of the amine molecular structure on the rheological properties of polyurea grease and found that the concentration of thickener and the viscosity of the base oil significantly affect the microstructure of grease. Maciej et al. [17] assessed the boundary characteristics of lithium-based grease and polyurea grease on different materials, finding that different surface materials have different adsorption capacities for thickener fibers and form different boundary layers. The microstructure of grease will further affect its tribological properties. Fan et al. [3] compared the tribological properties of lithium grease and polyurea grease and found that the thickener not only determines the physical properties of grease but is also involved in the lubrication process, reducing friction and wear. In steel/steel friction pairs, the lubrication performance of polyurea grease is better than that of lithium grease. Cousseau et al. [18] found that the interaction between the grease thickener and the base oil exerts a significant influence on bearing friction torque. Gonçalves et al. [19] studied the influence of grease thickener content on the friction torque of rolling bearings, finding that an increase in thickener content will reduce the sliding coefficient of friction and the oil-bleeding capacity of the grease. Some studies show that suitable additives can improve the tribological properties of grease [1,20,21]. Li et al. [1] found that adding few-layer graphene (FLG) to lithium grease improves its anti-friction and anti-wear properties. Wu et al. [21] found that CuO nanoparticles can improve the tribological properties of polyurea grease, inhibiting bearing vibration. However, the tribological properties of lubricating grease have rarely been linked to its rheological properties or described systematically.
Lubricating grease tends to separate out the lubricating oil under certain temperatures or shear stresses. The separation of soap and oil directly leads to a change of consistency and loss of grease. Trace oil separation can keep equipment lubricated, but excessive oil separation caused by increasing temperature will both thicken and harden a grease, resulting in thermal aging [22]. The thermal-aging temperatures of lithium grease and polyurea grease are 130 °C and 280 °C, respectively. The high temperatures reached during the running of roller bearings can lead to thermal aging of the grease, which is one of the most important reasons for severe local wear of roller bearings [23]. Pan et al. [24] simulated the static thermal degradation of grease in a drying oven; the results show that lithium grease has better anti-friction and anti-wear properties after heat treatment at 120 °C. Temperature also affects the rheological properties and microstructure of grease [15]; it is, therefore, important to explore the rheological properties of grease at different temperatures.
In the present work, the rheological and tribological properties of lubricating greases were measured. The mechanisms by which consistency and temperature influence the rheological and tribological properties, and the correlation between the rheological and tribological properties, were investigated. The results, based on the rheological and tribological characteristics of grease, will provide a theoretical basis for selecting the most appropriate grease for use under different working conditions.
Materials
Seven types of grease samples were selected as research objects. According to ISO 2137-1985 (E), the cone penetration of a grease is the depth to which a standard cone sinks after falling freely for 5 s into a standard grease cup filled with the tested grease at 25 °C, expressed in units of 0.1 mm. Cone penetration data are used to indicate the consistency of the grease. All grease samples were provided by Sinopec Lubricating Oil Co., Ltd. (Tianjin, China). The viscosity of the base oil is 48 mm2/s at 40 °C. The nomenclature and basic parameters of the seven grease samples are listed in Table 1, and the basic parameters of the base oils are given in Table 2. The selected greases are mainly used for sealed bearings. Within NLGI grades 2 and 3, four cone penetration values were selected as research objects. The lithium greases with different consistencies are named Li-230 to Li-300, and the polyurea greases with different consistencies are named Po-230 to Po-300. The samples are milky white with a hard texture and good stability.
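For orientation, penetration values like those quoted above can be mapped onto NLGI consistency grades. The short sketch below uses the standard published NLGI penetration ranges; these cut-offs are general reference values, not data taken from this work, and the example penetrations are arbitrary.

# Map a cone penetration value (0.1 mm units, per ISO 2137 / ASTM D217)
# onto an NLGI consistency grade using the standard published ranges.
NLGI_RANGES = {
    "0": (355, 385),
    "1": (310, 340),
    "2": (265, 295),
    "3": (220, 250),
    "4": (175, 205),
}

def nlgi_grade(penetration: float) -> str:
    for grade, (lo, hi) in NLGI_RANGES.items():
        if lo <= penetration <= hi:
            return grade
    return "between or outside the tabulated grades"

for p in (230, 245, 270, 290):   # arbitrary example penetrations
    print(p, "->", nlgi_grade(p))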
Microstructure Characterization
The microstructures of the greases were observed by scanning electron microscopy (SEM, Hitachi Regulus 8220, Tokyo, Japan). Before observation, the grease was soaked in petroleum ether, and mixing of the grease and the organic solvent was promoted by ultrasonic dispersion. The mixture was left to stand for 2 h to facilitate the extraction of the base oil from the grease by the petroleum ether. Further separation of the mixture of grease and petroleum ether was carried out with a high-speed centrifuge. The separated thickener was then dissolved in petroleum ether and a suspension was prepared by ultrasonic dispersion; this step was repeated three or four times until the base oil of the grease was completely extracted. The specimen was then coated with gold in preparation for examination by SEM.
Rheological Test
To investigate the effects of temperature, consistency and grease type on the rheological properties, steady-state and dynamic rheological tests of the greases were conducted using a rheometer (MCR302, Anton Paar, Graz, Austria) at 30 °C, 70 °C, and 130 °C. The maximum torque applied by the rheometer was 200 mN·m with a torque resolution of 0.1 nN·m. The steady-state rheological experiments were conducted at a controlled shear strain rate, and the main parameters were the apparent viscosity η ap, the shear stress τ, and the shear strain rate γ̇. Dynamic rheological experiments were conducted under sinusoidal oscillation of the sample within a certain frequency range to study the dynamic effects. A cone plate (CP-25) and a flat plate (PP-25) were selected as rotors. At the beginning of each test, the grease was pre-sheared for 120 s and then allowed to stand for 30 min to eliminate the effect of the damage done to the grease when the rotor was pressed down onto the specimen.
In steady-state tests, the shear strain rate ranged from 0.1 to 1000 s −1 , apparent viscosities η ap at different shear strain rates were obtained, the flow curve was generated, and the yield stress τ y was determined. In dynamic rheological experiments, the strain ranged from 0.01% to 100%, the constant frequency was 10 rad/s, and the linear viscoelastic region of the grease was determined. In the linear viscoelastic state, a frequency-scanning experiment was conducted, and the frequency ranged from 0.1 to 100 rad/s, and the constant strain was 0.1%. All specimens were taken from the same sampling position in the same batch, and each was tested at least three times to reduce the influence of errors in the testing process.
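One common way to post-process a flow curve like the one described here is to fit the Herschel-Bulkley model and read off a yield stress. The sketch below does this with SciPy on synthetic data; it is an illustrative alternative procedure under assumed values, not the exact method used by the authors, who determined the yield stress directly from the stress plateau.

# Illustrative Herschel-Bulkley fit, tau = tau_y + K * gamma_dot**n, applied to a
# synthetic flow curve standing in for measured shear stress vs shear strain rate.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_y, K, n):
    return tau_y + K * gamma_dot ** n

# Synthetic flow curve over 0.1 ... 1000 1/s, the range used in the tests.
gamma_dot = np.logspace(-1, 3, 30)
tau = herschel_bulkley(gamma_dot, 200.0, 15.0, 0.45)         # assumed "true" parameters
tau *= 1 + 0.03 * np.random.default_rng(0).standard_normal(tau.size)  # measurement noise

popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[100.0, 1.0, 0.5])
tau_y, K, n = popt
print(f"fitted yield stress = {tau_y:.1f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}")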
Tribological Test
Before the experiment, the film thickness ratio λ was obtained by calculating the ratio of the minimum oil film thickness to the composite surface roughness of the friction pair, as shown in Equation (1):

λ = h_min / (σ1^2 + σ2^2)^(1/2)    (1)

In Equation (1), h_min is the minimum oil film thickness of the grease (µm), and σ1 and σ2 are the roughness values Rq of the surfaces of the two friction pair elements. The minimum oil film thickness under point contact can be estimated by the Hamrock-Dowson formula [25]:

h_min / R = 3.63 U^0.68 G^0.49 W^(-0.073) (1 - e^(-0.68k))    (2)

In Equation (2), U (= ηU/(E*R)) is the dimensionless speed parameter, G (= αE*) is the dimensionless material parameter, and W (= W/(E*R^2)) is the dimensionless load parameter; k is the ellipticity, E* is the equivalent elastic modulus, and R is the equivalent radius of curvature.
The tribological properties of the greases with different consistencies were measured with a high-temperature friction and wear tester (CSM THT 1000 °C, Anton Paar, Graz, Austria). The contact form of the friction pair was of the ball-on-disk type. The lower specimen was a GCr15 steel disk measuring 55.0 mm × 10 mm, and the upper specimen was a Q195 carbon steel ball with a diameter of 6 mm. The applied load W was 5 N, the maximum Hertz contact stress was 1.17 GPa, the rolling speed U was 1.26 m/s, the rotation radius was 20 mm, and the applied frequency was 10 Hz, for a duration of 1 h at a temperature of 25 °C. The surface roughness Rq of the ball and disk, measured with a 3D white light interference profilometer, was about 0.40 µm and 0.26 µm, respectively. The film thickness ratios of the two greases were 0.23 and 0.21, respectively. It is generally believed that when λ < 1, the lubrication regime is boundary lubrication.
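To make the λ calculation concrete, the sketch below evaluates the Hamrock-Dowson minimum film thickness and the film thickness ratio for a steel ball-on-disk contact. Only the roughness values, ball diameter, load and speed come from the test conditions above; the lubricant viscosity, pressure-viscosity coefficient and reduced modulus are assumed round numbers for illustration, not the authors' exact inputs.

# Illustrative lambda-ratio calculation from the Hamrock-Dowson minimum film
# thickness, h_min/R = 3.63 * U**0.68 * G**0.49 * W**-0.073 * (1 - exp(-0.68*k)).
import math

def hamrock_dowson_lambda(eta, u, alpha, E_star, R, w, k, rq1, rq2):
    U = eta * u / (E_star * R)        # dimensionless speed parameter
    G = alpha * E_star                # dimensionless material parameter
    W = w / (E_star * R ** 2)         # dimensionless load parameter
    h_min = R * 3.63 * U**0.68 * G**0.49 * W**-0.073 * (1 - math.exp(-0.68 * k))
    sigma = math.hypot(rq1, rq2)      # composite roughness
    return h_min / sigma, h_min

lam, h_min = hamrock_dowson_lambda(
    eta=0.04,       # Pa.s, assumed lubricant viscosity at the test temperature
    u=1.26,         # m/s, rolling speed from the test conditions
    alpha=2e-8,     # 1/Pa, assumed pressure-viscosity coefficient
    E_star=115e9,   # Pa, assumed reduced modulus for a steel/steel contact
    R=3e-3,         # m, ball radius (6 mm diameter ball)
    w=5.0,          # N, applied load
    k=1.0,          # ellipticity for a nominally circular point contact
    rq1=0.40e-6, rq2=0.26e-6,  # m, measured surface roughness Rq
)
print(f"h_min = {h_min*1e9:.0f} nm, lambda = {lam:.2f} (lambda < 1 -> boundary lubrication)")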
Before the experiment, the ball and disk specimens were placed in petroleum ether for ultrasonic cleaning (three times); the disk specimen was then fixed and an appropriate amount of grease was evenly coated over the friction surface, and the test was started after fixing the ball specimen. The morphologies of the worn surfaces were examined by optical microscopy (MIT500, Cnoptec, Chongqing, China), and the wear scar area was analyzed to evaluate the anti-wear performance of the grease in a more intuitive manner. The tribological experiments were carried out three times to reduce randomness in the results.
Microstructure and Plateau Modulus of Greases and Their Correlation
The microstructure of the greases is shown in Figure 1. The lithium grease, with consistencies of 230 and 300, is shown in Figure 1a,b, respectively. The thickener fiber of the lithium grease shows a highly entangled reticulated structure. The microstructure of the polyurea grease is very different from that of the lithium grease, as shown in Figure 1c,d. Its thickener fiber is strip-shaped or rod-shaped and contains more oil; it is difficult to clean the base oil out of polyurea grease with an organic solvent. The greater the consistency (the smaller the cone penetration) of the lithium and polyurea greases, the more thickener fibers are present per unit volume, the finer the structure, and the higher the degree of entanglement between fibers.
To better describe the microstructures of the soap fibers, the plateau modulus G N is introduced to represent the degree of entanglement between grease soap fibers. G N can be obtained by frequency-scanning experiments and extrapolation from the data; it is the storage modulus corresponding to the point where the loss coefficient reaches its minimum value, as shown in Equation (3). Figure 2b shows the plateau modulus values of the polyurea greases at 30 °C, 70 °C, and 130 °C. The plateau moduli of the two greases increase with increasing consistency at the same temperature because the greater the consistency, the more soap fibers and the higher the degree of entanglement between fibers, as evinced by the SEM images. This shows that the plateau modulus can be used to characterize the microstructure of lubricating grease. The larger the plateau modulus, the more entangled the grease thickener fiber. At the same time, the plateau modulus of the grease at low temperature is larger than that at high temperature because, with the increase in temperature, the intermolecular force decreases, the degree of entanglement of the grease fiber decreases, and the plateau modulus decreases.
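Operationally, this definition of the plateau modulus can be evaluated directly from frequency-sweep data by locating the minimum of the loss factor tan δ = G''/G' and reading off G' at that point. The sketch below illustrates the calculation with NumPy on placeholder sweep data; the arrays are invented, not measurements from this study.

# Illustrative extraction of the plateau modulus G_N from frequency-sweep data:
# take the storage modulus G' at the frequency where tan(delta) = G''/G' is minimal.
import numpy as np

def plateau_modulus(omega, g_storage, g_loss):
    tan_delta = np.asarray(g_loss) / np.asarray(g_storage)
    i_min = int(np.argmin(tan_delta))
    return g_storage[i_min], omega[i_min]

omega = np.logspace(-1, 2, 20)                   # rad/s, 0.1 ... 100 as in the tests
g_storage = 2.0e4 + 3.0e3 * np.log10(omega)      # Pa, placeholder G'(omega)
g_loss = 4.0e3 + 2.0e3 * np.log10(omega) ** 2    # Pa, placeholder G''(omega)

G_N, w_at_min = plateau_modulus(omega, g_storage, g_loss)
print(f"G_N ~ {G_N:.0f} Pa at omega = {w_at_min:.2f} rad/s")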
Comparing Figure 2a,b, the plateau modulus of lithium grease is much lower than that of polyurea grease, but the SEM images (Figure 1) do not show the degree of entanglement of the lithium grease soap fibers to be lower than that of the polyurea grease. For the same type of grease, when the degree of entanglement of the soap fibers is high, the probability of chemical and physical cross-linking between fibers is greater, the structural framework is more stable, and the structural strength is greater. Through the results of Figures 1 and 2, we obtained new conclusions that differ from the past literature: the plateau modulus G N can be used to assess the structural strength of different greases, but it can only assess the degree of entanglement of the same grease. For example, in Figure 2 the structural strength of the polyurea greases is higher than that of the lithium greases, whereas in Figure 1 the degree of entanglement of the polyurea greases is not as high as that of the lithium greases. At different temperatures, the plateau modulus of lithium grease differs only slightly, while the plateau modulus of polyurea grease decreases with increasing temperature. This shows that, with the increase in temperature, the degree of entanglement of the soap fiber of polyurea grease decreases to a significant extent, while that of the soap fiber of lithium grease decreases only slightly, which shows that the degree of thickener fiber entanglement of mineral oil-lithium grease is less affected by temperature, and the degree of thickener fiber entanglement of PAO-polyurea grease is greatly affected by temperature.
Apparent Viscosity of Grease
Consistency is an important factor that affects the performance of the grease. The consistency of grease is expressed by cone penetration. Figures 3-8 shows the maximum relative standard deviation (RSD) of each curve. Figures 3 and 4 show the variation of the apparent viscosity of the lithium greases and polyurea with the shear strain rate at different temperatures: the apparent viscosity of grease decreases with the increase in shear strain rate (i.e., shear-thinning). With the increase in shear strain rate, the resistance of thickener fibers along the flow direction is small, and the viscosity decreases. With increasing shear strain rate, the height, width and length of the thickener fibers may decrease, and the thickener fibers themselves may fracture and rupture, gradually orienting their alignment, whereupon the shear-thinning effect gradually decreases, especially at very high shear strain rates (the apparent viscosity of the grease decreases slightly). As can be seen from Figures 3 and 4, the higher the consistency (the smaller the cone penetration), the higher the apparent viscosity at the same shear strain rate. That is, under the premise of the same base oil viscosity, the increase in grease consistency will increase the apparent viscosity of the grease. From the SEM images and GN values, it can be found As can be seen from Figures 3 and 4, the higher the consistency (the smaller the cone penetration), the higher the apparent viscosity at the same shear strain rate. That is, under the premise of the same base oil viscosity, the increase in grease consistency will increase the apparent viscosity of the grease. From the SEM images and G N values, it can be found that the high consistency of soap fibers has a high degree of entanglement, and the resistance of soap fibers is large along the direction of flow, thus increasing the apparent viscosity. The shear-thinning rate of lubricating grease with different consistencies is similar, that is, the consistency exerts little influence on the shear-thinning rate.
Temperature is also an important factor affecting the rheological properties of grease. With the increase in temperature, the apparent viscosity of grease decreases, because even if the chemical structure of grease used at a higher temperature does not change, the change of physical entanglement of the associated colloidal dispersion system will weaken the original performance of the grease, resulting in the decrease of viscosity. With the increase in shear strain rate, the curve shows an unstable change, which is due to the shear yielding of the grease; the fiber structure of the grease is cut, and the apparent viscosity changes suddenly. The difference is that the apparent viscosity of polyurea grease is very unstable after shear yield at high temperature, and the data fluctuate, which may be related to the structure of polyurea grease having been significantly affected by temperature (the plateau modulus is significantly affected by temperature). At room temperature, the apparent viscosity of lithium grease is lower than that of polyurea grease. When the temperature is greater than 70 • C, the plateau modulus of polyurea grease decreases greatly, that is, the degree of entanglement of soap fiber decreases greatly. However, the plateau modulus of lithium grease decreases very little, that is, the degree of entanglement of soap fiber decreases very little. When the temperature is 70 • C or 130 • C, the apparent viscosity of polyurea lubricated grease decreases far too much and it tends to be less than the apparent viscosity of lithium grease. This is because the structural strength of PAO-polyurea is greatly affected by temperature. Because the apparent viscosity is related to the filmforming properties and friction characteristics of the grease, the apparent viscosity of PAO-polyurea grease is more affected by temperature than mineral oil-lithium grease. The film-forming properties and friction properties of PAO-polyurea grease are more affected by temperature than mineral oil-lithium grease. Polyurea is easier to form a film, but the friction is relatively large. that the high consistency of soap fibers has a high degree of entanglement, and the resistance of soap fibers is large along the direction of flow, thus increasing the apparent viscosity. The shear-thinning rate of lubricating grease with different consistencies is similar, that is, the consistency exerts little influence on the shear-thinning rate. Temperature is also an important factor affecting the rheological properties of grease. With the increase in temperature, the apparent viscosity of grease decreases, because even if the chemical structure of grease used at a higher temperature does not change, the change of physical entanglement of the associated colloidal dispersion system will weaken the original performance of the grease, resulting in the decrease of viscosity. With the increase in shear strain rate, the curve shows an unstable change, which is due to the shear yielding of the grease; the fiber structure of the grease is cut, and the apparent viscosity changes suddenly. The difference is that the apparent viscosity of polyurea grease is very unstable after shear yield at high temperature, and the data fluctuate, which may be related to the structure of polyurea grease having been significantly affected by temperature (the plateau modulus is significantly affected by temperature). At room temperature, the apparent viscosity of lithium grease is lower than that of polyurea grease. 
When the temperature is greater than 70 °C, the plateau modulus of polyurea grease decreases greatly, that is, the degree of entanglement of soap fiber decreases greatly. However, the plateau modulus of lithium grease decreases very little, that is, the degree of entanglement of soap fiber decreases very little. When the temperature is 70 °C or 130 °C, the apparent viscosity of polyurea lubricated grease decreases far too much and it tends to be less than the apparent viscosity of lithium grease. This is because the structural strength of PAOpolyurea is greatly affected by temperature. Because the apparent viscosity is related to the film-forming properties and friction characteristics of the grease, the apparent viscosity of PAO-polyurea grease is more affected by temperature than mineral oil-lithium grease. The film-forming properties and friction properties of PAO-polyurea grease are more affected by temperature than mineral oil-lithium grease. Polyurea is easier to form a film, but the friction is relatively large.
Rheological Properties of Grease
In the steady-state test, Figures 5 and 6 show the changes in shear stress with the shear strain rate of lithium grease and polyurea grease, which reflects the resistance of the fluidity. It can be seen that lithium grease and polyurea grease have common characteristics: the shear stress increases with the consistency of the grease due to enhancement in inner compositions interactions. When the temperature is 30 • C the interaction between grease molecules is strong, the entanglement degree of soap fiber is high, and the wall surface only has weak adsorption to grease molecules; the wall slip will occur, that is, the shear stress will decrease in the initial stage. When the shear stress reaches the platform area, the corresponding value is the yield stress τ y under the steady-state test of grease, and then the shear stress increases with the increase of the shear strain rate. At this stage, the grease begins to flow. In this process, the grease has experienced wall slip-a solid-like deformation-yield-flow. When the temperature is 70 • C and 130 • C, the intermolecular force in the grease decreases, and the influence of the wall surface on the adsorption of grease molecules increases, so there is no wall-slip phenomenon at the beginning. However, with the increase of the shear strain rate, the shear stress increases, and the adsorption of the molecules on the wall surface is difficult to maintain, thus the wall surface slip begins to appear after the grease reaches the first turning point (upper yield point). When the second turning point (lower yield point) is reached, the grease begins to flow, and the shear stress at this turning point is the yield stress τ y . In this process, the grease has experienced a solid-like deformation-yield coupling with wall slip-flow. The yield stress of lithium and polyurea grease increases with the increase of consistency, which means that the greater the consistency of the grease, the worse the fluidity. It should be pointed out that the entanglement between the thickener fibers of polyurea with a consistency of 300 is lower at 30°C, so the curve shape is similar to that at the high temperature shown in Figure 6. shear stress will decrease in the initial stage. When the shear stress reaches the platform area, the corresponding value is the yield stress τy under the steady-state test of grease, and then the shear stress increases with the increase of the shear strain rate. At this stage, the grease begins to flow. In this process, the grease has experienced wall slip-a solidlike deformation-yield-flow. When the temperature is 70 °C and 130 °C, the intermolecular force in the grease decreases, and the influence of the wall surface on the adsorption of grease molecules increases, so there is no wall-slip phenomenon at the beginning. However, with the increase of the shear strain rate, the shear stress increases, and the adsorption of the molecules on the wall surface is difficult to maintain, thus the wall surface slip begins to appear after the grease reaches the first turning point (upper yield point). When the second turning point (lower yield point) is reached, the grease begins to flow, and the shear stress at this turning point is the yield stress τy. In this process, the grease has experienced a solid-like deformation-yield coupling with wall slip-flow. The yield stress of lithium and polyurea grease increases with the increase of consistency, which means that the greater the consistency of the grease, the worse the fluidity. 
It should be pointed out that the entanglement between the thickener fibers of polyurea with a consistency of 300 is lower at 30 ℃, so the curve shape is similar to that at the high temperature shown in Figure 6. shear stress will decrease in the initial stage. When the shear stress reaches the platform area, the corresponding value is the yield stress τy under the steady-state test of grease, and then the shear stress increases with the increase of the shear strain rate. At this stage, the grease begins to flow. In this process, the grease has experienced wall slip-a solidlike deformation-yield-flow. When the temperature is 70 °C and 130 °C, the intermolecular force in the grease decreases, and the influence of the wall surface on the adsorption of grease molecules increases, so there is no wall-slip phenomenon at the beginning. However, with the increase of the shear strain rate, the shear stress increases, and the adsorption of the molecules on the wall surface is difficult to maintain, thus the wall surface slip begins to appear after the grease reaches the first turning point (upper yield point). When the second turning point (lower yield point) is reached, the grease begins to flow, and the shear stress at this turning point is the yield stress τy. In this process, the grease has experienced a solid-like deformation-yield coupling with wall slip-flow. The yield stress of lithium and polyurea grease increases with the increase of consistency, which means that the greater the consistency of the grease, the worse the fluidity. It should be pointed out that the entanglement between the thickener fibers of polyurea with a consistency of 300 is lower at 30 ℃, so the curve shape is similar to that at the high temperature shown in Figure 6. In the dynamic test, the variation of storage modulus G and loss modulus G with strain or stress can be obtained. G reflects the internal elastic potential energy of the material during deformation and is related to the retention capacity of the grease; G"refers to the energy dissipated as heat during deformation. Both are key indices used to measure the viscoelastic properties of lubricating greases. The higher the viscoelasticity, the worse the fluidity. In dynamic experiments, the region before the storage modulus is decreased by 10% and is defined as the maximum linear viscoelastic region Ld. The strain-scanning curves of lithium grease and polyurea grease with different consistencies at different temperatures are shown in Figures 7 and 8, respectively: in the L d region, the storage modulus of the grease at all temperatures is higher than the loss modulus, and it is in a solid state. The greater the consistency, the higher the storage modulus and loss modulus. It can be found that the storage modulus and loss modulus of polyurea grease are higher than those of lithium grease for the same consistency by comparing Figures 7 and 8. In the dynamic test, the variation of storage modulus G′ and loss modulus G′′ with strain or stress can be obtained. G′reflects the internal elastic potential energy of the material during deformation and is related to the retention capacity of the grease; G''refers to the energy dissipated as heat during deformation. Both are key indices used to measure the viscoelastic properties of lubricating greases. The higher the viscoelasticity, the worse the fluidity. In dynamic experiments, the region before the storage modulus is decreased by 10% and is defined as the maximum linear viscoelastic region Ld. 
The strain-scanning curves of lithium grease and polyurea grease with different consistencies at different temperatures are shown in Figures 7 and 8, respectively: in the Ld region, the storage modulus of the grease at all temperatures is higher than the loss modulus, and it is in a solid state. The greater the consistency, the higher the storage modulus and loss modulus. It can be found that the storage modulus and loss modulus of polyurea grease are higher than those of lithium grease for the same consistency by comparing Figures 7 and 8. With the increase in shear strain, the storage modulus decreases, and the loss modulus increases, indicating that the grease is undergoing a transition to a liquid state. The intersection of the storage modulus curve and the loss modulus curve is defined as the flow point, and the stress corresponding to the flow point is defined as the cross-stress τco. There is a positive correlation between the magnitude of the cross-stress and the resistance to flow, which is of great significance to the study of the fluidity of grease. Figure 9a shows the cross-stress of lithium grease with different consistencies at 30, 70, and 130 °C. Figure In the dynamic test, the variation of storage modulus G′ and loss modulus G′′ with strain or stress can be obtained. G′reflects the internal elastic potential energy of the material during deformation and is related to the retention capacity of the grease; G''refers to the energy dissipated as heat during deformation. Both are key indices used to measure the viscoelastic properties of lubricating greases. The higher the viscoelasticity, the worse the fluidity. In dynamic experiments, the region before the storage modulus is decreased by 10% and is defined as the maximum linear viscoelastic region Ld. The strain-scanning curves of lithium grease and polyurea grease with different consistencies at different temperatures are shown in Figures 7 and 8, respectively: in the Ld region, the storage modulus of the grease at all temperatures is higher than the loss modulus, and it is in a solid state. The greater the consistency, the higher the storage modulus and loss modulus. It can be found that the storage modulus and loss modulus of polyurea grease are higher than those of lithium grease for the same consistency by comparing Figures 7 and 8. With the increase in shear strain, the storage modulus decreases, and the loss modulus increases, indicating that the grease is undergoing a transition to a liquid state. The intersection of the storage modulus curve and the loss modulus curve is defined as the flow point, and the stress corresponding to the flow point is defined as the cross-stress τco. There is a positive correlation between the magnitude of the cross-stress and the resistance to flow, which is of great significance to the study of the fluidity of grease. Figure 9a shows the cross-stress of lithium grease with different consistencies at 30, 70, and 130 °C. Figure With the increase in shear strain, the storage modulus decreases, and the loss modulus increases, indicating that the grease is undergoing a transition to a liquid state. The intersection of the storage modulus curve and the loss modulus curve is defined as the flow point, and the stress corresponding to the flow point is defined as the cross-stress τ co . There is a positive correlation between the magnitude of the cross-stress and the resistance to flow, which is of great significance to the study of the fluidity of grease. 
Figure 9a shows the cross-stress of lithium grease with different consistencies at 30, 70, and 130 • C. Figure 9b shows the cross-stresses of polyurea greases with different consistencies at 30, 70, and 130 • C. The two greases have similar characteristics, such that, at the same temperature, the greater the consistency, the greater the cross-stress τ co ; at different temperatures, the crossstress of the grease with the same consistency decreases with the increase of temperature. The difference between the two greases is such that at 30 • C and 70 • C, the cross-stress of polyurea grease is much greater than that of lithium grease; however when the temperature rises to 130 • C, the cross-stress of polyurea grease is an approach to that of lithium grease, which means, the fluidity of lithium grease is better than the polyurea grease. The higher the temperature, the smaller the difference in fluidity between the two greases. the greater the consistency, the greater the cross-stress τco; at different temperatures, the cross-stress of the grease with the same consistency decreases with the increase of temperature. The difference between the two greases is such that at 30 °C and 70 °C, the crossstress of polyurea grease is much greater than that of lithium grease; however when the temperature rises to 130 °C, the cross-stress of polyurea grease is an approach to that of lithium grease, which means, the fluidity of lithium grease is better than the polyurea grease. The higher the temperature, the smaller the difference in fluidity between the two greases.
It can be seen from the above analysis that the higher the consistency of lithium grease and polyurea grease or the lower the temperature, the greater the shear stress and viscoelasticity. The difference is such that the fluidity of polyurea grease is lower than that of lithium grease, and its viscoelasticity is higher than that of lithium grease. With the increase in temperature, the fluidity of the two greases gradually increases, and their viscoelasticity gradually decreases.
Yield Stress and Structural Strength of Grease
Accurate structural strength assessments cannot be obtained only by means of the modulus of the plateau: the shear strength of the grease must be expressed by the magnitude of the yield stress. From the analysis of rheological properties, a yield stress exists in both steady-state and dynamic tests. The two yield stresses were compared, and the structural strength of the grease was measured.
In steady-state experiments, the yield stress τy of lithium grease and polyurea grease at various temperatures is shown in Figure 10a,b, respectively. At 30 °C, the yield stress of polyurea grease is much greater than that of lithium grease, indicating that the fluidity of lithium grease is better than that of polyurea grease at room temperature. At 70 °C, the yield stress of polyurea grease is close to that of lithium grease. At 130 °C, which is beyond the working temperature of the lithium grease, the yield stress of lithium grease with different consistencies is similar, which indicates that the structure of lithium grease soap fibers has been destroyed. At this temperature, the yield stress decreases, but it has no physical significance. The yield stress of polyurea grease with different consistencies remains quite different, that is, the microstructure of the polyurea grease is not destroyed at 130 °C.
In dynamic tests, the shear stress corresponding to Ld represents the yield stress under dynamic test conditions τd (Figure 11a,b): similar to the yield stress measured under steady-state conditions, the higher the consistency, the greater the yield stress. Comparing It can be seen from the above analysis that the higher the consistency of lithium grease and polyurea grease or the lower the temperature, the greater the shear stress and viscoelasticity. The difference is such that the fluidity of polyurea grease is lower than that of lithium grease, and its viscoelasticity is higher than that of lithium grease. With the increase in temperature, the fluidity of the two greases gradually increases, and their viscoelasticity gradually decreases.
Yield Stress and Structural Strength of Grease
Accurate structural strength assessments cannot be obtained only by means of the modulus of the plateau: the shear strength of the grease must be expressed by the magnitude of the yield stress. From the analysis of rheological properties, a yield stress exists in both steady-state and dynamic tests. The two yield stresses were compared, and the structural strength of the grease was measured.
In steady-state experiments, the yield stress τ y of lithium grease and polyurea grease at various temperatures is shown in Figure 10a,b, respectively. At 30 • C, the yield stress of polyurea grease is much greater than that of lithium grease, indicating that the fluidity of lithium grease is better than that of polyurea grease at room temperature. At 70 • C, the yield stress of polyurea grease is close to that of lithium grease. At 130 • C, which is beyond the working temperature of the lithium grease, the yield stress of lithium grease with different consistencies is similar, which indicates that the structure of lithium grease soap fibers has been destroyed. At this temperature, the yield stress decreases, but it has no physical significance. The yield stress of polyurea grease with different consistencies remains quite different, that is, the microstructure of the polyurea grease is not destroyed at 130 • C. one direction; in the dynamic test, the soap fiber oscillates dynamically and shears back and forth. In practice, dynamic reciprocating shear is often easier to make the grease flow, than single direction shear, so the yield stress in the dynamic test will be less than that in a steady-state test. The comparison between Figure 11a,b shows that the dynamic yield stress of polyurea is greater than that of lithium grease in the experimental temperature range, which indicates the structural strength of polyurea grease is higher than that of lithium grease. It can be seen from the above analysis that the similarities between lithium grease and polyurea grease are as follows: the higher the consistency or the lower the temperature, the higher the yield stress and structural strength. When the temperature rises to 130 °C, the performance of polyurea grease remains good. The structural strength of PAOpolyurea grease is higher than that of mineral oil-lithium grease; the higher the temperature, the smaller the structural strength between the two greases.
Tribological Properties of Grease
The grease forms a thin film on the friction surface, which can prevent direct contact between elements of the friction pair and reduce the friction and wear. The friction and wear characteristics of greases with different consistencies were studied, and the influence of the rheological properties of the greases on their friction and wear characteristics was assessed.
Under the conditions of 5 N, 10 Hz, 25 °C, for 60 min, the tribological properties of lithium grease and polyurea grease on bearing steel/carbon steel friction pairs were measured. Figure 12 shows the average value of three friction experiments. The wear spot diameter (WSD) of the ball specimen can better reflect the anti-wear performance of greases. The wear surfaces of the ball specimens lubricated with lithium grease and polyurea grease were compared, and the average values of the three measurements were taken to represent the wear spot diameter (Figure 13). Figure 14 shows the relationship between the ball wear spot diameter and consistency, as well as the relationship between the ball wear rate and consistency.
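As an illustration of how the ball wear rate can be estimated from a measured wear spot diameter in a reciprocating test, a minimal sketch follows. The ball radius, stroke length and WSD value are assumed for illustration; the load, frequency and duration are taken from the test conditions above.

```python
import math

R = 6.35e-3           # ball radius [m] (assumed, e.g. a 12.7 mm diameter ball)
wsd = 0.45e-3         # wear spot diameter [m] (illustrative value)
load = 5.0            # normal load [N] (from the test conditions)
freq = 10.0           # oscillation frequency [Hz]
duration = 60 * 60.0  # test duration [s]
stroke = 1.0e-3       # stroke length [m] (assumed)

# Height and volume of the worn spherical cap on the ball.
h = R - math.sqrt(R**2 - (wsd / 2) ** 2)
V = math.pi * h**2 * (3 * R - h) / 3          # worn volume [m^3]

# Total sliding distance of a reciprocating test: two strokes per cycle.
s = 2 * stroke * freq * duration              # [m]

# Specific wear rate, commonly quoted in mm^3 / (N*m).
k = (V * 1e9) / (load * s)
print(f"wear volume = {V*1e9:.4f} mm^3, wear rate = {k:.3e} mm^3/(N*m)")
```

Plotting such wear-rate estimates against grease consistency is what the relationship shown in Figure 14 summarises.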
The friction test results for lithium grease are shown in Figure 12a: for Li-230, the COF peak at the initial run-in stage is almost absent, while the peak for Li-300 is the largest. This shows that at the beginning of the test, Li-230 forms a more stable grease film compared to Li-300. After a short initial period, the friction and wear on the friction pair surfaces enter a steady-state period for Li-230 and Li-250, and their coefficient of friction (COF) values tend to be stable at about 0.11, showing good friction stability. It is worth noting that the COF value for Li-300 is larger than normal in the whole test, which indicates that a stable grease film is not formed. The wear spots and wear rates for Li-230, Li-250 and Li-300 are shown in Figures 13a-c and 14, respectively; the smaller the consistency, the larger the wear spot diameter and wear rate, and the parallel groove for Li-300 is significantly larger than that for Li-230 and Li-250. According to the calculation results of the film thickness ratio λ in Section 2.4, the lubrication state between the friction pairs in the friction experiment is boundary lubrication. The frictional stress arises from asperity interaction and the shear stress in the grease. The increase in thickener consistency promotes better separation of the surfaces. The thickener consistency for Li-300 is too small, compared to Li-230 and Li-250, to form a thick thickener particle layer on the ball and disk surfaces, which results in more asperity contact and greater friction and wear. The stability of the friction coefficient during the whole experiment for Li-230 and Li-250 shows that a thick layer of thickener particles is formed on the surface of the disk to reduce asperity interaction.
From the rheological results, it can be seen that the yield stress τy and the storage modulus G′, which characterize the strength of the grease structure, are slightly larger for Li-230 than for Li-250, which means the shear resistance of Li-230 is slightly larger; therefore, the COF of Li-230 is slightly larger than that of Li-250. However, the wear spot diameter and wear rate for Li-230 are smaller than those for Li-250, which results from the better separation of the friction pair surfaces by the thicker grease film formed by Li-230 with its greater consistency.
The friction test results for polyurea grease are illustrated in Figure 12b: compared with lithium grease, its consistency has little effect on its friction performance. As shown in Figures 13d-f and 14, the reduction in consistency did not cause a significant change in the wear scar diameter or wear rate. The friction curves for the polyurea greases also fluctuate, and the wear for the polyurea greases is larger than that for lithium grease. These results indicate that the three polyurea greases did not form a stable oil film, which is related to the rheological properties of polyurea grease. The high structural stability of polyurea grease, reflected in its high storage modulus and yield stress values, leads to a decrease in oil-bleeding ability; this reduces replenishment of the contact and may result in a film thickness much lower than the estimated value. | 2022-04-16T15:14:02.664Z | 2022-04-13T00:00:00.000 | {
"year": 2022,
"sha1": "89614d42119d176148db4017cab026b5088b6a9a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6412/12/4/527/pdf?version=1649847591",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6cf51036ab18ba332278754e30ebfd89d582b993",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
56367682 | pes2o/s2orc | v3-fos-license | TRANSVERSE SHEAR MODULUS OF ELASTICITY FOR THIN-WALLED CORRUGATED CORES OF SANDWICH BEAMS. THEORETICAL STUDY
The subject of the paper is four corrugated cores in the form of circular arcs, a sin wave, trapezoids and an odd function. Transverse shear moduli of these corrugated cores are analytically determined. A comparative analysis of these transverse shear moduli is presented. The cross-sectional areas of the corrugated cores are constant. The theoretical study shows considerable sensitivity of the shear modulus to the shape of the corrugation.
Introduction
Theoretical fundamentals for sandwich structures were initiated in the middle of the 20th century. The shear modulus of a core of a sandwich structure considerably affects its bending and buckling. Libove and Hubka (1951) presented the primary elaboration related to the analytical study of elastic constants for corrugated cores of sandwich plates. Carlsson et al. (2001) reviewed and adapted previous analytical approaches to the analysis of elastic stiffnesses of corrugated core sandwich panels into the framework of the first-order shear deformation laminated plate theory. Buannic et al. (2003) computed the effective properties of corrugated core sandwich panels using homogenisation theory. Motivated by the results of numerical simulations, Aboura et al. (2004) examined the behaviour of the linear homogeneous cardboard analytically, and confirmed good agreement of the results. A numerical approach to evaluate the stiffness parameters for corrugated board was described by Biancolini (2005). Cheng et al. (2006) proposed a finite element analysis approach to evaluate equivalent elastic properties of complex sandwich structures. The elastic bending of unstiffened and stiffened corrugated plates was studied by Peng et al. (2007), where a mesh-free Galerkin method was applied in the analysis. A method for the modelling of a corrugated board panel was outlined by Isaksson et al. (2007) - corrugated core sandwiches were homogenized and the panels transformed to equivalent homogeneous layers with effective equal properties. Talbi et al. (2009) presented an analytical homogenization model for a corrugated cardboard and its numerical implementation with a shell element. An analytical model for the compressive and shear response of monolithic and hierarchical corrugated composite cores was developed by Kazemahvazi and Zenkert (2009). Kress and Winkler (2010) studied the problem of finding a substitute material model for describing the load response of globally flat corrugated sheets made of multidirectional laminates. They determined the maximal possible deformations of a corrugated sheet where the corrugation pattern consisted of two circular segments. Pan et al. (2008) investigated the transverse shear mechanical behaviour and failure mechanism of aluminum alloy honeycomb. He et al. (2012) presented a semi-analytical method for bending analysis of the sandwich panel with a core of triangular-shape, honeycomb-shape and X-shape. Aganovic et al. (1996) presented the equilibrium displacements corresponding to Koiter's shell model. The sequence of shells, considered as a slight periodical perturbation of the middle surface of the plate, was shown to converge to the equilibrium displacement of the classical plate model. Corresponding corrector-type results were proved by the homogenization method. Michalak (2001) presented such a form of the mezo-shape function for a mezostructural model which is suitable for quantitative analysis of dynamic behaviour of a wavy-plate. Governing equations of the averaged theory of wavy-plates were obtained for different forms of the mezo-shape functions for in-plane and out-of-plane displacements of the plate. The work does not address the averaged values of moduli determined, for example, with the use of the averaged theory or the asymptotic homogenization method, which are presented by Aganovic et al. and by Michalak.
The subject of the theoretical study presented in this paper is four corrugated cores in the form of circular arcs, a sin wave, trapezoids and an odd function. The transverse shear modulus for each core is analytically determined.
Analytical description of shear moduli for corrugated cores
2.1. Corrugation of the core in form of circular arcs
The corrugated core between two faces undergoes shearing as shown in Fig. 1.Geometrical relations for the circular arc (Fig. 1) are as follows: -radius of the circular arc -complementary angle of the circular arc where: The basic system of forces for the half-pitch of the circular arc corrugation (Fig. 1) with the reaction enables one to formulate the bending moment The elastic strain energy where a is width of the corrugated core in the x-axis direction.
The displacement v B (Fig. 1) is determined on the basis of Castigliano's second theorem where The shear strain in the yz-plane is as follows From Hooke's law the shear modulus of elasticity for the circular arc corrugation is where the dimensionless shear modulus is The cross section area of the circular arc corrugation for one pitch (Fig. 1) amounts to where the dimensionless area
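For orientation, the derivation chain described above (bending strain energy, Castigliano's second theorem, shear strain and Hooke's law) can be written schematically as follows; the exact geometric factors depend on the corrugation shape and on the half-pitch model adopted, so these expressions are indicative only and are not the paper's numbered expressions.

```latex
U \;=\; \int_{0}^{s_0} \frac{M(s)^2}{2\,E\,J}\,\mathrm{d}s ,
\qquad J \;=\; \frac{a\,t_0^{3}}{12} ,
\qquad v_B \;=\; \frac{\partial U}{\partial F} ,
\qquad \gamma_{yz} \;\approx\; \frac{v_B}{t_c} ,
\qquad G_{yz} \;=\; \frac{\tau_{yz}}{\gamma_{yz}} ,
\qquad \tau_{yz} \;\approx\; \frac{F}{a\,b_0} .
```

The same sequence of steps is repeated for each of the four corrugation shapes, with only the bending moment M(s) and the geometric prefactors changing.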
Corrugation of the core in form of a sin wave
The function of the corrugation is where η = y/b 0 denotes the dimensionless coordinate.
The force system for the half-pitch of the sin wave corrugation (Fig. 2) is similar to that of the circular arc corrugation with reaction (2.3).The bending moment is Then, the elastic strain energy where c 0 = π(1 − x t0 )/x b0 is the dimensionless parameter.The displacement v B (Fig. 2) on the basis of Castigliano's second theorem is as follows where The shear strain in the yz-plane is as follows (2.17) Thus, by analogy to expressions (2.8) and (2.9), the dimensionless shear modulus is (2.18) The cross section area of the circular arc corrugation for one pitch (Fig. 2) is where and the dimensionless area (2.20)
Corrugation of the core in form of trapezoids
Geometrical relations for the trapezoid (Fig. 3) are as follows where
Fig. 3. Scheme of the corrugated core of the trapezoid shape
The force system for the half-pitch of the trapezoid corrugation (Fig. 3) is similar to that of the circular arc corrugation with the reaction (2.3). The normal force and the bending moment in the trapezoidal corrugated core are The elastic strain energy with consideration of the tension and bending energy is as follows where s_t = t_c√(C_t)/4 is the length of the trapezoid arm.
The shear strain in the yz-plane, by analogy to expressions (2.6) or (2.16), is written where Thus, the dimensionless shear modulus is The cross section area of the trapezoid corrugation for one pitch (Fig. 3) is where the dimensionless area is
Corrugation of the core in form of an odd function
The function of the corrugation is where the odd function is in the following form and k_f is a dimensionless parameter. Thus, by analogy to the sin wave corrugation, the dimensionless shear modulus is where and the dimensionless area where
Expressions (2.10), (2.18), (2.25) and (2.30) for the dimensionless shear moduli and (2.12), (2.20), (2.27) and (2.31) for the dimensionless areas serve as a basis of the comparative analysis of the four shapes of the corrugated cores.
Comparative analysis of shear moduli of the corrugated cores
Values of deflections and critical loads of sandwich structures are related to the values of the core shear moduli. Maximization of the value of the shear modulus results in the maximum value of rigidity of the sandwich structure. In consequence, the quality measure of the corrugated core is the value of the dimensionless shear modulus for a constant value of the dimensionless area of a single pitch. The comparative analysis is carried out for the following example data: thickness of the core t_c = 12.2 mm, corrugation pitch b_0 = 28 mm and dimensionless area of the single corrugation pitch A_0^(c) = 0.2. The geometric sizes and dimensionless transverse shear moduli of the studied cores calculated on the basis of these data are as follows:
• the circular arc shape of the corrugation (Fig. 1): thickness of the corrugated sheet t_0 = 0.760 mm, radius of the circular arcs (2.1) R_0 = 7.14 mm, complementary angle (2.2) β = 0.2006 rad, and dimensionless shear modulus (2.10) G_yz^(ca) = 0.00170;
• the sin wave shape of the corrugation (Fig. 2): thickness of the corrugated sheet t_0 = 0.799 mm, and dimensionless shear modulus (2.18) G_yz^(sin) = 0.00851;
• the trapezoid shape of the corrugation (Fig. 3): the maximum value of the dimensionless shear modulus G_yz,max^(trap) = 0.1755 occurs for the thickness of the corrugated sheet t_0 = 0.815 mm and the length of the trapezoid parallel sides b_1 = 0.9271 mm;
• the odd function shape of the corrugation (Fig. 4): the maximum value of the dimensionless shear modulus G_yz,max^(odd-f) = 0.2707 occurs for the thickness of the corrugated sheet t_0 = 0.816 mm and the dimensionless parameter k_f = −507.9.
It can be noticed that for core corrugations in sin wave and odd function shapes, the values of the shear moduli are equal to G_yz^(sin) = G_yz^(odd-f) = 0.00851 for the sheet thickness t_0 = 0.799 mm. In this case, the graph of the sinusoidal shape of core corrugation coincides with the graph of the odd function (Fig. 5). Moreover, for the trapezoidal corrugation and sheet thickness t_0 = 0.780 mm, the value of the shear modulus G_yz^(trap) = 0.00849 approximates the above values.
The shear moduli of the corrugated cores with circular arc or sin wave shapes are constant for any data (G_yz^(ca) = 0.00170 and G_yz^(sin) = 0.00851 for the example data above). The shapes of these corrugations are uniquely defined for the assumed data. However, the trapezoid or odd function shapes are not uniquely defined for the assumed data, as their shapes may be controlled by varying the length of the trapezoid parallel sides b_1 (Fig. 3) or the dimensionless parameter k_f of the function (2.29). The results of numerical calculations for these shapes of corrugations (Table 1 and Table 2) are shown in Fig. 6. It can be noticed that, in the extreme case, the shapes of core corrugations of the trapezoid and odd function type are similar.
Conclusions
The theoretical studies of the four corrugated cores allow one to draw the following conclusions:
• the core in the form of circular arcs is the most susceptible to shearing, as its value of the dimensionless transverse shear modulus of elasticity is the lowest: G_yz^(ca) = 0.00170,
• the core in the form of a sin wave is more resistant to shearing than the circular arc core, as its transverse shear modulus of elasticity is higher: G_yz^(sin) = 0.00851,
• the trapezoidal core is much more resistant to shearing than the two above, with a maximum value of the transverse shear modulus of G_yz,max^(trap) = 0.1755; nevertheless, the shear modulus is sensitive to variation of the corrugated sheet thickness t_0,
• the core having the shape of an odd function is distinguished by the greatest resistance to shearing, with a maximum value of the transverse shear modulus of G_yz,max^(odd-f) = 0.2707; however, it is very sensitive to the change of the corrugated sheet thickness t_0.
The theoretical studies show significant differences between the four shapes, including the three basic ones: circular arcs, sin wave and trapezoid.
Fig. 1. Scheme of the corrugated core of the circular arcs shape
Fig. 4. Scheme of the corrugated core in form of an odd function shape
Fig. 6. Shear moduli for the trapezoid and odd function core shapes
Fig. 7. Comparison of the two corrugation shapes for the trapezoid (b_1 = 0.9271 mm) and the odd function at extremum
Table 2. Values of t_0, k_f and dimensionless shear modulus | 2018-12-15T09:06:56.464Z | 2014-10-01T00:00:00.000 | {
"year": 2014,
"sha1": "e01e1257b0a54af6240fffbb4e8ccd91913510f3",
"oa_license": "CCBYNC",
"oa_url": "http://ptmts.org.pl/jtam/index.php/jtam/article/download/v52n4p971/1887",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e01e1257b0a54af6240fffbb4e8ccd91913510f3",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
4796759 | pes2o/s2orc | v3-fos-license | Criteria for evaluating programme theory diagrams in quality improvement initiatives: a structured method for appraisal
Abstract Background Despite criticisms that many quality improvement (QI) initiatives fail due to incomplete programme theory, there is no defined way to evaluate how programme theory has been articulated. The objective of this research was to develop, and assess the usability and reliability of, scoring criteria to evaluate programme theory diagrams. Methods Criteria development was informed by published literature and QI experts. Inter-rater reliability was tested between two evaluators. A total of 63 programme theory diagrams (42 driver diagrams and 21 action–effect diagrams) were reviewed to establish whether the criteria could support comparative analysis of different approaches to constructing diagrams. Results Components of the scoring criteria include: assessment of overall aim, logical overview, clarity of components, cause–effect relationships, evidence and measurement. Independent reviewers had 78% inter-rater reliability. Scoring enabled direct comparison of different approaches to developing programme theory; action–effect diagrams showed a statistically significant but moderate improvement in programme theory quality over driver diagrams; no significant differences were observed based on the setting in which driver diagrams were developed. Conclusions The scoring criteria summarise the necessary components of programme theory that are thought to contribute to successful QI projects. The viability of the scoring criteria for practical application was demonstrated. Future uses include assessment of individual programme theory diagrams and comparison of different approaches (e.g. methodological, teaching or other QI support) to produce programme theory. The criteria can be used as a tool to guide the production of better programme theory diagrams, and also highlight where additional support for QI teams could be needed.
Introduction
Quality improvement (QI) initiatives have grown rapidly in number in response to the need to reduce unwarranted variation and improve quality and value of care. Despite this growth [1,2], evidence of sustained benefits remains limited [3][4][5][6][7][8].
A recurring challenge to improvement practice and evaluation is that QI initiatives often lack a clear programme theory linking interventions directly to intended outcomes [9,10]. Accurately defining hypothesised relationships (cause and effect) provides a comprehensive and prioritised list of interventions, plus support for subsequent monitoring of implementation and effectiveness [11], and consideration of how interventions may translate to other contexts [12]. Well-defined programme theory allows the social challenges of QI to be addressed by creating a shared aim among all who will be impacted by the proposed service change and increasing staff engagement to support implementation [13,14].
Several conceptual models exist to identify and articulate programme theory, including driver diagrams, action-effect diagrams and logic models [15][16][17][18][19]. Although differences among these approaches exist, key features include the ability to:
• help a group to explore the factors that they believe need to be addressed in order to achieve a specific overall goal or outcome,
• show how the factors are connected,
• act as a communication tool for explaining a change strategy and
• provide the basis for a measurement framework.
Evidence suggests that programme theory remains underdeveloped and/ or poorly articulated [19]. To date the practical application of such models in frontline healthcare settings has been poorly studied; consequently, there is a lack of information about how to evaluate the quality of programme theory diagrams. A systematic method of assessing programme theory quality would guide better use from initial setup through implementation and potentially maximise benefits in routine practice. Clear evaluation criteria would also provide a future research method to determine factors that best facilitate programme theory articulation, and assess the overall impact of programme theory on QI conduct.
The objective of this study was to develop scoring criteria to assess the quality of programme theory diagrams; to test the usability and inter-rater reliability of scoring programme theory diagrams and to assess whether the criteria could be used to compare different approaches to constructing programme theory diagrams.
Overview
Theoretical and practical benefits of programme theory were codified into scoring criteria. The usability of these criteria was tested by assessing programme theory diagrams of different types and from different organisational sources and calculating inter-rater reliability between two scorers.
To compare and identify strengths and weaknesses of different approaches to constructing programme theory diagrams, this study compared driver diagrams to action-effect method diagrams, and compared diagrams generated within a single organisational context (NIHR CLAHRC NWL) to diagrams generated in other settings.
Criteria development
Theoretical and practical benefits of programme theory were codified into scoring criteria. The criteria were based on established literature describing the theorised benefits of driver diagrams [15,17] and more general theory about the aspects of pre-planning deemed important for the long-term success of QI [1, 5-8, 10-14, 20-23]. This theoretical knowledge was combined with the practical knowledge gained from experience of Collaboration for Leadership in Applied Health Research and Care Northwest London (CLAHRC NWL) staff involved in supporting planning, conduct and evaluation of improvement initiatives.
Iterative development of the criteria was led by one author (L.I.) who undertook informal interviews with CLAHRC NWL staff and appraisal of the proposed criteria. L.I., a postdoctoral mixed methods researcher, was not previously involved in development or teaching of the production of programme theory diagrams using either driver diagrams or the action-effect method. Interviews were conducted after the production of all diagrams. Two scorers (L.I. and L.L.) then tested the criteria with 10 sample diagrams. L.L., a PhD student in healthcare QI and registered nurse, had not previously been involved in development of the action-effect method or scoring criteria. After a further four cycles of criteria testing and clarity-based modification between L.I. and L.L., inter-rater reliability was 92% on an expanded set of 10 sample diagrams.
Setting
The NIHR commissioned regional CLAHRCs to support the systematic and effective translation of research into practice, and to improve the quality of care for patients (NIHR, 2011). In CLAHRC NWL, a suite of QI methods supported initiatives to deliver care improvements. This approach was driven by an overarching research agenda to investigate the application and impact of QI methods in healthcare.
Four rounds of QI projects (March 2009-September 2013) were selected by a competitive process open to healthcare organisations in NWL. In total, 55 initiatives were selected from primary, secondary, mental health and public healthcare settings covering diverse clinical topics. The initiatives established frontline QI teams which engaged multidisciplinary staff and patients, and were supported by CLAHRC NWL with training, facilitation and expert support to use QI methods.
Diagram inclusion
A total of 63 programme theory diagrams were selected for assessment with the scoring criteria: 22 driver diagrams (produced between 2009 and 2011) and 21 action-effect diagrams (produced between 2011 and 2014) from QI initiatives affiliated with NIHR CLAHRC NWL, and 20 driver diagrams from a systematic search of driver diagrams published externally between 2009 and 2011.
CLAHRC NWL diagrams
In two rounds of CLAHRC NWL QI projects (March 2009-April 2011), teams were encouraged to use a suite of QI methods including driver diagrams [17] (Section 1, Appendix A). We refer to diagrams produced in this phase as CLAHRC NWL driver diagrams.
In the next two project rounds (April 2011-September 2013), the action-effect method [16] was iteratively co-developed with frontline QI teams to include greater clarity around diagram components, and how these components are distinguished and interrelate with each other (Section 2, Appendix A). This provides the distinction between the 22 driver diagrams produced in Rounds 1 and 2, and the 21 action-effect diagrams produced in Rounds 3 and 4. Sections 1 and 2, Appendix A represent the formal training received by teams using the driver diagram and action-effect methodology, respectively.
External driver diagrams
A systematic search was conducted in January 2012 (concurrent with the end of CLAHRC NWL Round 2) for peer-reviewed journal articles containing the terms 'driver diagram' and 'health'. This search primarily produced articles advocating the use of driver diagrams with few examples of published driver diagrams produced to aid improvement in existing healthcare practice. To find a sample of diagrams produced in a similar context to CLAHRC NWL driver diagrams, we conducted a Google Image search in January 2012 for 'driver diagram' and 'health' and selected the first 20 published diagrams that indicated they had been produced as part of service improvement and redesign.
Application of scoring criteria to programme theory diagrams
The scoring criteria were used to assess the 63 diagrams by two authors (L.I. and L.L.). The scorers used the final criteria outlined in Table 1, with possible scores of zero (does not meet the requirement) to three (excellent example of requirement) for each question.
Total scores for each diagram were the composite sum of scores for each of the eight criteria questions, giving a maximum score of 24. In an attempt to blind scorers to diagram source, diagrams were unlabelled as to diagram type, and were grouped by clinical subject matter. This resulted in a shuffled but not strictly random ordering of diagram type. Due to stylistic and formatting similarities among diagrams of the same type, scorers may have inferred diagram type.
Our primary consideration in this paper was to determine whether the scoring criteria could be applied in a standardised way. Thus, inter-rater reliability was calculated between final scores given to each of 63 diagrams by the two scorers, using the ordinal score of Krippendorff's alpha [24].
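As a minimal illustration of this reliability calculation, the sketch below uses the open-source krippendorff Python package on two raters' scores; the score values are placeholders rather than the study's ratings, and the package choice is an assumption (any implementation of ordinal Krippendorff's alpha could be used).

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = raters, columns = diagrams (toy values; the study scored 63 diagrams).
# np.nan would mark a missing score for a rater/diagram pair.
scores = np.array([
    [6, 12, 5, 11, 7, 14, 6],   # rater 1
    [7, 12, 6, 10, 7, 13, 5],   # rater 2
], dtype=float)

alpha = krippendorff.alpha(reliability_data=scores,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha (ordinal): {alpha:.2f}")
```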
Assessing and comparing approaches to developing programme theory diagrams
Comparative analysis was performed between programme theory types and settings (external driver diagram, CLAHRC NWL driver diagram and action-effect diagram). Each diagram's total score and score for each individual criterion were calculated as the average between the two scorers, comparing median scores for each set of diagrams. The significance of this comparison was tested using the nonparametric Kruskal-Wallis rank sum test with a significance level of 0.05. Thematic assessment of the results was used to consider the strengths and weaknesses of diagram types. A minimal sketch of this group comparison is shown below.
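The sketch applies SciPy's Kruskal-Wallis test to three groups of composite scores; the values are placeholders, not the study's data.

```python
from scipy.stats import kruskal

# Toy composite scores (0-24) for the three diagram sets described in the text.
external_dd = [6.0, 6.5, 5.5, 7.0, 6.25]
clahrc_dd = [5.5, 6.0, 5.75, 5.0, 6.5]
action_effect = [11.0, 12.5, 10.5, 11.5, 12.0]

stat, p = kruskal(external_dd, clahrc_dd, action_effect)
print(f"Kruskal-Wallis chi-squared = {stat:.2f}, p = {p:.4f}")
```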
Results
Firstly, we present the scoring criteria with rationale as to their development. This is followed by an assessment of their reliability when applied to scoring a variety of programme theory diagrams. Finally, we consider the application of scoring criteria to compare different approaches to developing programme theory.
Programme theory scoring criteria
The programme theory scoring criteria are summarised in Table 1 and presented in full in Section 3, Appendix A.
Rationale for scoring criteria - compiling evidence and experience
For each of the eight questions in Table 1, a score from 0-3 is assigned as follows: 0 = does not meet requirements; 1 = meets some of the criteria but has major issues, or a few instances meet requirements; 2 = largely meets requirements or most instances meet requirements; and 3 = excellent example of requirement.
Scoring category: overall aim (Question 1). The need for healthcare improvement efforts to articulate an overall aim is well-documented [1, 5, 8, 15-17, 21, 25, 26]. The aim should be focused on service user needs [15-17, 21] and agreed by all major stakeholders [1, 8, 15-17, 25, 26]. While some sources recommend including measures, interventions and timelines in the aim statement [15,17], this often conflicts with the objective of widespread engagement and agreement on the aim. Often, not all stakeholders agree that a particular intervention would be effective, or that a particular outcome measure is the most critical [16]. Furthermore, including these aspects in the aim statement serves to obscure cause-effect relationships and measurement concepts which should be made explicit through fully articulated programme theory. Thus, the quality criteria maintain that the overall aim should be stated separately from other components.
Scoring category: logical overview (Question 2). Programme theory dictates that broad categories of factors should be considered to ensure that there are no gaps in intervention planning [20,21]. Stakeholders should be confident that if each of these factors performs well in the system, the overall aim will be achieved. If these high-level factors are not considered systematically, it is difficult to determine whether they portray a comprehensive picture of all factors that could contribute to the aim [16].
Scoring category: clarity of concepts and cause-effect chains (Questions 3-5). Programme theory involves a clear articulation of how activities and interventions are logically proposed to achieve the overall aim [10,12,14,21,23,27,28]. This is a complex undertaking, often underspecified in improvement efforts [10,20,23]. Furthermore, tacit knowledge is a precarious method of storing information [22]. Thus, cause-effect chains need to be fully and explicitly articulated [28], without relying on tacit information to follow the chain of cause and effect. Often proposed factors and interventions are themselves unclear, a related but distinct issue (Section 3, Appendix A). The evidence base is an important facet of programme theory as well [7,8,14,28,29], both in terms of articulating how components of the logic model are evidenced [14,28] as well as identifying the source and strength of the evidence [28,29] (recognising that even where logic models are evidence-based, this understanding can guide evaluation [30]).
Scoring category: measurement and evaluation (Questions 6-8). Clarity of plans for evaluation and measurement is an important attribute of programme theory [1, 5, 7, 11-13, 21-23, 27, 29, 30]. In addition to these plans being clear to all stakeholders, it is important for the measure concepts to be aligned with proposed cause-effect relationships [5,12,21,27,30] and involve both process and outcome measures [7,28]. Process measures help guide the implementation process and can provide rapid feedback to how well the intervention and implementation activities are working and guide adjustments. Outcome measures provide useful information about the impact of an intervention, which usually takes longer to assess.
Reliability of assessments using the programme theory scoring criteria
Inter-rater reliability between scores given to each diagram was 78% (Krippendorff's alpha, ordinal scale) which is within the acceptable range for criteria usability [24].
Use of scoring criteria to compare different approaches to developing programme theory
Of a maximum overall score of 24, the median composite score for external driver diagrams was 6.25, for CLAHRC NWL driver diagrams 5.75 and for CLAHRC NWL action-effect diagrams 11.5 (Figure 1). The setting in which programme theory diagrams were developed (CLAHRC NWL driver diagrams compared to external driver diagrams) had no significant effect, with similar scores observed for each set. The type of diagram used had a significant but moderate effect, with action-effect diagrams scoring higher than both types of driver diagrams (chi-squared = 19.6941, df = 2, P < 0.0001). The average score for action-effect diagrams was under half the total possible score.
Comparing individual scoring criteria reveals that action-effect diagrams tended to perform better than driver diagrams, independent of setting, for the quality of the overall aim, clarity of cause and effect relationships, and distribution and clarity of measures. Neither diagram type included explicit reference to the existing evidence base supporting cause and effect chains. Boxplots comparing the individual scoring criteria assessments can be found in Section 4, Appendix A, along with a description of the rationale for the assessments and examples of good practice.
Discussion
Failure to provide clear programme theory is linked with failure to deliver or sustain improvement. The criteria developed here are the first of their kind and can enable practitioners and researchers to assess the quality of programme theory output, regardless of which approach was used to construct the diagram. QI tools will be of maximum benefit to teams when they are used as intended, but evidence suggests that programme theory is often underdeveloped and poorly articulated [30]. These criteria will help guide high-quality production of programme theory diagrams, and provide a structured method for researchers to evaluate their use.
The criteria, including categories of overall aim, clarity of components and cause-effect chains, and measurement, were developed by building on the existing literature of programme theory. New concepts were introduced only when they were evidenced from practical experience and provided generalisable lessons. For example, the action-effect method guidance [16] expanded the definition of the overall aim, stating it should be based on the concept of 'To improve health for service-users'. This was informed by experience recognising the potential for a patient-centred aim to facilitate engagement with diverse stakeholder groups, and the need for this aim to be free from measures and interventions which may be inconsistent or controversial. The quality criteria proposed in this paper provide a foundation for future work to develop and achieve professional consensus on wider applicability and generalisability.
The criteria demonstrated good reliability for assessing individual programme theory diagrams, with inter-rater reliability of 78%. The criteria also show potential for comparing and contrasting different approaches to constructing programme theory diagrams. Driver diagrams produced in different organisational settings received similar average scores, suggesting that the scores reflect underlying attributes of the programme theory approach rather than specific variations by which the approach was applied in different settings. While the action-effect diagrams in this sample scored significantly higher than the sampled driver diagrams, their median score was 11.5 out of 24 points, indicating further room for improvement. The criteria highlight those aspects of programme theory that require greatest improvement, and these could be improved with targeted guidance, expert facilitation or support (e.g. clarity of evidence base, logical overview quality). A limitation of this study is that only two types of approaches to constructing programme theory were studied. Further research is required to explore the application of the scoring criteria in other approaches including logic models [18,19].
Figure 1. Boxplot diagram of total composite scores, averaged (mean) between scorers, for each diagram set (external driver diagram, CLAHRC NWL driver diagram and CLAHRC NWL action-effect diagram). For all boxplot diagrams, the median is marked by a thick horizontal line, the upper quartile by the box above the line and the lower quartile by a box below the line. Whiskers indicate values 1.5 times above and below the interquartile range. Circles indicate outlier cases between 1.5 and 3 times the interquartile range, and asterisks indicate outliers greater than 3 times the interquartile range.
The data suggest that the construction of a quality programme theory diagram is conceptually difficult. Based on the experience of the authors, we suggest it requires significant expertise in QI methodology that cannot be provided through written instruction and light-touch facilitation alone. Further research is needed to review facilitation approaches or additional technical assistance necessary to improve the quality of programme theory diagrams. This research opportunity is provided by the existence of scoring criteria as a systematic method for determining the success of facilitation and technical assistance improving the quality of resulting programme theory diagrams.
Further research is needed to investigate whether expert facilitation or technical assistance can encourage engagement in programme theory and iterative development and use of the programme theory diagrams to support constructive dialogue and exchange of tacit knowledge between stakeholders, and to reduce the cognitive burden associated with diagram construction. Facilitation or technical assistance could lead to more substantial improvements in developing factors and cause-effect chains that are less reliant on tacit knowledge, link to the evidence base and build a robust evaluation framework [29]. This is theorised in the literature to lead to improvements in team functioning and buy-in to the improvement project, as well as aiding the spread of success to other environments and initiatives [22,23,28]. One important consideration is the challenge of exposing tacit knowledge which, due to its 'sticky' nature, is often difficult for frontline staff to perceive and share with outsiders, and therefore presents a barrier to communicating their reasoning to a lay audience [31].
Conclusion
This is the first structured approach to assess the quality of programme theory developed by QI teams in practice. The scoring criteria incorporate a summary of published literature and practical experience regarding the benefits of programme theory.
The robustness and viability of the scoring criteria for practical application was demonstrated by 78% inter-rater reliability between two independent scorers. The scoring criteria were able to detect differences in diagram type (action-effect diagram versus driver diagram) independently of the setting in which diagrams were constructed.
Future uses include assessment of individual programme theory diagrams and comparison of different approaches (e.g. methodological and teaching) to produce programme theory. The criteria can be used as a tool to guide the production of better programme theory diagrams and also highlights where additional support for QI teams could be needed.
Supplementary material
Supplementary material is available at International Journal for Quality in Health Care online. | 2018-04-26T23:26:33.835Z | 2018-04-09T00:00:00.000 | {
"year": 2018,
"sha1": "ba8382c56b3118fbeafb19d2d907a610502ef91e",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/intqhc/article-pdf/30/7/508/25507674/mzy063.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba8382c56b3118fbeafb19d2d907a610502ef91e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
15636415 | pes2o/s2orc | v3-fos-license | Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control
This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: 1) generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.
Introduction
The approach of training sequence generation models using likelihood maximization suffers from known failure modes, and it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. For example, long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) networks trained to predict the next character in sequences of text may produce text that has correct spelling, punctuation, and even a semblance of grammar, but the generated text shifts so rapidly from topic to topic, that it is almost completely nonsensical (see (Graves, 2013) for an example). Similar networks trained to predict the next note in a melody suffer from the same problem; the generated music has no consistent theme or structure, and appears wandering and random. In addition, these models are prone to excessively repeating the same output token, a problem that has also been noted in the context of recurrent dialog generation models (Li et al., 2016).
To ameliorate these problems we propose Sequence Tutor, a novel approach which uses RL to impose structure on a sequence generation RNN via task-specific rewards, while simultaneously ensuring that information learned from data is retained. This is accomplished by maintaining a fixed copy of a sequence generation RNN pre-trained on data, which is termed the Reward RNN. Rather than simply using the Reward RNN to supply part of the rewards to our model, we derive novel off-policy RL methods for sequence generation from KL-control that allow us to directly penalize Kullback Leibler (KL) divergence from the policy defined by the Reward RNN. As a byproduct of minimizing KL our objective includes an entropy regularization term that encourages high entropy in the distribution of the RL model. This is ideal for sequence generation tasks such as text, music, or molecule generation, in which maintaining diversity in the samples generated by the model is critical.
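As a rough sketch of the objective being described (the notation and the weighting constant c are generic here, not the paper's exact formulation), penalizing KL divergence from the prior policy p while maximizing task reward can be written as

```latex
\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t} r(s_t,a_t)\right]
- \frac{1}{c}\,\mathbb{E}_{\pi}\!\left[\sum_{t} \log\frac{\pi(a_t\mid s_t)}{p(a_t\mid s_t)}\right]
=
\mathbb{E}_{\pi}\!\left[\sum_{t}\Big( r(s_t,a_t) + \tfrac{1}{c}\log p(a_t\mid s_t)\Big)\right]
+ \frac{1}{c}\,\mathbb{E}_{\pi}\!\left[\sum_{t}\mathcal{H}\big(\pi(\cdot\mid s_t)\big)\right]
```

The second form makes explicit why an entropy term over the learned policy appears as a byproduct of the KL penalty, which is what encourages diversity in the generated samples.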
Sequence Tutor effectively combines both data and taskrelated goals, without relying on either as a perfect metric of task success. This is an important novel direction of research. Much previous work on combining RL and MLE has used MLE training simply as a way to bootstrap the training of an RL model (Ranzato et al., 2015;Bahdanau et al., 2016;Li et al., 2016), since training with RL from scratch is difficult. However, this approach does not encourage diversity of the generated samples, and can be problematic when task-specific rewards are incomplete or imperfect. Designing an appropriate reward definition is highly non-trivial, and often the hand-crafted rewards cannot be fully trusted (Vedantam et al., 2015;Liu et al., 2016). And yet, relying on data alone can be insufficient when the data itself contains biases, as has been shown for text data (Caliskan-Islam et al., 2016), or when domain-specific constraints cannot be encoded directly into MLE training. By learning a policy that trades off staying close to the data distribution while improving performance on specific metrics, Sequence Tutor reduces both of these problems.
This paper contributes to the sequence training and RL literature by a) proposing a novel method for combining MLE and RL training; b) showing the connection between KL control and sequence generation; c) deriving the explicit relationships among a generalized variant of Ψ-learning (Rawlik et al., 2012), G-learning (Fox et al., 2015), and Q-learning with log prior augmentation, and being the first to empirically compare these methods and use them with deep neural networks.
We explore the usefulness of our approach for two sequence generation applications. The first, music generation, is a difficult problem in which the aesthetic beauty of generated sequences cannot be fully captured in a known reward function, but in which models trained purely on data cannot produce well-structured sequences. Through an empirical study, we show that by imposing rules of music theory on a melody generation model, Sequence Tutor is able to produce melodies which are varied, yet more harmonious, interesting, and rated as significantly more subjectively pleasing than those of the MLE model. Further, Sequence Tutor is able to significantly reduce unwanted behaviors and failure modes of the original RNN. The effectiveness of Sequence Tutor is also demonstrated for computational molecular generation, a task in which the goal is to generate novel drug-like molecules with desirable properties by outputting a string representation of the molecule encoding. However, generating valid molecules can prove difficult, as it is hard for probabilistic models to learn all the constraints that define physically realizable molecules directly from data (Gómez-Bombarelli et al., 2016). We show that Sequence Tutor is able to yield a higher percentage of valid molecules than the baseline MLE RNN, and the generated molecules score higher on metrics of druglikeness and ease of synthesis.
Related Work
Recent work has attempted to use both MLE and RL in the context of structured prediction. While the attempts were successful, the problems of maintaining information about the data distribution and diversity in the generated samples were not addressed. MIXER (Mixed Incremental Cross-Entropy Reinforce) (Ranzato et al., 2015) uses BLEU score as a reward signal to gradually introduce a RL loss to a text translation model. Bahdanau et al. (2016) applies an actor-critic method and uses BLEU score directly to train a critic network to output the value of each word, where the actor is again initialized with the policy of an RNN trained with next-step prediction. Li et al. (2016) use RL to improve a pre-trained dialog model with heuristic rewards. These approaches assume that the complete task reward specification is available. They pre-train a good policy with supervised learning so that RL can be used to learn the true task objective, since it can be difficult to reach convergence when training with pure RL. However, the original MLE policy of these models is overwritten by the RL training process. In contrast, Sequence Tutor uses rewards to correct certain properties of the generated data, while learning most information from data and maintaining this information; an important ability when the true reward function is not available or imperfect.
Reward augmented maximum likelihood (RAML) (Norouzi et al., 2016) is an approach designed to improve MLE training of a translation model by augmenting the ground truth targets with additional outputs that are within a small edit distance, and performing MLE training against those as well. The authors show that their approach is equivalent to minimizing KL-divergence between an RL exponentiated payoff distribution based on edit distance, and the MLE distribution. In contrast, our goal is generation rather than prediction, and we train an RL rather than MLE model. The RAML approach, while an important contribution, is only viable if it is possible to generate additional MLE training samples that are similar in terms of the reward function to the ground truth (i.e. samples within a small edit distance). However in some domains, including the two explored in this paper, generating similar samples with high reward is not only not possible, but in fact constitutes the entire problem under investigation.
Finally, our approach is related to KL control (Todorov, 2007; Kappen et al., 2012; Rawlik et al., 2012), a branch of stochastic optimal control (SOC) (Stengel, 1986). There is also a connection between this work and Maximum Entropy Inverse RL (Ziebart et al., 2008), which can be seen as KL control with a flat, improper prior. From KL control, we take inspiration from two off-policy, model-free methods, Ψ-learning (Rawlik et al., 2012) and G-learning (Fox et al., 2015). Both approaches are derived from a KL-regularized RL objective, where an agent maximizes the reward while incurring an additional penalty for divergence from some prior policy. While our methods rely on similar derivations presented in these papers, our methods have different motivations and forms from the original papers. The original Ψ-learning (Rawlik et al., 2012) restricts the prior policy to be the policy at the previous iteration and solves the original RL objective with conservative, KL-regularized policy updates, similar to conservative policy gradient methods (Peters et al., 2010; Schulman et al., 2015). The original G-learning (Fox et al., 2015) penalizes divergence from a simple uniform prior policy in order to cope with over-estimation of target Q values. These techniques have not been applied to deep learning techniques or with RNNs, or as a way to improve a pre-trained MLE model. Our work is the first to explore these methods in such a context, and includes a Q-learning model with additional cross-entropy reward as a comparable alternative. To the best of our knowledge, our work is the first to provide comparisons among these three approaches.
There has also been prior work in the domain of generative modeling of music. Using RNNs for this purpose has been explored in a variety of contexts, including generating Celtic folk music (Sturm et al., 2016) or improvising the blues (Eck & Schmidhuber, 2002). Often, this involves training the RNN to predict the next note in a monophonic melody; however, as mentioned above, the melodies generated by such models tend to wander and lack musical structure. Some authors have experimented with encoding musical structure into a hierarchical RNN with layers dedicated to generating the melody, drums, and chords (Chu et al., 2016). Other approaches have examined RNNs with richer expressivity, latent variables for notes, or raw audio synthesis (Boulanger-Lewandowski et al., 2012; Gu et al., 2015; Chung et al., 2015). Recently, WaveNet produced impressive performance in generating music from raw audio using convolutional neural networks with receptive fields at various time scales (van den Oord et al., 2016). However, the authors themselves note that "even with a receptive field of several seconds, the models did not enforce long-range consistency which resulted in second-to-second variations in genre, instrumentation, and sound quality" (p. 8).
Finally, prior work has successfully performed computational molecular generation with deep neural networks. Segler et al. (2017) demonstrated that an LSTM trained on sets of biologically active molecules can be used to generate novel molecules with similar properties. Gómez-Bombarelli et al. (2016) trained a variational autoencoder to learn a compact embedding of molecules encoded using the SMILES notation. By interpolating in the embedding space and optimizing for desirable metrics of drug quality, the authors were able to decode molecules with high scores on these metrics. However, producing embeddings that led to valid molecules was difficult; in some cases, as few as 1% of generated sequences proved to be valid molecule encodings.
Background
In RL, an agent interacts with an environment. Given the state of the environment at time t, s_t, the agent takes an action a_t according to its policy π(a_t|s_t), receives a reward r(s_t, a_t), and the environment transitions to a new state s_{t+1}. The agent's goal is to maximize reward over a sequence of actions, with a discount factor of γ applied to future rewards. The optimal deterministic policy π* is known to satisfy the following Bellman optimality equation,

Q(s_t, a_t; π*) = r(s_t, a_t) + γ E_{p(s_{t+1}|s_t, a_t)}[ max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; π*) ]   (1)

where Q^π(s_t, a_t) is the Q function of a policy π. In Deep Q-learning (Mnih et al., 2013), a neural network called the deep Q-network (DQN) is trained to approximate Q(s, a; θ), using the following objective,

L(θ) = E_β[ ( r(s_t, a_t) + γ max_{a'} Q(s_{t+1}, a'; θ^-) − Q(s_t, a_t; θ) )^2 ]   (2)

where β is the exploration policy and θ^- denotes the parameters of the target Q-network (Mnih et al., 2013), which are held fixed during the gradient computation. The target Q-network is updated more slowly than the Q-network; for example, the moving average of θ can be used as θ^-, as proposed by Lillicrap et al. (2015). Exploration can be performed with either the ε-greedy method or Boltzmann sampling. Additional techniques such as a replay memory (Mnih et al., 2013) are used to stabilize and improve learning.
Sequence Tutor
Given a trained sequence generation RNN, we would like to impose domain-specific rewards based on the structure and quality of generated sequences, while still maintaining information about typical sequences learned from data. Therefore, we treat the trained model as a black-box prior policy, and focus on developing a method that can tune some properties of the model without interfering with the original probability distribution learned from data. The separation between the trained sequence model and the tuning method is important, as it prevents RL training from overwriting the original policy. To accomplish this task, we propose Sequence Tutor. An LSTM trained on data supplies the initial weights for three networks in the model: a recurrent Q-network and target Q-network, and a Reward RNN. The Reward RNN is held fixed during training, and treated as a prior policy which can supply the probability of a given token in a sequence as originally learned from data.
To apply RL to sequence generation, generating the next token in the sequence is treated as an action a. The state of the environment consists of all of the tokens generated so far, i.e. s_t = {a_1, a_2, ..., a_{t−1}}. Given action a_t, we would like the reward r_t to combine information about the prior policy p(a_t|s_t) as output by the Reward RNN, as well as some domain- or task-specific rewards r_T. Figure 1 illustrates these ideas.
Figure 1: An RNN pre-trained on data using MLE supplies the initial weights for the Q-network and target Q-network, and a fixed copy is used as the Reward RNN.
Q-learning with log prior augmentation
The simplest and most naïve way to incorporate information about the prior policy is to directly augment the task-specific rewards with the output of the Reward RNN. In this case, the total reward given at time t becomes:

r'(s_t, a_t) = log p(a_t|s_t) + r_T(s_t, a_t)/c   (3)

where c is a constant controlling the emphasis placed on the task-specific rewards. Given the DQN objective in Eq. 2 and the modified reward function in Eq. 3, the objective and learned policy are:

L(θ) = E_β[ ( log p(a_t|s_t) + r_T(s_t, a_t)/c + γ max_{a'} Q(s_{t+1}, a'; θ^-) − Q(s_t, a_t; θ) )^2 ]   (4)

π_θ(a|s) = δ(a = argmax_a Q(s, a; θ))   (5)
This modified objective forces the model to learn that the most valuable actions are those that conform to the music theory rules, but still have high probability in the original data. However, the DQN learns a deterministic policy (as shown in Eq. 5), which is not ideal for sequence generation. Therefore, after the model is trained, we generate sequences by sampling from the softmax function applied to the predicted Q-values.
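A minimal NumPy sketch of this modified objective is given below: the first function computes the one-step target implied by Eq. 3-4 from the Reward RNN log-probabilities and the target network's Q-values, and the second samples the next token from a softmax over predicted Q-values, as described above. The function and argument names are illustrative rather than taken from the authors' implementation, and the default values of c and γ simply echo the settings reported later in the paper.

```python
import numpy as np

def augmented_q_target(log_prior, task_reward, next_q_target, c=0.5, gamma=0.5):
    """One-step target for Q-learning with log prior augmentation (Eq. 3-4).

    log_prior     : log p(a_t | s_t) from the fixed Reward RNN, shape (batch,)
    task_reward   : task-specific reward r_T(s_t, a_t), shape (batch,)
    next_q_target : target-network Q-values at s_{t+1}, shape (batch, n_actions)
    """
    total_reward = log_prior + task_reward / c               # Eq. 3
    return total_reward + gamma * next_q_target.max(axis=1)  # bootstrap with max_a' Q(s_{t+1}, a'; theta^-)

def sample_token(q_values, temperature=1.0, rng=None):
    """Generate by Boltzmann sampling from a softmax over the predicted Q-values."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(q_values, dtype=float) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```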
KL Control for Sequence Generation
If we cast sequence generation as a sequential decision-making problem and express the desired sequence properties in terms of target rewards, the problem can be expressed as a KL control problem for a non-Markovian system. KL control (Todorov, 2007; Kappen et al., 2012; Rawlik et al., 2012) is a branch of stochastic optimal control (SOC) (Stengel, 1986) that studies an RL, or control, problem in which the agent tries to maximize its task reward while minimizing deviation from a prior policy. For our purposes, we treat a trained MLE sequence model as the prior policy, and thus the objective is to train a new policy, or sequence model, to maximize some rewards while staying close to the original MLE model. We show that such a KL control formulation allows us to derive additional variants of Q-learning with minimal modifications, which give rise to different properties. Let τ = {a_1, a_2, ..., a_T} represent the sequence, r(τ) the reward of the sequence, p(τ) the prior distribution over τ given by the trained sequence model, and q(τ) the policy of the Sequence Tutor model. The objective is then to maximize the following expression with respect to q(τ), where D_KL denotes the KL divergence between distributions:

E_q[ r(τ)/c ] − D_KL[ q(τ) || p(τ) ]   (6)

We express q(τ) in terms of a parametrized recurrent policy π_θ(a_t|s_t), i.e. q(τ) = ∏_{t=1}^{T} π_θ(a_t|s_t), where s_t = {a_1, a_2, ..., a_{t−1}} indicates that the system is non-Markovian. The prior policy is expressed similarly, p(τ) = ∏_{t=1}^{T} p(a_t|s_t). The reinforcement learning objective, where E_π[·] below indicates expectation with respect to sequences sampled from π, is then:

E_π[ Σ_{t=1}^{T} ( r(s_t, a_t)/c + log p(a_t|s_t) − log π_θ(a_t|s_t) ) ]   (7)

The difference between this equation and Eq. 4 is that an entropy regularizer is now included, and thus the optimal policy is no longer deterministic. Below, we derive general temporal-difference based methods for the KL-control problem for sequence generation.
Recurrent Generalized Ψ-learning
Let V^π(s_t) denote the recurrent value function of the policy π_θ, given by

V^π(s_t) = E_π[ Σ_{t'=t}^{T} ( r(s_{t'}, a_{t'})/c + log p(a_{t'}|s_{t'}) − log π_θ(a_{t'}|s_{t'}) ) ]

We define the generalized Ψ function, analogous to the Q function for KL control, as below. We call this the generalized Ψ function, as it was introduced in deriving Ψ-learning (Rawlik et al., 2012), and the following derivation is a generalization of the Ψ-learning algorithm.
Ψ^π(s_t, a_t) = r(s_t, a_t)/c + log p(a_t|s_t) + V^π(s_{t+1})   (8)

Note that the state s_{t+1} is given deterministically by s_t = {a_1, a_2, ..., a_{t−1}} and a_t for sequence modeling, and thus the expressions do not contain the usual stochastic dynamics p(s_{t+1}|s_t, a_t). The value function V^π(s_t) can be recursively expressed in terms of Ψ^π,

V^π(s_t) = E_{a_t∼π}[ Ψ^π(s_t, a_t) − log π(a_t|s_t) ]

Fixing Ψ(s_t, a_t) = Ψ^π(s_t, a_t) and constraining π to be a probability distribution, the optimal greedy policy update π* can be derived, along with the corresponding optimal value function,

π*(a_t|s_t) ∝ e^{Ψ(s_t, a_t)}   (11)

V*(s_t) = log Σ_a e^{Ψ(s_t, a)}   (12)

Given Eq. 8 and 12, the following Bellman optimality equation for the generalized Ψ function is derived:

Ψ*(s_t, a_t) = r(s_t, a_t)/c + log p(a_t|s_t) + γ log Σ_{a'} e^{Ψ*(s_{t+1}, a')}   (13)
The Ψ-learning loss directly follows:

L(θ) = E_β[ ( y_t − Ψ_θ(s_t, a_t) )^2 ],  with  y_t = log p(a_t|s_t) + r(s_t, a_t)/c + γ log Σ_{a'} e^{Ψ_{θ^-}(s_{t+1}, a')}   (14)

Here β corresponds to sampling sequence trajectories from an arbitrary distribution, in practice the experience replay memory (Mnih et al., 2013). Ψ_{θ^-} indicates that the target network is used. Ψ_θ, i.e. π_θ, is parametrized with recurrent neural networks, and for discrete actions π_θ is effectively a softmax layer on top of Ψ_θ.
Recurrent G-learning
We can derive another algorithm by parametrizing Ψ_θ indirectly via Ψ_θ(s_t, a_t) = log p(a_t|s_t) + G_θ(s_t, a_t). Substituting into the above equations, we get a different temporal-difference target,

y_t = r(s_t, a_t)/c + γ log Σ_{a'} p(a'|s_{t+1}) e^{G_{θ^-}(s_{t+1}, a')}   (15)

and policy

π_θ(a_t|s_t) ∝ p(a_t|s_t) exp(G_θ(s_t, a_t))

This formulation corresponds to G-learning (Fox et al., 2015), which can thus be seen as a special case of generalized Ψ-learning. Unlike Ψ-learning, which builds knowledge about the prior policy directly into the Ψ function, the G function does not give the policy directly but instead needs to be dynamically mixed with the prior policy probabilities. While this computation is straightforward for discrete action domains such as ours, extensions to continuous action domains require additional considerations such as normalizability of the Ψ-function parametrization (Gu et al., 2016).
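The two targets above differ only in where the prior enters: Ψ-learning adds log p(a_t|s_t) to the reward and bootstraps with a plain log-sum-exp over Ψ, while G-learning folds the prior into the bootstrap as a weighting. Below is a minimal NumPy sketch of both targets (Eq. 14 and 15) together with the G-learning policy logits; the array names are illustrative, and the log-sum-exp is computed in a numerically stable way.

```python
import numpy as np

def logsumexp(x, axis=-1):
    """Numerically stable log(sum(exp(x))) along an axis."""
    m = x.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(x - m).sum(axis=axis))

def psi_target(log_prior_t, task_reward, next_psi_target, c=0.5, gamma=0.5):
    """y_t for generalized Psi-learning (Eq. 14).
    next_psi_target: Psi_{theta^-}(s_{t+1}, a') for all a', shape (batch, n_actions)."""
    return log_prior_t + task_reward / c + gamma * logsumexp(next_psi_target, axis=1)

def g_target(task_reward, next_log_prior, next_g_target, c=0.5, gamma=0.5):
    """y_t for G-learning (Eq. 15): the prior p(a'|s_{t+1}) weights the bootstrap term.
    next_log_prior: log p(a'|s_{t+1}) for all a', shape (batch, n_actions)."""
    return task_reward / c + gamma * logsumexp(next_log_prior + next_g_target, axis=1)

def g_policy_logits(log_prior, g_values):
    """pi_theta(a|s) is proportional to p(a|s) exp(G_theta(s, a));
    a softmax over these logits gives the sampling policy."""
    return log_prior + g_values
```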
The KL control-based derivation also has another benefit in that the stochastic policies can be directly used as an exploration strategy, instead of heuristics such as ε-greedy or additive noise (Mnih et al., 2013; Lillicrap et al., 2015).
Sequence Tutor implementation
Following from the above derivations, we compare three methods for implementing Sequence Tutor: Q-learning with log prior augmentation (based on Eq. 4), generalized Ψ-learning (based on Eq. 14), and G-learning (based on Eq. 15). A pre-trained sequence generation LSTM is used as the Reward RNN, to supply the cross-entropy reward in Q-learning and the prior policy in G- and generalized Ψ-learning. These approaches are compared to both the original performance of the MLE RNN and a model trained using only RL with no prior policy. Model evaluation is performed every 100,000 training epochs by generating 100 sequences and assessing the average r_T and log p(a|s).
The code for Sequence Tutor, including a checkpointed version of the trained melody RNN, is available at https://github.com/tensorflow/magenta/tree/master/magenta/models/rl_tuner.
Experiment I: Melody Generation
Music compositions adhere to relatively well-defined structural rules, making music an interesting sequence generation challenge. For example, music theory tells us that groups of notes belong to keys, chords follow progressions, and songs have consistent structures made up of musical phrases. Our research question is therefore whether such constraints can be learned by an RNN, while still allowing it to maintain note probabilities learned from data.
To test this hypothesis, we developed several rules that we believe describe pleasant-sounding melodies, taking inspiration from a text on melodic composition (Gauldin, 1995). We do not claim these characteristics are exhaustive or strictly necessary for good composition; rather, they are an incomplete measure of task success that can simply guide the model towards traditional composition structure. It is therefore crucial that the Sequence Tutor approach allows the model to retain knowledge learned from real songs in the training data. The rules comprising the music-specific reward function r T (a, s) encourage melodies to: stay in key, start with the tonic note, resolve melodic leaps, have a unique maximum and minimum note, prefer harmonious intervals, play motifs and repeat them, have a low autocorrelation at a lag of 1, 2, and 3 beats, and avoid excessively repeating notes. Interestingly, while excessively repeating tokens is a common problem in RNN sequence generation models, avoiding this behavior is also Gauldin's first rule of melodic composition (p. 42).
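As an illustration of how such rules translate into a per-step reward r_T(a, s), the sketch below scores two of the listed behaviors, staying in a given key and avoiding excessive note repetition, for a melody encoded as a list of integer pitch tokens. The key pitch-class set and the reward and penalty magnitudes here are placeholders chosen for illustration (the paper later reports, e.g., a −100 penalty for excessive repetition), not the exact values used by the authors.

```python
def in_key_reward(note, key_pitch_classes=frozenset({0, 2, 4, 5, 7, 9, 11}),
                  reward=1.0, penalty=-1.0):
    """Reward notes whose pitch class belongs to the key (default set: C major)."""
    return reward if note % 12 in key_pitch_classes else penalty

def repetition_penalty(melody, max_repeats=4, penalty=-100.0):
    """Penalize the current note if it extends a run of identical notes beyond max_repeats."""
    if len(melody) > max_repeats and len(set(melody[-(max_repeats + 1):])) == 1:
        return penalty
    return 0.0

def task_reward(melody):
    """Combine rule-based terms for the most recently generated note."""
    note = melody[-1]
    return in_key_reward(note) + repetition_penalty(melody)
```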
To train the model, we begin by extracting monophonic melodies from a corpus of 30,000 MIDI songs and encoding them as one-hot sequences of notes 1. These melodies are then used to train an LSTM with one layer of 100 cells. Optimization was performed with Adam (Kingma & Ba, 2014), a batch size of 128, an initial learning rate of 0.5, and a stepwise learning rate decay of 0.85 every 1000 steps. Gradients were clipped to ensure the L2 norm was less than 5, and weight regularization was applied with β = 2.5 × 10^−5. Finally, the losses for the first 8 notes of each sequence were not used to train the model, since it cannot reasonably be expected to accurately predict them with no context. The trained RNN eventually obtained a validation accuracy of 92% and a log perplexity score of 0.2536. This model was used as described above to initialize the three sub-networks in the Sequence Tutor model. The Sequence Tutor model was trained using a similar configuration to the one above, except with a batch size of 32 and a reward discount factor of γ = 0.5. The Target-Q-network's weights θ^- were gradually updated towards those of the Q-network (θ) according to the formula (1 − η)θ^- + ηθ, where η = 0.01 is the Target-Q-network update rate. A strength of our model is that the influence of data and task-specific rewards can be explicitly controlled by adjusting the temperature parameter c. We replicated our results for a number of settings of c; we present results for c = 0.5 below because we believe them to be most musically pleasing; additional results are available at https://goo.gl/cTZy8r. Similarly, we replicated the results using both ε-greedy and Boltzmann exploration, and present the results using ε-greedy exploration below.
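The target-network update rule quoted above is a standard soft (Polyak) update; a one-line NumPy version is shown below, assuming the parameters of the two networks are stored as matching lists of arrays.

```python
def soft_update(theta_target, theta, eta=0.01):
    """theta^- <- (1 - eta) * theta^- + eta * theta, applied parameter-wise."""
    return [(1.0 - eta) * t_targ + eta * t for t_targ, t in zip(theta_target, theta)]
```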
Results
Table 1 provides quantitative results in the form of performance on the music theory rules to which we trained the model to adhere; for example, we can assess the fraction of notes played by the model which belonged to the correct key, or the fraction of melodic leaps that were resolved. The statistics were computed by randomly generating 100,000 melodies from each model.
The results above demonstrate that the application of RL is able to correct almost all of the targeted "bad behaviors" of the MLE RNN, while improving performance on the desired metrics. For example, the original LSTM model was extremely prone to repeating the same note; after applying RL, we see that the number of notes belonging to some excessively repeated segment has dropped from 63% to nearly 0% in all of the Sequence Tutor models. While the metrics for the G model did not improve as consistently, the Q and Ψ models successfully learned to adhere to most of the imposed rules. The degree of improvement on these metrics is related to the magnitude of the reward given for the behavior. For example, a strong penalty of -100 was applied each time a note was excessively repeated, while a reward of only 3 was applied at the end of a melody for unique extrema notes (which most likely explains the lack of improvement on this metric). The reward values could be adjusted to improve the metrics further; however, we found that these values produced pleasant melodies.
While the metrics indicate that the targeted behaviors of the RNN have improved, it is not clear whether the models have retained information about the training data. Figure 2a plots the average log p(a|s) as produced by the Reward RNN for melodies generated by the models every 100,000 training epochs; Figure 2b plots the average r T . Included in the plot is an RL only model trained using only the music theory rewards, with no information about log p(a|s). Since each model is initialized with the weights of the trained MLE RNN, we see that as the models quickly learn to adhere to the music theory constraints, log p(a|s) falls from its initial point. For the RL only model, log p(a|s) reaches an average of -3.65, which is equivalent to an average p(a|s) of approximately 0.026, or essentially a random policy over the 38 actions with respect to the distribution defined by the Reward RNN. Figure 2a shows that each of our models (Q, Ψ, and G) attain higher log p(a|s) values than this baseline, indicating they have maintained information about the data distribution, even over 3,000,000 training steps. The G-learning implementation scores highest on this metric, at the cost of slightly lower average r T . This compromise between data probability and adherence to music theory could explain the difference in the G model's performance on the music theory metrics in Table 1. Finally, we have verified that by increasing the c parameter it is possible to train all the models to have even higher average log p(a|s), but found that c = 0.5 produced melodies that sounded better subjectively.
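The correspondence quoted above between the RL-only model's average log p(a|s) and a uniformly random policy can be checked directly, as in the tiny snippet below: exp(−3.65) is roughly 0.026, which is essentially 1/38 for the 38-action vocabulary.

```python
import math

avg_log_p = -3.65
print(math.exp(avg_log_p))  # ~0.0259
print(1 / 38)               # ~0.0263, i.e. a uniform policy over 38 actions
```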
The question remains whether the RL-tutored models actually produce more pleasing melodies. The sample melodies used for the study are available here: goo.gl/XIYt9m; we encourage readers to judge their quality for themselves.
To more formally answer this question, we conducted a user study via Amazon Mechanical Turk in which participants were asked to rate which of two randomly selected melodies they preferred on a Likert scale. A total of 192 ratings were collected; each model was involved in 92 of these comparisons. Figure 3 plots the number of comparisons in which a melody from each model was selected as the most musically pleasing. A Kruskal-Wallis H test of the ratings showed that there was a statistically significant difference between the models, χ²(3) = 109.480, p < 0.001. Mann-Whitney U post-hoc tests revealed that the melodies from all three Sequence Tutor models (Q, Ψ, and G) had significantly higher ratings than the melodies of the MLE RNN, p < .001. The Q and Ψ melodies were also rated as significantly more pleasing than those of the G model, but did not differ significantly from each other.
Figure 3: The number of times a melody from each model was selected as most musically pleasing. Error bars reflect the std. dev. of a binomial distribution fit to the binary win/loss data from each model.
Discussion
Listening to the samples produced by the MLE RNN reveals that they are sometimes discordant and usually dull; the model tends to place rests frequently, repeat the same note, and produce melodies with little variation. In contrast, the melodies produced by the Sequence Tutor models are more varied and interesting. The G model tends to produce energetic and chaotic melodies, which include sequences of repeated notes. This repetition is likely because the G policy as defined in Eq. 15 directly mixes p(a|s) with the output of the G network, and the MLE RNN strongly favours repeating notes. The most pleasant melodies are generated by the Q and Ψ models. These melodies stay firmly in key and frequently choose more harmonious interval steps, leading to melodic and pleasant compositions. However, it is clear they have retained information about the training data; for example, the sample q2.wav in the sample directory ends with a seemingly familiar riff.
While we acknowledge that the monophonic melodies generated by these models, which are based on highly simplistic rules of melodic composition, do not approach the level of artistic merit of human composers, we believe this study provides a proof-of-concept that encoding even incomplete and partially specified domain knowledge using our method can help the outputs of an LSTM adhere to a more consistent structure. The musical complexity of the songs is limited not just by the heuristic rules, but also by the simple monophonic encoding, which cannot represent the dynamics and expressivity of a musical performance. Although these melodies cannot surpass those of human musicians, attempting to train a model to generate aesthetically pleasing outputs in the absence of a better metric of human taste than log-likelihood is a problem of broader interest to the artificial intelligence community.
Experiment II: Computational Molecular Generation
As a follow-on experiment, we tested the effectiveness of Sequence Tutor for generating a higher yield of synthetically accessible drug-like molecules. Organic molecules can be encoded using the commonly used SMILES representation (Weininger, 1970). For example, amphetamine can be encoded as 'CC(N)Cc1ccccc1', while creatine is 'CN(CC(=O)O)C(=N)N'. Using this character encoding, it is straightforward to train an MLE RNN to generate sequences of SMILES characters; we trained such a model using the same settings as described above for the melody MLE RNN. However, only about a third of the molecules generated using this simple approach are actually valid SMILES encodings. Further, this approach does not directly optimize for metrics of molecule or drug quality. These metrics include: a) the water-octanol partition coefficient (logP), which is important in assessing the drug-likeness of a molecule; b) synthetic accessibility (SA) (Ertl & Schuffenhauer, 2009), a score from 1-10 that is lower if the molecule is easier to synthesize; and c) Quantitative Estimation of Drug-likeness (QED) (Bickerton et al., 2012), a more subjective measure of drug-likeness based on abstract ideas of medicinal aesthetics.
To optimize for these metrics, while simultaneously improving the percent yield of valid molecules from the RNN, we constructed a reward function that incentivizes validity, logP, SA, and QED using an open-source library called RDKit (http://www.rdkit.org/). Included in the reward function was a penalty for molecules with unrealistically large carbon rings (size larger than 6), as per previous work (Gómez-Bombarelli et al., 2016). Finally, after observing that the model could exploit the reward function by generating the simple molecule 'N' repeatedly, or 'CCCCC...' (which produces an unrealistically high logP value), we added penalties for sequences shorter than, or with more consecutive carbon atoms than, any sequence in the training data. Sequence Tutor was then trained using these rewards, the pre-trained MLE RNN, and similar settings to the first experiment, except with ε-greedy exploration with ε = .01, a batch size of 512, and a discount factor of γ = .95. For this experiment, we also made use of prioritized experience replay (Schaul et al., 2015) to allow the model to more frequently learn from relatively rare valid samples. A value of c = 2.85 led to a higher yield of valid molecules with high metrics, but still encouraged the diversity of generated samples.
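A reward function along these lines can be sketched with RDKit as below. This is a simplified illustration rather than the authors' exact implementation: the relative weights, the ring-size penalty value, and the handling of invalid SMILES are placeholder choices, and the synthetic-accessibility score assumes the sascorer module distributed in RDKit's Contrib directory is importable.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED
import sascorer  # from RDKit Contrib/SA_Score; add its directory to sys.path if needed

def molecule_reward(smiles, invalid_penalty=-10.0, max_ring_size=6, ring_penalty=-5.0):
    """Score a generated SMILES string on validity, logP, SA, QED, and ring size."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                       # invalid encoding: strong penalty
        return invalid_penalty
    logp = Descriptors.MolLogP(mol)       # water-octanol partition coefficient
    sa = sascorer.calculateScore(mol)     # 1 (easy) to 10 (hard to synthesize)
    qed = QED.qed(mol)                    # drug-likeness in [0, 1]
    reward = logp - sa + 10.0 * qed
    # penalize unrealistically large carbon rings
    if any(len(ring) > max_ring_size for ring in mol.GetRingInfo().AtomRings()):
        reward += ring_penalty
    return reward
```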
Results and discussion
As the Ψ algorithm produced the best results for the music generation task, we focused on using this technique for generating molecules. Table 2 shows the performance of this model against the original MLE model according to metrics of validity, drug-likeness, and synthetic accessibility. Once again, Sequence Tutor is able to significantly improve almost all of the targeted metrics. However, it should be noted that the Sequence Tutor model tends to produce simplistic molecules involving more carbon atoms than the MLE baseline; e.g. Sequence Tutor may produce 'SNCc1ccccc1', while the MLE produces 'C(=O)c1ccc(S(=O)(=O)N(C)C)c(Cl)c1', which is the reason for the Sequence Tutor model's lower QED scores. This effect is due to the fact that simple sequences are more likely to be valid, have high logP and SA scores, and carbon is highly likely under the distribution learned by the MLE model. A higher reward for QED and further improvement of the task-specific rewards based on domain knowledge could help to alleviate these problems. Overall, the fact that Sequence Tutor can improve the percentage of valid molecules produced as well as the logP and synthetic accessibility scores serves as a proof-of-concept that Sequence Tutor may be valuable in a number of domains for imparting domain knowledge onto a sequence predictor.
Conclusion and Future Work
We have derived a novel sequence learning framework which uses RL to correct properties of sequences generated by an RNN, while maintaining information learned from MLE training on data, and ensuring the diversity of generated samples. By demonstrating a connection between our sequence generation approach and KL-control, we have derived three novel RL-based methods for optimizing sequence generation models. These methods were empirically compared in the context of a music generation task, and further demonstrated on a computational molecular generation task. Sequence Tutor showed promising results in terms of both adherence to task-specific rules, and subjective quality of the generated sequences.
We believe the Sequence Tutor approach of using RL to refine RNN models could be promising for a number of applications, including the reduction of bias in deep learning models. While manually writing a domain-specific reward function may seem unappealing, that approach is limited by the quality of the data that can be collected, and besides, even state-of-the-art sequence models often fail to learn all the aspects of high-level structure (van den Oord et al., 2016;Graves, 2013). Further, the data may contain hidden biases, as has been demonstrated for popular language models (Caliskan-Islam et al., 2016). In contrast to relying solely on possibly biased data, our approach allows for encoding high-level domain knowledge into the RNN, providing a general, alternative tool for training sequence models. | 2017-03-04T08:44:09.110Z | 2016-11-09T00:00:00.000 | {
"year": 2016,
"sha1": "a870df7e7d43c9144e2520ef4e4779f1672dd654",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a870df7e7d43c9144e2520ef4e4779f1672dd654",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
196638537 | pes2o/s2orc | v3-fos-license | Gender differences in the activity of the autonomic nervous systems of healthy and hypertensive patients in Russia
Introduction
The idea of arterial hypertension (AH) as a neurogenic disease was put forward in 1922 by the Soviet scientist G. F. Lang. To date, a large amount of evidence about the role of autonomic disorders in the development and formation of AH has been collected, with special attention paid to the increase in the tone of the sympathetic nervous system. [1][2][3] The results of one of the largest studies on this topic (Tecumseh Blood Pressure Study 4) provided strong evidence that activation of the sympathetic link of the autonomic nervous system (ANS) is important not only in the early stages of the development of hypertension, but also contributes to the formation of cardiovascular risk in the future. At the same time, there are indications of gender dimorphism in the autonomic regulation of a healthy body, [5][6][7][8][9][10] which, in turn, determines the peculiarities of the development of hypertension in men and women. In healthy men with a high predisposition to hypertension, blood circulation at rest is determined primarily by an increased influence of the sympathetic division of the ANS, whereas in apparently healthy women with a high predisposition to hypertension, blood circulation depends to a lesser extent on the specifics of autonomic regulation than it does in men. 11 However, despite existing data indicating different mechanisms of formation and establishment of hypertension in men and women, the amount of experimental data is not yet sufficient. 12 In this connection, the aim of this work was to evaluate the gender-specific state of the autonomic nervous system of healthy people and hypertensive patients, using the methods of heart rate variability (HRV) and gas discharge visualization (GDV).
Materials and methods
To assess the state of the autonomic nervous system of patients, the methods of heart rate variability (HRV) 13 and gas discharge visualization (GDV, or Electrophotonic Imaging) [14][15][16] were applied. In the HRV method, the heart rhythm is recorded and its structure is then analyzed mathematically. The main HRV indexes are presented in Table 1. The index of regulatory systems tension (stress index, SI) is calculated from the histogram of the distribution of cardio intervals treated as random variables. The amplitude of each bar of the histogram equals the number of RR intervals of a particular duration. From the histogram, its basic characteristics are determined: the mode (Mo), the mode amplitude (AMo) and the variation range (VR). When sympathetic regulation increases during mental or physical load, the heart rhythm is stabilized through the participation of the higher (central) levels of regulation in cardiac activity; the spread of cardio-interval durations decreases, i.e. the number of intervals of similar duration grows (AMo increases). The shape of the histogram changes accordingly: it narrows while its height increases. The stress index is then calculated as SI = AMo / (2 · VR · Mo). During HRV measurement, patients were asked not to take deep breaths, cough, or swallow. Examinations were conducted in the morning in a darkened room, with emotional arousal factors, including conversation and phone calls, eliminated. Women were measured in the intermenstrual period. Before measurement, patients rested for 5-10 minutes in a horizontal position. The HRV study included a 5-minute ECG recording (no less than 300 cardiocycles) in the supine position during quiet breathing. After this, GDV testing was performed, which lasted about 5 minutes. GDV bioelectrography (Electrophotonic Imaging, EPI 14,15) is already utilized in 62 countries, with great success. The effect occurs when an object is placed on a glass plate and stimulated with a high-intensity electrical field; a visible glow, produced by the gas discharge, appears. This glow is detected by a sensitive CCD camera and processed in the computer as a digital image. In the EPI technology, images captured of all 10 fingers of each human subject provide a set of quantitative parameters, which may be used for statistical analysis and practical applications. EPI applications in different areas are being developed. [14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30] Electrical stimulation of the finger surface elicits a neurovascular reaction of the skin that is influenced by the neuro-humoral status of organs and systems, so the parameters of the captured images register an ever-changing range of states. In addition, the EPI readings of most healthy people vary by only 8-10% over many years of measurements, indicating a high level of precision of this technique. 14,15 A total of 138 people aged 20 to 70 years were surveyed. Among them, 37 healthy subjects (15 men, mean age 50.5±9.0 years, and 22 women, mean age 50.1±11.2 years) comprised the control group, and 101 were AH patients (50 men, mean age 55.0±15.0 years, and 51 women, mean age 56.0±6.0 years). Of the hypertensive patients, 34 had Stage I hypertension (systolic blood pressure 120-139, diastolic 81-89 mmHg), 46 had Stage II (140-159 / 90-99 mmHg), and 21 had Stage III (> 160 / > 100 mmHg).
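A minimal computation of the stress index from a list of RR intervals is sketched below, following the SI = AMo / (2 · VR · Mo) definition above. The 50 ms histogram bin width is the conventional choice for this index and is an assumption here, as the text does not state the bin width used.

```python
import numpy as np

def stress_index(rr_ms, bin_ms=50):
    """Stress index SI = AMo / (2 * VR * Mo) from RR intervals given in milliseconds.

    Mo  : mode of the RR distribution (s)
    AMo : percentage of intervals falling in the modal bin (%)
    VR  : variation range, max(RR) - min(RR) (s)
    """
    rr = np.asarray(rr_ms, dtype=float)
    edges = np.arange(rr.min(), rr.max() + bin_ms, bin_ms)
    counts, edges = np.histogram(rr, bins=edges)
    k = counts.argmax()
    mo = (edges[k] + edges[k + 1]) / 2.0 / 1000.0   # modal interval, s
    amo = 100.0 * counts[k] / rr.size               # %
    vr = (rr.max() - rr.min()) / 1000.0             # s
    return amo / (2.0 * vr * mo)
```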
About 70% of the patients with hypertension regularly took medications to lower blood pressure; these medications were withdrawn on the day prior to the study.
HRV and GDV readings were carried out for each patient sequentially, in the first half of the day, not earlier than 1.5-2 hours after a meal. The time interval between the two methods was 5-7 min. For the HRV analysis, the device "Polispektr" ("Neurosoft", Russia, www.neurosoft.ru/eng/) was used. The recording consisted of 5 minutes of ECG (no less than 300 cardio cycles) in the supine position. The study included only patients with sinus rhythm without frequent extrasystoles. The HRV indexes used for statistical processing are presented in Table 1. For the GDV analysis, the devices "GDV Pro" and "Bio-Well" were used. The GDV Pro and Bio-Well devices use the same hardware; the difference is that with the GDV Pro data processing is done by software installed on the local computer, while with the Bio-Well data processing is done by more sophisticated software on a server. Special experiments demonstrated that these two devices, used at the same time on the same subjects, generate similar parameters.
In this paper we use two main parameters
The integral area is the amount of light quanta generated by the subject, measured in pixels (the number of pixels in the image having brightness above the threshold). We use both the total area of all 10 fingers and the areas of the fingers of the left and right hands separately (Table 2). The stress coefficient is calculated as the ratio between the total area of a particular subject and the total area of an ideal healthy person, derived from statistics on 8000 people. 14-16 Statistical processing of the results was done in Excel, SPSS Statistics 17.0 and Statistica 7.0. The distributions of the HRV and GDV parameters, tested with the Kolmogorov-Smirnov test, did not follow the normal law; therefore, the non-parametric Mann-Whitney U-criterion was used for data analysis. Institutional Review Board approval for the study was obtained from the Pirogov Russian National Research Medical University, Moscow, Russia.
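Following the two definitions above, the sketch below computes the integral area of a fingertip glow image as the number of pixels above a brightness threshold, and the stress coefficient as the ratio of a subject's total area to a reference "ideal healthy" total area. The threshold value and the reference area are placeholders; the actual values are internal to the GDV/Bio-Well software.

```python
import numpy as np

def integral_area(image, threshold=30):
    """Number of pixels whose brightness exceeds the threshold (one finger image)."""
    return int((np.asarray(image) > threshold).sum())

def stress_coefficient(finger_images, ideal_total_area, threshold=30):
    """Ratio of the subject's total glow area (all 10 fingers) to the reference area."""
    total = sum(integral_area(img, threshold) for img in finger_images)
    return total / ideal_total_area
```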
Table 1. Main HRV indexes (name, index, characteristics)
Total spectral power, TP (ms²): the overall activity of neuro-humoral influences on heart rate
Relative importance of the power of high-frequency waves, HF (%): the activity of the parasympathetic center of the autonomic nervous system
Relative importance of the power of low-frequency waves, LF (%): the activity of the sympathetic centers (cardio-stimulating and vasoconstrictor)
Relative importance of the power of very-low-frequency waves, VLF (%): the activity of the central and humoral-metabolic mechanisms of heart rate regulation
Results and discussion
First, the control group (37 people) was compared with the group of hypertensive patients (101 people) without regard to gender. Averaged indicators are presented in Table 3. The data in the table demonstrate that the values of the LF/HF index, both in the control group and in the group of patients with hypertension, are within normal limits when analyzed without regard to gender, and no significant difference between the groups was found for this indicator. However, the hypertensive patients differed from the control group in the stress index (SI), which exceeded the normal value in AH patients. The same result was found for the GDV stress index. The groups of healthy people and patients with hypertension were then divided by gender and compared separately. Table 4 shows the HRV and GDV indexes with marked differences (p<0.05). This comparison revealed differences in many indexes in both men and women.
In accordance with [2,3], hypertensive patients, unlike healthy subjects, have a high degree of centralization of heart rhythm regulation, i.e., the predominance of the central contour of heart rhythm regulation (cerebral cortex, hypothalamic-pituitary system and autonomic subcortical centers) over the autonomous contour (lungs, sinus node, the nucleus of the vagus nerve). For both healthy men and men with hypertension, the LF/HF index values exceeded the normal values (LF/HF>2), indicating a shift in autonomic balance toward the predominance of the sympathetic nervous system. For healthy women and women with hypertension, the LF/HF index was below the norm (LF/HF<1.5), indicating a shift of autonomic balance toward the predominance of the parasympathetic system. [8][9][10] Thus, the differences in the LF/HF index between healthy men and women, as well as between men and women with hypertension, indicate the necessity of considering gender when examining patients and subsequently choosing the treatment tactics. It is also important to note that for men, the relative values of the power of high-frequency (HF) waves were significantly reduced compared to women. The difference in the HF indicator between men and women leads to the conclusion that in men, both healthy ones and patients with hypertension, the autonomic balance is shifted toward the predominance of the sympathetic nervous system. Apart from the differences in HRV indexes between men and women, significant differences in GDV parameters were revealed. The integral area of luminescence of the fingers of the right and left hands (JSR, JSL) and the total area of the image (S) had lower values in men compared to women, which also indicates a shift of the autonomic balance in men toward the sympathetic nervous system. 4 Comparison between healthy men and male patients, as well as between healthy women and female patients, confirmed the higher values of the stress index in patients compared with healthy people presented in Table 3. The normal values are given in accordance with the "Standards of measurement, physiological interpretation and clinical use of heart rate variability" developed by the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology.
In addition to the high stress index, patients with hypertension, both men and women, had higher values of the mode amplitude (AMo), indicating a strong stabilizing effect of centralization of heart rhythm regulation, and a decreased total spectral power (TP), indicating reduced neuro-humoral influences on heart rate (Table 4). For women with hypertension, unlike healthy women, the relative value of the power of the low-frequency waves (LF) was reduced (Table 4), which differentiates them from men with hypertension (Table 4). The relative values of the power of high-frequency waves (HF) were also reduced, while the relative power of very-low-frequency waves (VLF) was increased. These frequency ratios may indicate a less pronounced regulation of the heart rate at the reflex autonomic level in women with hypertension and a more pronounced regulation at the slower humoral-metabolic level, which cannot maintain homeostasis quickly. However, despite the lower activity of the sympathetic (LF) and parasympathetic (HF) centers of the medulla oblongata in women with hypertension (compared with healthy women), the autonomic balance was shifted toward parasympathetic activity of the ANS (Table 4). Hypertensive women differed from healthy ones by lower GDV parameters, indicating a decrease in the functional reserves of the organism and autonomic imbalance. 4 The final phase of the study was to test the frequency of occurrence of a high LF/HF index (LF/HF>2) (Figure 1) and an increased index of tension of regulatory systems (SI>150) (Figure 2) among men and women, depending on age.
It is evident from the graphs that the frequency of a high LF/HF index in all age groups is, in most cases, higher for men than for women. Thus, of all men surveyed, 62.1% had a high index (LF/HF>2), while among women only 25.0% had a high LF/HF value. To understand the reason for the increased index in 62.1% of men (healthy individuals and patients) and 25.0% of women (healthy individuals and patients), a comparative analysis was conducted between a group of men with normo-tone (LF/HF values from 1.5 to 2) and a group of men with sympathetic tone (LF/HF>2), as well as between the corresponding groups of women (Table 5). It was found that for men and women with sympathetic tone, the relative values of the power of high-frequency waves (HF) are significantly lower (p<0.05) compared with men and women with normo-tone. Thus, the LF/HF index in people with sympathetic tone of both genders is higher in comparison with normo-tone people due to the lower parasympathetic activity of the autonomic nervous system, and not due to increased activity of the sympathetic one. The frequency of a high SI index was practically the same for men (51.5%) and women (52.8%) (Figure 2).
Conclusion
i. Men had a shift in the autonomic balance toward the predominance of the sympathetic nervous system (62.1% of the studied population), while in women the autonomic balance was shifted toward the predominance of the parasympathetic nervous system (75% of the studied population). This is valid both for healthy and hypertensive people. Moreover, for men a lower activity of the parasympathetic center of the autonomic nervous system was observed.
ii. There were significant differences in the GDV indexes. The integral area of luminescence of the fingers of the right and left hands (JSR, JSL) and the total area of the image (S) had lower values for men compared to women, which also indicates a shift of the autonomic balance in men toward the sympathetic nervous system and confirms the dependence of the GDV indexes on the activity of the autonomic nervous system.
iii. Hypertensive women differed from healthy ones by lower GDV parameters, which indicate the decrease in functional reserves of the organism and autonomic imbalance.
iv. For hypertensive women, unlike healthy ones, regulation of heart rate at the autonomic reflex level was less pronounced compared to regulation at the slower humoral-metabolic level. Similar changes in the regulation of cardiac rhythm were identified for men with hypertension.
v. For patients with hypertension, regardless of gender, a high degree of centralization of heart rhythm regulation was found, which reflects the high activity of the sympathetic regulation mechanisms. However, this was not due to the increased activity of the sympathetic center, but due to the lower activity of the parasympathetic cardio-inhibitory center.
vi. More than half of the studied population had a high degree of centralization of heart rhythm regulation (51.5% of men and 52.8% of women across all age groups). | 2019-04-02T13:11:38.309Z | 2017-09-08T00:00:00.000 | {
"year": 2017,
"sha1": "8991cb0cfd89afd5ec88ca488f112041a7d75770",
"oa_license": "CCBYNC",
"oa_url": "http://medcraveonline.com/JABB/JABB-03-00084.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3ca1ab8e7c146fdc1cf1e1dc8e5d0af934f6b20e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250563307 | pes2o/s2orc | v3-fos-license | Antibacterial metal nanoclusters
Combating bacterial infections is one of the most important applications of nanomedicine. In the past two decades, significant efforts have been committed to tuning the physicochemical properties of nanomaterials for the development of various novel nanoantibiotics. Among these, metal nanoclusters (NCs) with well-defined ultrasmall size and adjustable surface chemistry are emerging as the next-generation high-performance nanoantibiotics. Metal NCs can penetrate the bacterial cell envelope more easily than conventional nanomaterials due to their ultrasmall size. Meanwhile, the abundant active sites of metal NCs help to catalyze bacterial intracellular biochemical processes, resulting in enhanced antibacterial properties. In this review, we discuss the recent developments in metal NCs as a new generation of antimicrobial agents. Based on a brief introduction to the characteristics of metal NCs, we highlight the general working mechanisms by which metal NCs combat bacterial infections. We also emphasize the central roles of core size, element composition, oxidation state, and surface chemistry of metal NCs in their antimicrobial efficacy. Finally, we present a perspective on the remaining challenges and future developments of metal NCs for antibacterial therapeutics. Graphical Abstract
Introduction
Bacterial infection is one of the greatest threats to global public health. In particular, the emergence of multidrug-resistant (MDR) superbugs makes conventional antibiotics ineffective, further exacerbating this threat [1]. According to the latest predictions, MDR bacterial infections will lead to 10 million annual deaths by 2050, more than those currently caused by cancer [2]. Faced with this serious challenge, new antibiotics have been developed to deal with the infamous superbug infections by chemically modifying existing antibiotics or exploiting new natural products [3][4][5]. Nevertheless, the development of new antibiotics is time-consuming and expensive, and the rapid resistance evolution of pathogenic bacteria can reduce or even abolish the therapeutic activity of the most effective antibiotics [6,7]. Besides, eradicating MDR bacteria may require high dosages or combinations of antibiotics, which can induce serious adverse effects and uncertain outcomes [8]. Therefore, there is an urgent need to develop alternative antibacterial strategies, especially non-antibiotic agents, to combat the evolution of bacteria.
The discovery of the antimicrobial activity of nanomaterials, including metal-, metal oxide-, carbon-, quantum dot-, peptide- and polymer-based nanostructures, provides new opportunities to address the MDR crisis [9][10][11][12][13]. Compared with conventional antibiotics, antibacterial nanomaterials access novel antibacterial modalities against pathogens, which may not be countered by the bacteria's natural defense arsenal [14]. These rationally engineered nanomaterials kill pathogenic bacteria through diverse antibacterial mechanisms including cell wall and membrane destruction, oxidative damage, disruption of intracellular components, and the delivery of therapeutic agents [8]. For example, polymeric nanomaterials can be imparted with positive charge to interact with anionic bacterial cell envelopes and exert an antibacterial effect [15]. Graphene oxide nanoflakes exhibit antibacterial activity due to the oxidative stress induced by their sharp edges and their photothermal activity [16]. Silver-carbon nanoparticles (NPs) selectively damage the membrane of gram-positive bacteria while remaining harmless to the membrane of germ cells [17]. AgFeO2 NPs exhibit both excellent antibacterial activity and a magnetic response for targeted therapy and separation [18]. Moreover, these multiple antibacterial mechanisms can be designed to act synergistically in combating MDR superbugs [13]. In view of the close interrelation between antibacterial effects and nanostructure, developing nanomaterials with highly controllable structures harbors the potential to yield novel, effective antibacterial drugs tailored to the characteristics of pathogen infections.
Among antibacterial nanomaterials, metal nanoclusters (NCs), ultrasmall aggregates composed of a few to several hundred metal atoms with well-defined molecular structures [20], have attracted much attention for antibacterial applications. Their inherent structural advantages (such as large surface area, precise size and morphology control, and ease of surface modification) and physicochemical properties (such as unique optical, electromagnetic, and catalytic properties) endow metal NCs with precisely tunable antibacterial activity [7,19]. For example, conventional AuNPs are inert toward bacteria, whereas potent antibacterial activity emerges once their size is decreased to the nanocluster (NC) dimension (≤ 2 nm) [24]. Notably, the molecular-like properties of metal NCs are desirable for understanding the antibacterial mechanisms of the nanostructures [21][22][23]. Moreover, the atomic-level manipulation and facile tailoring of metal NCs empower them to act as multifunctional theranostic agents for photoluminescence-guided bacterial infection therapy [25,26]. The synthesis and physicochemical properties of metal NCs have been extensively discussed in previous reviews [27,28]. In the present review, we focus on the research progress of metal NCs as a new generation of nanoantibiotics for biomedical applications (Scheme 1). We first give a brief introduction to the characteristics and possible advantages of metal NCs as nanoantibiotics, then we summarize their antimicrobial mechanisms, including cell wall and membrane disruption, release of metal ions, generation of reactive oxygen species (ROS), damage to intracellular components, delivery of antibacterial agents, and photoactivated mechanisms. Thereafter, we offer a comprehensive overview of the tailoring of physicochemical factors affecting the antibacterial behavior, such as the core size, element composition, and surface chemistry of metal NCs. The precise control of the properties of metal NC-based nanoantibiotics offers in-depth insight into their antimicrobial mechanisms, facilitating the rational design of next-generation antibacterial agents. Finally, a brief discussion of current challenges and future developments of metal NC-based nanoantibiotics is presented.
Characteristics of metal NCs
Metal NCs are considered a new class of molecular-like aggregates consisting of a few to several hundred metal atoms, whose sizes are comparable to the Fermi wavelength of electrons [20]. At this scale, the strong quantum confinement of electrons causes the continuous energy states to split into discrete electronic states [28]. As a result, metal NCs exhibit distinctive physicochemical properties, including a significant Stokes shift, strong photoluminescence, good biocompatibility and HOMO-LUMO transitions. As the missing link between single metal atoms and plasmonic metal NPs, metal NCs have received increasing attention in many fields, including antibacterial therapy, as described in the following sections.
According to their fundamental compositions, the metal NCs for antibacterial applications can be roughly divided into AuNCs, AgNCs, CuNCs, alloy NCs and related composite nanostructures (Table 1). Among these, AgNCs and CuNCs usually possess outstanding antibacterial behavior, since silver and copper have inherent broad-spectrum antibacterial activities [22,29]. Nevertheless, the superior antibacterial effectiveness of these NCs is not always accompanied by desirable biocompatibility in mammalian cells, raising safety concerns for their clinical application [30]. In this regard, the "noblest" of metals, gold, has a greater advantage over silver and copper due to its biological inertness and high stability. Au-based nanostructures have been extensively proven to possess excellent biocompatibility in living systems [31,32], and their biocompatibility remains good even if the size is further reduced to the nanocluster range [33]. On the other hand, the antibacterial activities of Au-based agents usually need to be strengthened to reach the therapeutic goal, through the rational regulation of their size, composition, and surface chemistry, as well as the incorporation of other antibacterial agents. Compared with single-metal NCs, metal alloy NCs (such as AuAg NCs and AuPt NCs) generally possess higher stability and tunable biological activities, and have been widely applied in biomedical fields [34]. Metal alloy NCs with excellent antibacterial properties have also been developed recently [35]. In addition, metal NCs can be integrated with other therapeutic agents or functional materials, such as conventional antibiotics, polymers, and 2D nanomaterials, to realize synergistically enhanced antimicrobial effects.
Table 1 (fragment): representative antibacterial metal NCs, their target bacteria, and working mechanisms
MSNs-AgNCs: Gram-negative and Gram-positive bacteria; membrane damage, Ag+ release, ROS generation [110]
pMBA-AuNCs: ESBL E. coli, MRSA; membrane damage, intracellular component destruction [111]
ABA-AuNCs: Gram-negative bacteria; cell wall damage [113]
QA-AuNCs/indocyanine green: MRSA; membrane damage, PTT, PDT [114]
Compared with plasmonic metal NPs, metal NCs have several advantageous properties for antibacterial applications (Scheme 2). First, metal clusters with atomic precision can be obtained via facile one-pot methods. The atomically precise structure permits a deep understanding of the structure-activity relationships of metal NC-based antibacterial therapy. In contrast, the synthesis of plasmonic metal NPs with specific and uniform morphological features is cumbersome and the resulting products are often heterogeneous, which greatly limits the understanding of their mechanisms of action. Moreover, due to their ultrasmall sizes, metal NCs can easily be internalized into bacterial cells by traversing the cell wall pores, which greatly promotes their bactericidal activity by inducing ROS generation that oxidizes the bacterial membrane and disturbs bacterial metabolism [36]. Meanwhile, owing to their abundant active sites, ultrasmall metal NCs usually exhibit higher catalytic activity than metal NPs and thus induce higher levels of ROS generation, endowing metal NCs with stronger antibacterial activity [37,38]. Moreover, their excellent photoluminescence properties make metal NCs traceable antimicrobials, which is rarely achieved with conventional metal NPs. Furthermore, the superior pharmacokinetics, biodegradation characteristics, and renal clearance of metal NCs are also crucial advantages for their clinical translation [39].
Benefiting from these advantages, ultrasmall metal NCs show significant promise as a new generation of nanoantibiotics for combating bacterial infections.
Antibacterial mechanisms of metal NC-based nanoantibiotics
The ultrasmall size and diverse surface chemistry of metal NCs offer unique advantages for targeting pathogenic bacteria [40]. Metal NCs exhibit multiple antibacterial mechanisms, including cell envelope (cell wall and membrane) damage, release of metal ion, generation of toxic ROS, intracellular component destruction, delivery of antibacterial agents and photoactivated mechanisms. Table 1 presents the reported representative antibacterial metal NCs and their working mechanism. The antibacterial mechanisms mainly arise from the unique physicochemical properties of NCs, particularly, the multivalent interactions between NCs and bacteria via electrostatic attractions, hydrophobic interactions, Van der Waals forces, and receptor-ligand interactions [8]. In this section, we will discuss the antibacterial properties of metal NCs according to different antibacterial mechanisms.
Cell wall and membrane disruption
The bacterial cell envelope is the first physical barrier preventing antibacterial agents from entering the cell. The presence of teichoic acids (gram-positive bacteria) and lipopolysaccharide (gram-negative bacteria) on the cell wall renders bacterial surfaces negatively charged, promoting preferential electrostatic interactions with positively charged nanomaterials [2]. Therefore, cationic nanostructures can readily bind to the bacterial surface, interact with the cell envelope, and damage the cell structure.
A series of metal NC-based antibacterial strategies focus on targeting the negatively charged surface of bacteria [41,42]. Xie et al. designed cationic AuNCs functionalized with quaternary ammonium salts (QA-AuNCs) that target methicillin-resistant Staphylococcus aureus (MRSA) [42]. The QA-AuNCs interact with bacterial cells via electrostatic interactions, leading to increased membrane permeability, dissipation of the membrane potential, and disruption of the membranes (Fig. 1a-d).
The destruction of cell membrane integrity serves as the preliminary mechanism for their anti-MRSA activity. Our previous study also found that mercaptopyrimidine-coated AuNCs (AuDAMP) contribute to the initial antibacterial mechanism by interacting with cell membranes [38]. Indeed, compared with conventional antibiotics, cationic AuNCs can bind to bacterial surfaces more firmly, which is the basis for their highly efficient antibacterial activities [43]. Moreover, Boda et al. found that the cell division and cell wall thickness of staphylococci treated with cationic AuNCs were significantly reduced, indicating that the biosynthesis of the cell wall and membrane was inhibited [44]. Genechip microarray analysis revealed that the genes Alt and SA1898 (encoding autolysin), related to bacterial membrane integrity, were significantly up-regulated following treatment with AuNCs [24]. RNA sequencing results also demonstrated that the expression of genes related to cell wall and membrane biosynthesis was significantly affected [45]. These results clearly showed that metal NCs can kill bacteria by inducing cell wall and/or membrane damage. In contrast to conventional antibiotics, the unique membrane-disruption mechanisms of metal NCs can reduce the risk of emergence of bacterial resistance during long-term treatment [8,46].
Scheme 2 The unique advantages of metal NCs as antibacterial agents compared to plasmonic metal NPs
Release of metal ions
For some metal NCs, especially AgNCs, the release of metal ions represents one of the decisive factors for their antibacterial activity, which is critical to the induction of cellular oxidative stress [7]. For instance, the production and release of silver ions (Ag+) is often triggered by the oxidation of Ag(0), which can in turn generate ROS and ultimately eliminate the bacteria [47]. It is well known that elevated ROS levels can damage the proteins, enzymes and DNA in cells, thereby disrupting the normal metabolism and function of pathogenic bacteria [48]. In addition, the released metal ions can also directly bind to cellular components, such as amino acids, resulting in their dysfunction [49]. Compared with AgNPs, ultrasmall AgNCs have a higher surface-to-volume ratio and are much more susceptible to oxidative dissolution, allowing a fast release of Ag+ for enhanced antibacterial performance. Yuan et al. found that glutathione (GSH)-capped AgNCs exhibit potent antibacterial activity against Pseudomonas aeruginosa, which is attributed to the easily oxidized and released Ag+ on the surface of the AgNCs [50]. Similarly, GSH-capped Ag+-rich AgNCs (GSH-Ag+-R NCs, with a predominance of Ag+ species on the surface of the NCs) possess higher antibacterial activity than the GSH-Ag0-R NC counterpart (Fig. 2a) [22]. The intact GSH-Ag+-R NCs, which have abundant local Ag+ on the surface, are highly active in eliminating bacteria. Subsequently, a large amount of ROS is generated, which accelerates the dissociation of Ag+ from the NCs and then initiates a second round of bacterial elimination, further enhancing the antibacterial activity in a positive feedback loop (Fig. 2b). In contrast, the antibacterial activity of GSH-Ag0-R NCs is barely attributable to the dissociation of Ag+, which leads to poor antibacterial activity. In a recent study, an AgNC-impregnated hydrogel with long-term and controlled release of Ag+ was reported, which provides improved biofilm eradication capability [51]. In addition, the highly efficient antibacterial activity of Cu+-releasing CuNCs has also been demonstrated [52].
Generation of ROS
ROS is a general term describing the chemical substances formed upon incomplete reduction of oxygen, mainly including superoxide anion, hydrogen peroxide, singlet oxygen, and hydroxyl radical [48]. In living organisms, ROS play an important role in regulating various physiological functions of the entire life cycle [53]. However, the accumulation of excessive ROS leads to harmful oxidative stress, which can damage organisms via multiple working mechanisms, especially the consumption of intracellular vital reducing substances such as thiols in proteins [54].
Fig. 2 (caption, continued): Reprinted with permission from [22]. Copyright (2013) Tsinghua University Press and Springer-Verlag Berlin Heidelberg. c AuNCs-induced intracellular ROS production. d Differential expression of genes related to cell metabolism, substrate transport, membrane integrity, and transcription following AuNC treatment. Red indicates gene up-regulation, and green indicates gene down-regulation (Reprinted with permission from [24]. Copyright (2017) American Chemical Society)
Generation of ROS is one of the most essential antimicrobial mechanisms of metal NCs [55][56][57][58]. Besides the indirect ROS generation by metal ions, metal NCs can also directly catalyze intracellular ROS generation in bacteria. Compared with large-sized AuNPs, ultrasmall AuNCs significantly up-regulate the intracellular ROS level of bacteria (Fig. 2c) [24]. The excessive accumulation of ROS induced by AuNCs is responsible for the significant up-regulation of genes encoding metabolic enzymes in the oxidative process (such as dmpI, which encodes 4-oxalocrotonate tautomerase) and the down-regulation of antioxidant genes (such as ilvC, which encodes ketol-acid reductoisomerase, and Gapdh) (Fig. 2d). Xie et al. further found that the generation of intracellular ROS by cationic QA-AuNCs plays an essential role in causing bacterial death (Fig. 1e) [42]. Furthermore, our study demonstrated that AuNC-induced intracellular ROS generation is mainly dependent on their inherent enzyme-mimic catalytic activity [38]. These AuNCs with inherent oxidase- and peroxidase-like catalytic capacities can up-regulate intracellular ROS levels, which makes them promising candidates for next-generation nanoantibiotics. Moreover, chemodynamic therapy (CDT), which is defined as treatment through Fenton-reaction- or Fenton-like-reaction-mediated hydroxyl radical generation in an acidic microenvironment, has emerged as a promising strategy for cancer and infectious disease therapy [59][60][61]. Although the development of metal NC-based CDT agents is still in its infancy, considering the rich surface active sites of metal NCs that can trigger efficient Fenton or Fenton-like reactions under the weakly acidic conditions of biofilms, metal NC-mediated CDT represents a potential alternative for the treatment of MDR bacterial infection.
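For reference, and as general chemistry rather than a result reported in the cited studies, the hydroxyl-radical-generating reaction underlying CDT is the classical Fenton reaction, Fe2+ + H2O2 -> Fe3+ + •OH + OH-; in Fenton-like reactions the iron couple is replaced by other redox-active metals (for example, Cu+ + H2O2 -> Cu2+ + •OH + OH-). This is why copper- and other metal-containing NCs, with their abundant surface sites, are plausible CDT catalysts in the weakly acidic biofilm microenvironment.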
Damage to intracellular components
Homoeostasis of intracellular components and metabolic pathways are critical to the function and proliferation of bacterial cells. Several metal NCs have been found to eventually lead to bacterial death by interfering with these processes [38,[62][63][64][65][66]. These metal NCs-mediated damages include inhibition of ATP synthesis, consumption of reducing substances, reduction of enzyme activity and disruption of DNA. For example, atomically precise Au 25 NCs significantly reduce the metabolic activity and respiratory chain dehydrogenase activity of Escherichia coli, and destroy their DNA sequence structure [45].
RNA sequencing results indicated that the expression of bacterial genes related to glycolysis, oxidative phosphorylation, the tricarboxylic acid cycle, and DNA replication and repair was significantly disrupted by Au 25 NCs. By inducing the accumulation of intracellular ROS and disrupting the thiol-redox homeostasis of bacterial cells, histidine-templated AuNCs show a potent therapeutic effect against ampicillin-resistant E. coli [64]. In another study, Zhao et al. found that the antibacterial mechanism of AuPt bimetallic NCs mainly involves dissipation of membrane potential and boosting of intracellular ATP levels, and is not dependent on ROS generation [62]. This unexpected mechanism of action may result from the capability of AuPt bimetallic NCs to catalyze the generation of ATP and inhibit the process of ATP-consuming protein synthesis. In addition, Neissa et al. demonstrated that the bactericidal effect of Ag 3 quantum clusters is attributed to the destruction of topoisomerase-DNA complexes, rather than the release of Ag+ [63]. Indeed, Ag 3 quantum clusters can insert into DNA and induce prominent structural damage to the DNA [67]. The lifetime of these distortions was two to three orders of magnitude longer than that of conventional organic intercalants, such as proflavine and ethidium bromide, suggesting their excellent capability to damage bacterial DNA.
Delivery of antibacterial agents
In addition to directly acting as therapeutic agents, metal NCs have also been employed as nanocarriers for the delivery of existing antibacterial agents. Antibacterial agents can be loaded on the surface of metal NCs through covalent attachment or self-assembly [66][67][68][69][70][71][72]. Metal NC-based delivery systems can broaden the antibacterial spectrum of conventional antibiotics and achieve controlled or targeted release of the drugs, thereby augmenting the therapeutic effect and reducing side effects [68]. For instance, an intelligent vancomycin delivery system was developed based on custom-designed pentapeptide-mediated AuNCs [70]. The pentapeptide ligands contain a binding domain (D-alanine-D-alanine termini) with strong affinity for vancomycin. The self-assembled AuNC complexes selectively release vancomycin in the presence of gram-positive bacteria because the binding affinity of vancomycin for bacterial cells is stronger than that for the pentapeptide on the AuNCs. The precise drug release profile of the AuNCs reduces the risk of systemic toxicity and potential side effects. In another study, an efficient antibiotic-AuNC hybrid system was developed by grafting ampicillin on the surface of lysozyme-templated AuNCs [69]. The antibacterial hybrids not only preserved the antibacterial activity against sensitive strains but also reversed the MRSA resistance towards ampicillin. In the presence of cis-2-decenoic acid, the hybrid systems can further inhibit MRSA persisters, a dormant subpopulation of bacteria. This enhanced antibacterial performance of the antibiotic-AuNC hybrid system is mainly attributed to efficient inhibition of β-lactamase, multivalent binding to the bacterial surface, and enhanced penetration.
Photoactivated mechanisms
Phototherapy, including photodynamic therapy (PDT) and photothermal therapy (PTT), is promising in combating bacterial infection, since it can induce a photothermal effect or trigger the generation of ROS by employing the interaction between light and materials. Several non-antibacterial metal NCs can also exhibit potent antibacterial behavior through photoactivation mechanisms. For example, DNase-functionalized AuNCs have been developed as photosensitizers that exhibit excellent photothermal and photodynamic properties under 808 nm laser irradiation [73]. Highly efficient elimination of biofilm has been achieved owing to the synergistic PTT, PDT and enzymolysis effects of the DNase-functionalized AuNCs. In another study, Hwang et al. developed a rapid procedure to prepare high-quality Au 25 (Cys) 18 , which can kill pathogenic bacteria through photodynamic activity in the presence of crystal violet [74,75]. Under low-flux white light irradiation, non-antibacterial Au 25 (Cys) 18 transfers photoelectrons to crystal violet to promote redox reactions, thus resulting in enhanced ROS generation and bactericidal activity. In addition, chitosan-stabilized AgNCs have also been found to enhance bactericidal ability through the PTT effect [76].
Overall, as a new generation of nanoantibiotics, metal NCs provide multiple antibacterial pathways to fight superbugs and circumvent antibiotic-resistance mechanisms. These working mechanisms are usually interrelated rather than acting individually. Appropriate tailoring of size, composition and surface properties provides a desirable avenue for the design of highly efficient novel antibacterial therapies.
Physicochemical properties of metal NCs governing antibacterial properties
The physicochemical properties of nanostructures dominate their bioactivities and biomedical applications [77]. Therefore, investigating the nano-bio interface between metal NCs and bacteria is of great significance for a deep understanding of their antibacterial mechanisms and the design of potent antibacterial agents. Recent investigations have confirmed that the antibacterial effect of metal NC-based nanoantibiotics is highly correlated with their physicochemical properties, including size, composition, oxidation states, and surface chemistry [22,24,40,62].
In this section, in order to gain deep insights into their structure-activity relationship, we will discuss how these physicochemical properties of metal NCs influence their antibacterial behaviors.
Size
The size of metal-based NCs is critical to their antibacterial behavior, as it significantly determines the processes of intracellular uptake, transport, accumulation and subsequent biological interactions. For instance, ultrasmall AuNCs can more easily enter bacterial cells and induce an intracellular ROS burst, while large-sized AuNPs cannot, even when they have the same surface chemistry (Fig. 3a, b) [24]. In addition, reducing the size of AuNCs also greatly extends the antibacterial spectrum, since different sizes lead to divergent intracellular biochemical processes [78]. As the size of the AuNCs decreases, their capacities to interfere with bacterial membrane permeability, membrane potential, and intracellular ROS and ATP levels are all significantly affected. In a detailed study, Zheng et al. set up a library of three AuNCs with different gold atom numbers (Au 25 , Au 102 , and Au 144 ) and two larger-sized AuNPs (~ 3 and ~ 5 nm) capped by the same thiol ligand, p-mercaptobenzoic acid, and further investigated the size effect on antibacterial efficacy [36]. Au 25 , Au 102 , and Au 144 can be easily internalized into the bacteria to achieve antibacterial activity. On the contrary, the larger-sized AuNPs exhibited ineffective internalization and no bactericidal effect was observed, indicating the vital role of size in antibacterial capability. Once the ultrasmall AuNCs have been internalized, they work as a group to exhibit molecular-like antibacterial behavior, displaying comparable antibacterial efficacy on the basis of the same molecular concentration of AuNCs (Fig. 3c, d). After internalization, the ultrasmall AuNCs can induce ROS generation to oxidize the bacterial membrane and perturb cell metabolism (Fig. 3e, f), resulting in a superior bactericidal effect. In addition, by adjusting the size of AgNCs, their antibacterial activity and cytotoxicity can be effectively balanced to produce biocompatible nano-antibacterial agents [79].
Composition
Alloying is another promising strategy to construct metal NCs with enhanced physicochemical properties and improved antibacterial activities [80][81][82][83]. Compared with a single-metal composition, alloying also significantly influences the antibacterial behavior of metal NCs. For example, compared to non-antibacterial pure AuNCs and PtNCs, AuPt alloy NCs have superior antibacterial activity, which is governed by the composition ratio of gold and platinum atoms [62]. To understand the composition-dependent antibacterial behavior of metal NCs, a full spectrum of alloy metal NCs, Au x Ag 25−x (MHA) 18 (MHA = 6-mercaptohexanoic acid) with x = 0−25, was developed and their antibacterial properties were investigated (Fig. 3g) [35]. As alloying enhances the stability of Au x Ag 25−x (MHA) 18 , their ability to generate ROS is deactivated, which in turn affects the antibacterial activity of the NCs. Indeed, a U-shaped antibacterial profile was observed, where the alloyed NCs showed decreased antibacterial capability compared to AuNCs or AgNCs (Fig. 3h). This study showed that the composition of metal NCs finely regulates their antibacterial behavior, indicating the importance of a complete understanding of composition-related properties and applications, which calls for multidisciplinary collaborative research.
Fig. 3 (caption, continued): Reprinted with permission from [24]. Copyright (2017) American Chemical Society. Percentage of dead S. aureus treated with AuNCs and AuNPs at the same Au atom concentration (c) and particle concentration (d) for 2 h. Relative intracellular ROS production level (e) and relative lipid peroxidation production level (f) of S. aureus after 2 h treatment with AuNCs and AuNPs at the same particle concentration. Reprinted with permission from [36]. Copyright (2020) KeAi Communications Co. Ltd. g Composition-dependent antimicrobial ability of full-spectrum Au x Ag 25−x alloy NCs. h Percentage of dead S. aureus after treatment with Au x Ag 25−x alloy NCs (Reprinted with permission from [35]. Copyright (2020) American Chemical Society)
Surface chemistry
Surface modification is one of the most promising strategies for improving the biomedical applications of metal NCs. By tailoring the surface chemistry, the antibacterial properties of metal NC-based nanoantibiotics can be easily regulated. In our previous report, the antibacterial activities of AuNCs could be modulated by the surface ligand species [38]. Among the four investigated AuNCs modified by mercaptopyrimidine analogs with similar structures, the amino-rich ligands seem to endow the AuNCs with stronger antibacterial activity and a broader antibacterial spectrum (Fig. 4a). Notably, the amino-rich mercaptopyrimidine-AuNCs have also been further confirmed to be able to eliminate intracellular bacterial infections and concurrently regulate host cell immune responses [84]. Meng et al. found that grafting the traditional herbal monomer cinnamaldehyde on the surface of histidine-mediated AuNCs (CA-AuNCs) can elevate ROS generation and concurrently deplete thiols in bacterial cells through ligand exchange, resulting in an enhanced bacterial killing effect [85]. This work constructed an oxidative stress amplifier (CA-AuNCs) through a ligand exchange strategy for combating MDR bacterial infections for the first time. Similarly, by simply tuning the oligonucleotide sequence, the physicochemical properties and antimicrobial performances of DNA-templated AgNCs can be easily adjusted [86]. Further structural studies have shown that these AgNCs possess different structures and stabilities, which might be one of the key factors regulating their antibacterial activity. In addition, the influence of the spatial assembly of oligonucleotide sequences on the antibacterial behavior of DNA-scaffolded AgNCs has also been demonstrated [87].
The surface charge of metal NCs is another important factor affecting their antibacterial behavior. It is generally believed that positively charged surface of antibacterial nanomaterials would favor close interaction with the negatively charged bacterial surface, resulting in improved antibacterial efficacy [2]. Indeed, a series of cationic metal NCs have been successfully developed as nanoantibiotics [41,42]. By fine-tuning the surface charges of AuNCs, however, Zheng et al. proposed an antibacterial behavior in stark contrast to this paradigm [40]. They reported that ultrasmall AuNCs with more negative charges show better antimicrobial effects due to the induction of higher intracellular ROS levels (Fig. 4b). This surprising finding suggests the complexities of the NC-bacterial cell interactions and sheds some light on the design of high-performance metal NC-based nanoantibiotics.
A recent study has found that the ligand density of metal NCs can also contribute to different antibacterial behaviors [88]. By regulating the density of phenylboronic acid on the surface of AuNCs through the ratio of different anchoring groups, thiol (-SH) or amine (-NH 2 ) groups, AuNCs with tunable antibacterial capability were synthesized (Fig. 4c). The AuNCs modified by mercaptophenylboronic acid (M-AuNCs) and aminophenylboronic acid (A-AuNCs) specifically bind to lipoteichoic acid (LTA) of gram-positive bacteria and lipopolysaccharide of gram-negative bacteria, respectively, resulting in potent and tunable antibacterial behavior (Fig. 4d-f). This adjustable antibacterial behavior is expected to be promising in personalized therapy.
Metal NCs can also be activated to combat bacterial biofilms through surface modulation [43,51,73]. The formation of biofilm is considered to be a key contributor to antibiotic resistance, as it serves as a natural barrier to antibiotic penetration and activation [89]. Conventional antibacterial therapeutics exhibit limited penetration and reduced activity in the acidic microenvironment (pH values of 4.5-6.5) of the bacterial biofilm [90]. To overcome this barrier, our group developed a pH-responsive biofilm elimination strategy through the self-assembly of ultrasmall AgNCs via customized pH-sensitive charge-reversal ligands [91]. The surface-assembled nanoantibiotics (rAgNAs) can selectively activate their antibacterial activity in the acidic biofilm microenvironment. Under non-acidic conditions, the antibacterial activity of rAgNAs is extinguished because the release of toxic Ag+ is inhibited by surface assembly (Fig. 4g). Once entering the acidic biofilm microenvironment, rAgNAs not only show charge reversal to promote local accumulation and retention but also disassemble into small AgNCs, thus enabling deep penetration and accelerating Ag+ release for significantly enhanced antibacterial activity (Fig. 4h). In addition, since the release of Ag+ is inhibited in the natural physiological environment, damage of the AgNCs to mammalian cells is also effectively avoided. This biofilm-responsive nano-antibacterial strategy has shown great potential in the treatment of drug-resistant bacterial biofilm infections. Moreover, cationic thiolate-modified AuNCs show a highly efficient antibacterial effect against mature biofilm, likely due to the excellent permeability of positively charged AuNCs into the biofilm [43].
Fig. 4 (See figure on next page.) Surface chemistry modulates the antibacterial activity of metal NCs. a Comparison of antibacterial activities of mercaptopyrimidine-analogue-mediated AuNCs. Antibacterial activities indicated with MIC (µg/mL). AMP: 4-amino-2-mercaptopyrimidine; DAMP: 4,6-diamino-2-mercaptopyrimidine; AHMP: 4-amino-6-hydroxyl-2-mercaptopyrimidine; DHMP: 4,6-dihydroxyl-2-mercaptopyrimidine. Reprinted with permission from [38]. Copyright (2018) American Chemical Society. b Surface ligand chemistry of AuNCs determines their antimicrobial ability. Reprinted with permission from [40]. Copyright (2018) American Chemical Society. c Schematic illustration of the preparation process of phenylboronic acid-derivative-modified AuNCs by orchestrating the variation of ligands as tunable-spectrum antibacterial agents. A/M-AuNCs exert bactericidal activity by interacting with LPS and LTA. d UV-vis spectra of the LPS, LTA, A/M-AuNCs, LPS/A/M-AuNCs, and LTA/A/M-AuNCs. The antibacterial activity of A/M-AuNCs at different concentrations against E. coli or S. aureus in the presence of LPS and LTA at concentrations ranging from 0 to 128 µg/mL is plotted in parts e and f, respectively. Reprinted with permission from [88]. Copyright (2018) American Chemical Society. g Intelligent nanoantibiotics (rAgNAs), composed of ultrasmall AgNCs self-assembled with the assistance of an acidity-responsive polymeric ligand, can accumulate in the biofilm with enhanced penetration, and can be selectively activated to rapidly release Ag+ in the acidic microenvironment of the biofilm for improved therapeutic effect, attributed to the acid-triggered disassembling of rAgNAs. h Enhanced accumulation and deep penetration of rAgNAs for the biofilm-amplified bactericidal effect (Reprinted with permission from [91])
DNase-functionalized AuNCs can hydrolyze DNA in the extracellular polymeric substance matrix and induce oxidative stress upon photoactivation to eradicate biofilm [73]. Overall, surface engineering represents a promising approach to enhance the antibacterial effect of metal NC-based nanoantibiotics. Beyond size, composition and surface chemistry, other physicochemical properties of metal NCs can also affect their antimicrobial effect. For instance, it has been demonstrated that thiolated AuNCs with more loosely bound Au(I)-thiolate surface motifs (a semi-rigid structure) have better antimicrobial activity [92]. In addition, it is reported that the oxidation states of Ag atoms in AgNCs are also critical to their antimicrobial effect, and AgNCs with higher Ag(I) content have a stronger killing effect [22]. In conclusion, by systematically investigating the influence of each factor on the antimicrobial capability and the underlying antibacterial mechanism, we can rationally design highly efficient antimicrobial metal NCs by tailoring their size, composition, surface chemistry, structure, and oxidation state.
Synergistic antibacterial metal NCs
Metal NC-based nanoantibiotics can be integrated with other therapeutic or functional materials to realize synergistically enhanced antimicrobial effects. Combining antibacterial therapies can attack bacteria from different fronts, which is one of the most common strategies for the treatment of severe MDR bacterial infections [93,94]. The enhanced antibacterial activity of metal NCs can be realized by integrating NCs with other agents such as conventional antibiotics, polymers, and 2D nanomaterials to form a complementary hybrid. For instance, we recently demonstrated a synergistically enhanced antibacterial hybrid by conjugating amino-rich mercaptopyrimidine-AuNCs (AuDAMP) with the cyclic lipopeptide antimicrobial daptomycin (Dap) (Fig. 5a) [83]. The prepared antibacterial hybrids (Dap-AuDAMP) integrate the antibacterial capabilities of both agents and provide an enhanced synergistic effect. Relying on the daptomycin-induced disruption of the bacterial membrane, the bacterial cell internalization of the AuNCs is greatly enhanced. The internalized AuNCs can generate ROS continuously in bacterial cells and then induce bacterial death. The continuous ROS generation also limits the development of bacterial resistance. In addition, a novel aggregation-induced emission (AIE) pattern between the AuDAMP and daptomycin was also observed. Similar synergistic antibacterial effects can also be obtained by integration of antibacterial AgNCs with daptomycin [96]. Therefore, this universal concept can be further extended to other metal NCs and antimicrobial peptides. In another study, a self-assembly strategy combining metal NCs and antimicrobial peptides was also developed [97]. The antimicrobial peptide surfactin (SFT) is bound to the surface of 1-dodecanethiol-capped AuNCs through non-covalent bonds to form a highly efficient antibacterial assembly (SFT/DT-AuNDs) (Fig. 5b). Compared with SFT, the SFT/DT-AuND assemblies show improved antimicrobial activity, possessing a minimum inhibitory concentration more than 80-fold lower than that of SFT (Fig. 5c). In addition, synergistically enhanced antimicrobial behavior has also been successfully achieved by directly using antimicrobial peptides as surface ligands to synthesize metal NCs [98].
As emerging functional materials, 2D nanomaterials such as graphene oxide (GO) and MXene nanosheets are also employed to synergistically enhance the antibacterial properties of metal NCs [99][100][101][102][103]. These 2D nanomaterials generally have distinctive antibacterial action mechanisms. For example, GO nanosheets can physically cut through bacterial membranes and induce ROS generation [104]. By decorating AuNCs onto GO nanosheets, a highly efficient antibacterial nanohybrid was developed [100]. The assembled nanosheets can simultaneously produce massive heat and generate high levels of ROS to inactivate bacteria under visible light irradiation (Fig. 5d). In comparison with bare AuNCs and GO nanosheets, these GO-AuNC nanohybrids show enhanced antimicrobial activity towards gram-positive and gram-negative bacteria. Later, Zheng et al. constructed a synergistic antibacterial hybrid by integrating antibacterial AuNCs and paramagnetic holmium ions (Ho 3+ ) onto GO nanosheets [101]. The complexed holmium ions help the nanohybrids to align vertically under weak magnetic fields, which offers a high density of sharp edges with preferential orientation to effectively pierce the bacterial membrane. Meanwhile, the integrated AuNCs can effectively internalize into bacterial cells to induce high levels of ROS, which strongly disturb cell metabolism. These antibacterial nanohybrids employ both physical (oriented GO nanosheets) and chemical (integrated AuNCs and GO nanosheets) mechanisms to realize synergistic antibacterial performance. Similarly, by conjugating antibacterial AuNCs onto titanium carbide (MXene), a synergistic multi-mechanism antibacterial performance is also achieved [102].
Fig. 5 (caption, continued): Reprinted with permission from [100]. Copyright (2020) Elsevier. e An AuNC-constructed mixed-metal metal-organic network film for combating implant-associated infections. Reprinted with permission from [111]. Copyright (2020) American Chemical Society. f Schematic illustration of the bacterial cellulose scaffold loaded with aminobenzeneboronic acid (ABA)-modified AuNCs as an antibacterial wound dressing to address the issue of MDR-infected skin wounds (Reprinted with permission from [113]. Copyright (2021) American Chemical Society)
Integration with natural polymers, such as chitosan (CS), can also lead to synergistically enhanced antibacterial activity of metal NC-based nanoantibiotics. For example, an efficient antibacterial nanoaggregate was developed based on the self-assembly of mercaptosuccinic acid (MSA)-protected AuNCs and chitosan [105]. These self-assembled nanoaggregates displayed enhanced antimicrobial activity against both gram-negative and gram-positive bacteria compared with the individual components. A composite hydrogel that encapsulates ultrasmall AgNCs into chitosan matrixes to enhance antibacterial behavior and promote tissue reconstruction has also been reported, showing great translation potential [106,107]. In addition, several synergistic antimicrobial strategies based on the assembly of metal NCs and other materials, such as recognition proteins, upconversion NPs, and mesoporous silica NPs, have also been established [108][109][110]. Overall, these studies provide new options for improving the antibacterial properties of metal NCs, especially in dealing with notorious superbug infections.
In addition, there have been several reports on the use of metal NCs as effective ingredients in antibacterial practice. For instance, Chu et al. constructed an antibacterial film composed of an AuNC-based mixed-metal metal-organic network on titanium disks to effectively inhibit implant-related infections (Fig. 5e) [111]. The generalizable modular procedure for these AuNC-metal-organic networks can accelerate the modification of metal surfaces for inhibiting implant-associated infections. Similarly, coating cationic AuNCs on an orthodontic device (aligner) can effectively combat the formation of Streptococcus mutans biofilm [112]. The anti-biofilm activity of the coated AuNCs can be maintained for at least 3 months, even after repeated usage. In order to visually monitor nanotherapeutic-loaded wound dressings, a novel wound dressing integrating the fluorescence of the nanotherapeutic and the transparency of the scaffold was developed [113]. During the healing of bacteria-infected wounds, the fluorescence intensity of the therapeutic AuNCs in the transparent bacterial cellulose scaffolds decreases as the nanotherapeutic is released into the wound, which indicates that the dressing should be replaced when the residual concentration of the AuNCs falls below the minimum inhibitory concentration (Fig. 5f). Therefore, by real-time monitoring of the dressing state, wound damage caused by frequent dressing replacement can be avoided. Furthermore, this visible strategy can be extended to medical devices to realize high-precision real-time monitoring during their service life. Recently, Zhuo et al. prepared a nanoantibiotic that could cross the blood-brain barrier by combining QA-AuNCs and indocyanine green [114]. With the help of a near-infrared laser, the nanoantibiotics could effectively cross the blood-brain barrier and treat intracranial MRSA infection at low doses through a triple-combination synergistic therapy of direct killing, PTT, and PDT. Compared with traditional vancomycin treatment, the synergistic treatment was significantly less toxic to the liver and kidney and thus would be a safe strategy for intracranial MRSA-infection therapy.
Summary and perspective
As an innovative type of versatile nanomedicine, metal NCs have recently been found to possess attractive prospects in the treatment of increasingly serious MDR bacterial infections. In the present review, we have provided a comprehensive overview of the current status of ultrasmall metal NCs in antibacterial applications, including antibacterial mechanisms, structure-activity relationships, and synergistic effects. The general mechanisms by which metal NC-based nanoantibiotics target bacterial infections include cell wall and/or membrane damage, metal ion release, intracellular ROS generation, destruction of intracellular components, delivery of therapeutic agents, and photoactivated mechanisms. The physicochemical properties of metal NCs, including size, composition, oxidation state, and surface chemistry, govern their antibacterial behaviors. Metal NC-based nanoantibiotics can be integrated with other therapeutic or functional materials, such as antimicrobial peptides, 2D nanomaterials, and polymers, to form complementary nanohybrids with synergistically enhanced antibacterial effects. Based on the regulation of the physicochemical properties of the metal NCs and further functionalization, multifarious personalized antibacterial nanomaterials can be fabricated for precision medicine.
However, several challenges remain to be addressed for further translation of these metal NC-based nanoantibiotics. First, although the antibacterial mechanisms of atomically precise metal NCs have been systematically investigated, the understanding of the dynamic nano-bio interaction is still insufficient, and this remains an underexplored field [115]. We can take full advantage of the atomically precise physicochemical characterization of metal NCs to investigate the nano-bio interaction and advance the rational design of nanoantibiotics. Second, given that the effect of alloying on the antimicrobial activity of metal NCs is complex and remarkable [35], it is suggested that the influence of this method on the antimicrobial behavior of metal NCs should be extensively investigated. In particular, alloying is also a promising strategy to improve the photoluminescence efficiency and structural stability of metal NCs, which is essential for their diverse biomedical applications such as traceable nanoantibiotics [62,116]. Third, more attention should be paid to the effect of the chirality and isomerization of surface ligands on the antimicrobial activity of metal NCs, because these properties have been reported to profoundly affect the biological interactions of nanomaterials [117][118][119]. Finally, although several barriers including lysosomal escape, mitochondria targeting and mitochondrial membrane penetration alleviate the toxicity of cationic nanostructures to mammalian cells [120], the in vitro and in vivo biosafety of the metal NC-based nanoantibiotics should be fully considered and evaluated to facilitate their clinical translation [121]. We believe that through the joint efforts of scientists in different fields such as nanobiotechnology, materials chemistry, pharmaceutical science and clinical medicine, antibacterial metal NCs, as effective next-generation nanoantibiotics, hold a bright future for dealing with the serious crisis of MDR bacterial infections. | 2022-07-16T13:33:03.996Z | 2022-07-16T00:00:00.000 | {
"year": 2022,
"sha1": "9941267fcbe64b8ac4243d75ed1eed3b04f8889b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "9941267fcbe64b8ac4243d75ed1eed3b04f8889b",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248595160 | pes2o/s2orc | v3-fos-license | Face recognition based on Siamese convolutional neural network using Kivy framework
ABSTRACT
INTRODUCTION
Face and facial expression recognition have gotten a lot of interest from academic studies throughout the globe in the last five years [1]-[7]. Even now, face recognition research is being conducted on several new difficulties, and new approaches for various applications are being created [8]-[15]. The human face is regarded as the essential feature of the body. According to studies, even a face can communicate and has various expressions for different emotions. Face recognition systems, which are based on feature extraction and dimension reduction, are often employed to validate human identity. Face recognition systems have been created in various ways, with varying degrees of success. Face recognition remains a difficult challenge in real-world applications, despite several face recognition algorithms operating effectively in diverse situations. Currently, no technique provides a reliable solution to the various conditions and applications faced by face recognition. The face recognition challenge is divided into two groups. The first is a one-to-one matching challenge known as the face verification task. Face verification is used, for example, when people unlock their phone with their face. In certain airports, passengers pass through a system that scans their passport and face to ensure proper identification. The second task is a facial identification task, which requires determining who an individual is. It is a one-to-many matching problem. The performance of several complex tasks, such as face verification and face detection, has dramatically improved since convolutional neural network (CNN) based algorithms have been utilized. One-shot learning is another method for completing the goals listed above and a technique for learning representations from a model. This study aims to obtain the best-performing Siamese CNN for face recognition and to deploy this model in a mobile application. This study creates encodings of the given input image in the Siamese CNN. Then, it takes an image of a different individual as input and calculates its encoding with the same network without changing any network parameters. Following these calculations, we may compare the two photos to see whether they are comparable. Face recognition, signature verification, and object tracking have been effectively accomplished using Siamese CNNs in computer vision [16]-[22]. This study develops an application of facial recognition systems using a Siamese CNN and the Kivy framework.
Augmentation techniques
We attempted five different augmentation techniques: random brightness, random contrast, random left-right flip, random jpeg quality, and random saturation. Figure 3 depicts their augmentations. This augmentation aims to replicate data to make the categorization process easier. After using the augmentation approach, the anchor and positive images' data increase to 5,590 and 4,000, respectively.
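As an illustration only (the paper does not provide source code and does not report the exact parameter ranges), the five augmentations listed above map directly onto TensorFlow's tf.image utilities; the numeric ranges in this sketch are assumptions.

import tensorflow as tf

def augment(image):
    # image: a 3-D tensor (height, width, 3), e.g. float32 in [0, 1].
    # Applies the five augmentations discussed above with random parameters;
    # the delta/range values are illustrative, not the authors' settings.
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_jpeg_quality(image, min_jpeg_quality=70, max_jpeg_quality=100)
    image = tf.image.random_saturation(image, lower=0.8, upper=1.2)
    return image

Applying such a function repeatedly to each anchor and positive image would replicate the data in the way described above.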
Siamese convolutional neural network
The typical model is a Siamese CNN with L layers, each with N_l units, where h_(1,l) denotes the hidden vector in layer l for the first twin, and h_(2,l) denotes the same for the second twin. In the first L-2 layers, we only employ rectified linear (ReLU) units, whereas the subsequent layers use sigmoidal units. The model comprises a series of convolutional layers, each of which employs a single channel with different-sized filters and a fixed stride of one. The number of convolutional filters is given as a multiple of 16 to improve performance. The resultant feature maps are subjected to a ReLU activation function, which is optionally followed by max-pooling with a filter size and stride of 2. As a result, the k-th filter map in each layer takes the form of (1) and (2) [16]:

a_(1,l)^(k) = maxpool(max(0, W_(l-1,l)^(k) * h_(1,(l-1)) + b_l), 2)   (1)
a_(2,l)^(k) = maxpool(max(0, W_(l-1,l)^(k) * h_(2,(l-1)) + b_l), 2)   (2)
We chose * as the valid convolutional operation, corresponding to returning only those output units resulting from complete overlap between each convolutional filter and the input feature maps, and h_(1,(l-1)) is the 3-dimensional tensor encoding the feature maps for layer l-1 [16]. In the final convolutional layer, all elements are flattened into a single vector. This convolutional layer is followed by a fully connected layer, and then another layer that computes the induced distance metric between the two Siamese twins and feeds it to a single sigmoidal output unit. The prediction vector is defined as p = σ(Σ_j α_j |h_(1,L-1)^(j) - h_(2,L-1)^(j)|), where σ denotes the sigmoidal activation function. This last layer assesses the similarity between the two feature vectors by inducing a metric on the learned feature space of the (L-1)-th hidden layer. The α_j are extra parameters that the model learns during training and uses to weigh the relevance of each component-wise distance. This describes the network's last, L-th, fully connected layer, connecting the two Siamese twins. We provide one example in Figure 4, which is the most effective form of our explored model. The Siamese twin is not shown in Figure 4, but it joins just after the 4096-unit fully connected layer, which computes the L1 component-wise distance between vectors. This network also performed the best on the verification job of any network [16]. 767 dissimilar image pairs are used in mini-batch production. The anchors are drawn at random from distinct classes up to the mini-batch size of images, while the paired images are controlled so that half belong to the same class and half to a different class. The weights are then updated throughout each mini-batch of training iterations using an adaptive moment estimation optimizer (Adam) [24]. With an initial learning rate of 0.0001, the Adam optimizer is employed for model training and optimization. The encodings of input images are computed in the Siamese CNN, and then the encoding of an image of a different individual is calculated with the same network. We may compare the two encodings after these computations to see whether they are comparable. Images' encodings serve as representations of their latent features. The encoding comparison reveals whether the photos belong to the same individual. In the network's training, an anchor image was used and compared to examples of positive and negative images. The distance between the anchor and the positive image must be small, while the distance between the anchor and the negative image must be large.
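The paper does not include source code; the following is a minimal sketch, assuming a TensorFlow/Keras implementation, of a Koch-style Siamese verification network of this kind. The 4096-unit fully connected layer and the Adam optimizer with a 0.0001 learning rate follow the text above, while the input size, filter counts, kernel sizes, and the binary cross-entropy objective are assumptions taken from the reference design in [16] rather than values reported by the authors.

import tensorflow as tf
from tensorflow.keras import layers, Model

def make_embedding(input_shape=(105, 105, 3)):
    # Convolutional twin: stacked conv/ReLU/max-pool blocks ending in a
    # 4096-unit sigmoid fully connected layer (Koch-style design, assumed sizes).
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(64, 10, activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 7, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 4, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(256, 4, activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation="sigmoid")(x)
    return Model(inp, x)

def make_siamese(input_shape=(105, 105, 3)):
    embedding = make_embedding(input_shape)
    img_a = layers.Input(shape=input_shape)
    img_b = layers.Input(shape=input_shape)
    # L1 component-wise distance between the two twins' encodings.
    l1 = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([embedding(img_a), embedding(img_b)])
    out = layers.Dense(1, activation="sigmoid")(l1)  # similarity score in [0, 1]
    model = Model([img_a, img_b], out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.Precision()])
    return model

The binary cross-entropy compile shown here is one plausible training setup for the sigmoid similarity output and for tracking recall and precision per epoch as described below; the paper also describes a triplet formulation, sketched after the next paragraph.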
The following (3) is known as the triplet loss function, and it may be used to compute gradients:

L(a, p, n) = max(d(a, p) - d(a, n) + margin, 0)   (3)

where "a" represents an anchor image, "n" represents a negative image, "p" represents a positive image, and d(., .) is the distance between two encodings. Another variable is the margin, which indicates the required gap in similarity. For example, if we pick margin = 0.3 and d(a, p) = 0.5, then d(a, n) must be greater than 0.8. This helps in separating the supplied photos. The triplet loss function is used to compute gradients, which are then used to update the parameters of the Siamese CNN.
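A minimal sketch of Eq. (3) as it might be implemented for batches of encodings (assuming a TensorFlow implementation and a squared Euclidean distance, neither of which is stated in the paper):

import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.3):
    # anchor, positive, negative: (batch, embedding_dim) tensors of encodings.
    # margin: the gap enforced between anchor-positive and anchor-negative
    # distances (0.3 matches the worked example above).
    d_ap = tf.reduce_sum(tf.square(anchor - positive), axis=-1)  # anchor-positive distance
    d_an = tf.reduce_sum(tf.square(anchor - negative), axis=-1)  # anchor-negative distance
    return tf.reduce_mean(tf.maximum(d_ap - d_an + margin, 0.0))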
Performance evaluation
We assess our model using four metrics: accuracy, recall, precision, and F1-score. Accuracy in (4) is the fraction of predictions that exactly match the actual data. Precision in (5), also known as positive predictive value (PPV), is the percentage of main face images successfully validated out of all positive predictions. Recall or sensitivity in (6), often known as true positive rate (TPR) in facial recognition applications, refers to the percentage of verified main face photos correctly classified as positive. Specificity, also known as the true negative rate (TNR), is the fraction of all faces that are not the primary face image and are classified as negative; to illustrate, it is the percentage of faces correctly identified as not being the main person's face. The F1-score in (7) is the harmonic mean of precision and recall [25].

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (4)
Precision = TP / (TP + FP)   (5)
Recall = TP / (TP + FN)   (6)
F1-score = 2 x (Precision x Recall) / (Precision + Recall)   (7)
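Since the definitions above are given in words, a small Python helper (not from the paper) makes Eqs. (4)-(7) concrete; TP, FP, FN, and TN denote true positives, false positives, false negatives, and true negatives from the confusion matrix, and the sample call uses made-up counts.

def classification_metrics(tp, fp, fn, tn):
    # Compute the four evaluation metrics in Eqs. (4)-(7) from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a 50-image test set; not the study's actual confusion matrix.
print(classification_metrics(tp=24, fp=0, fn=1, tn=25))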
Kivy framework
Kivy is a cross-platform Python toolkit that can be used to quickly construct apps with novel interfaces. Kivy is a sophisticated Python-based framework for developing mobile apps featuring natural user interfaces (NUI) [26]. Kivy has the following features: support for numerous inputs such as tangible user interface objects (TUIO), multi-touch, mouse, and keyboard; robust APIs for most smartphones; a single application for several operating systems; and compatibility with networking protocols and remote login. Kivy provides many widgets and multi-touch assistance and can be used to customize widgets [27]. Face recognition using a Siamese CNN is proposed in this study and implemented in Python. The Kivy version used to create the user interface in this paper is Kivy 2.0.0.
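As a hedged illustration of how such an interface could be assembled with Kivy 2.0, the skeleton below shows a preview area, a verify button, and a status label; the widget names, layout, and callback are assumptions for the sketch, not the authors' application code.

from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.image import Image
from kivy.uix.label import Label


class FaceVerifyLayout(BoxLayout):
    def __init__(self, **kwargs):
        super().__init__(orientation="vertical", **kwargs)
        self.preview = Image(size_hint=(1, 0.8))           # camera frame would be drawn here
        self.status = Label(text="Uninitialised", size_hint=(1, 0.1))
        self.verify_btn = Button(text="Verify", size_hint=(1, 0.1))
        self.verify_btn.bind(on_press=self.on_verify)
        self.add_widget(self.preview)
        self.add_widget(self.verify_btn)
        self.add_widget(self.status)

    def on_verify(self, instance):
        # A real app would capture the current frame, run the Siamese model
        # against the stored verification images, and update the label.
        self.status.text = "Verified"  # placeholder


class FaceVerifyApp(App):
    def build(self):
        return FaceVerifyLayout()


if __name__ == "__main__":
    FaceVerifyApp().run()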
The novelty of the proposed method
The main novelty is the idea of using the Siamese CNN to study facial recognition and its application on mobile using the Kivy framework. Face recognition is difficult due to the considerable interclass similarities and intraclass variances. To address this, we propose in this research employing a Siamese CNN to provide the computer with the capacity for similarity learning and, as a result, to lower the interclass similarity and intraclass variation of the non-linear representation of pictures of each face. The computer may use this Siamese design to reduce interclass similarity and intraclass variances.
RESULTS AND DISCUSSION
We used the following method to test the effectiveness of our proposed Siamese CNN model in face recognition cases. We began by assessing our model for the different approaches, the quantity of data gathered with and without augmentation, the buffer size, and the epoch count. We used up to 900 face photos in Method I to train the Siamese CNN without augmentation. Methods II and III collected 9,000 face images with augmentation. We explored how the sample count influences the training task. We tested two synthetic samples: face images without augmented datasets and face images with augmented datasets. Because Methods II and III contain a significantly larger dataset, we set the buffer size to 10,000, larger than Method I's buffer size of 1,024. We can also examine the influence of the number of epochs in each technique. Methods I, II, and III used 50, 5, and 32 epochs, respectively. The outcomes of our approaches are shown in Table 1. The findings show that our network can perform relatively well on augmented datasets. In terms of the loss function, Method II outperforms the others by yielding the smallest value of 0.002, indicating that the loss function adequately expresses the amount of misclassification. Method I has a larger loss function than the other methods while having a high recall, precision, and F1-score. This suggests that Method I has a high misclassification rate. Figure 5 depicts the loss, recall, and precision in each epoch of Method I. Method II has the best loss function, 0.002, while having a smaller loss, recall, and precision than Method I. Figure 6 depicts the loss, recall, and precision in each epoch of Method II. Figure 7 shows that recall and precision are equal, but the graph dips again at the fifth epoch. The sole difference between Method III and Method II is the number of epochs. Although Method III has a higher recall and F1-score than Method II, it has a lower loss and precision. Figure 7 depicts the drop in the graph.
After obtaining the Siamese CNN model using Methods I, II, and III, we used the Kivy framework to incorporate the model into a mobile application. For verification, we utilized 50 photos from the positive image collection. These verification images are compared with the input image. The outcomes of detection, validation, and verification are used to test the Method I, Method II, and Method III models for the face detection application. Each approach is compared using a different detection threshold and the same verification threshold. Method A uses a detection threshold of 0.1 and a verification threshold of 0.8. Method B uses a detection threshold of 0.5 and a verification threshold of 0.8. The detection threshold is the statistic that determines whether a single prediction is positive. The verification threshold is the fraction of positive predictions divided by the total number of positive samples. Table 2 shows the results of the comparison of each approach for implementing face recognition using the Kivy framework. Method II (A) achieved the best accuracy of 98%. Figure 8 shows the confusion matrix results of the Siamese CNN architecture model on the testing data. From the 50 samples tested, 9 samples were misclassified in Figure 8(a), 8 samples were misclassified in Figure 8(b), 1 sample was misclassified in Figure 8(c), and 25 samples were misclassified in Figures 8(d), (e), and (f). We can see that the best-performing configuration misclassified only one sample, the fewest of all the methods. There are 50 face images that are tested. The first 25 face images were taken using a camera, where the face is the same as in the positive face images. The second 25 pictures were taken from the LFW image dataset and differ from the face in the positive face images. Figure 9 shows a sample result of the system using the Kivy framework. Methods I (A) and (B) achieve fairly good accuracy, but Method II (A) is still better. Method I uses only a small amount of training data and does not use augmentation techniques. Method II (A) is the best among the methods, using a detection threshold of 0.1 and a verification threshold of 0.8. Method II (B) has low accuracy because it uses a detection threshold that is too high, namely 0.5. Methods III (A) and (B) achieve low accuracy because the model is overfitting. As shown in Table 3, our work achieves better accuracy than most of the other recent work reported in this paper. The accuracy of the joint Bayesian method for the LFW dataset is 90.9%, the accuracy of the Fisher vector faces method for the LFW dataset is 93%, the accuracy of the FR+FCN method for the LFW dataset is 93.6%, the accuracy of the principal component analysis (PCA), discrete cosine transform (DCT) method for the CASIA-WebFace and LFW datasets is 94.8%, the accuracy of the Face++ method for the LFW dataset is 97.2%, and the accuracy of the Siamese CNN method for the LFW dataset is 98%. In the end, it was shown that the Siamese CNN was the best among the others for face recognition.
Table 3 (data). Method, dataset, accuracy:
Joint Bayesian [28], LFW dataset, 90.9%
Fisher Vector Faces [29], LFW dataset, 93%
FR+FCN [30], LFW dataset, 93.6%
PCA, DCT [31], CASIA-WebFace and LFW dataset, 94.8%
Face++ [30], LFW dataset, 97.2%
Siamese CNN (proposed), LFW dataset, 98%
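As a concrete illustration of how the detection and verification thresholds compared above could be combined into a decision rule, the sketch below runs the input image against the 50 stored verification images; this is a plausible reconstruction, not the authors' code, and the function and variable names are hypothetical.

import numpy as np

def verify(model, input_image, verification_images,
           detection_threshold=0.1, verification_threshold=0.8):
    # model: a Siamese network returning a similarity score in [0, 1] for a pair of images.
    # verification_images: the stored positive images (50 in this study).
    # The default thresholds correspond to 'Method A' above.
    scores = []
    for ref in verification_images:
        pair = [np.expand_dims(input_image, axis=0), np.expand_dims(ref, axis=0)]
        scores.append(model.predict(pair, verbose=0)[0][0])
    # A comparison counts as a detection when its score exceeds the detection threshold.
    detections = sum(score > detection_threshold for score in scores)
    # The identity is verified when the detection proportion exceeds the verification threshold.
    verified = detections / len(verification_images) > verification_threshold
    return verified, scores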
CONCLUSION
In this paper, we have proposed a method to improve face detection performance using a Siamese CNN. The experiments show that the proposed method for face detection using the augmentation technique presents superior results compared with not using augmentation. The objective was to improve accuracy, which is the goal of every face recognition system. The LFW image dataset was used to test the approach. The approach was tested on a total of 9,000 face images, with a classification accuracy rate of 98%. The rate of recognition confidence is influenced by the number of photos used for training. This shows that the Siamese CNN can be utilized for real-world face recognition using the Kivy framework. The Kivy framework effectively constructs and tests the suggested facial recognition method and mobile application. The researchers want to validate the method with a variety of datasets in the future. By increasing the number of images used in the technique, the degree of accuracy may be further enhanced.
Figure 1. Face images sample for (a) anchor images and (b) positive images
Figure 2.
Figure 3. Example image using the augmentation techniques
Figure 4. The convolutional architecture selected for verification
Table 1. Performance evaluation of our method without augmentation and with augmentation
Table 3. The comparison results of the proposed Siamese CNN method with existing methods | 2022-05-10T16:42:57.807Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "35527083bf51f4f58b466b4d13f5c0cf1d7fd16f",
"oa_license": null,
"oa_url": "https://ijeecs.iaescore.com/index.php/IJEECS/article/download/26914/16304",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ff7dd9644ded1823d45043dd49abebaba96031e4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
18621337 | pes2o/s2orc | v3-fos-license | Comparison of Two Common Outpatient Preparations for Colonoscopy in Children and Youth
Colonoscopies are often performed in children for diagnostic and therapeutic purposes. Our study compared two bowel-cleansing solutions: sodium picosulphate, magnesium oxide, and citric acid (Pico-Salax) with liquid magnesium citrate as preparations for colonoscopy. A retrospective chart review of all patients seen in the Gastroenterology outpatient clinic and who underwent bowel cleansing in preparation for colonoscopy from February to December 2006 was undertaken. Thirty-two children received Pico-Salax and 36 received liquid magnesium citrate. The tolerability of both solutions was similar. Most children in both groups had liquid stools and complete colonoscopies. Bowel preparation for a colonoscopy can be successfully achieved using either Pico-Salax or liquid magnesium citrate.
Introduction
Examination of the lower gastrointestinal tract by colonoscopy is often required in the pediatric age group, not only for diagnostic but also for therapeutic purposes [1]. There are situations when endoscopists are unable to perform complete colonoscopies due to inadequate bowel cleansing, which is particularly challenging for pediatric patients. Typically, children at our institution undergo a 2-day preparation, which includes the use of laxative agents (Pico-Salax or magnesium citrate) and a clear fluid diet. In some cases children undergo nasogastric tube lavage with a solution containing polyethylene glycol-electrolyte; however, this requires a hospital admission, which increases the cost of the procedure [2].
We compared two bowel-cleansing preparations in an outpatient setting with children undergoing colonoscopies; our primary outcome was to compare the efficacy of the cleansing for endoscopy. Secondary outcomes were to compare tolerability and stool consistency between the two groups.
Materials and Methods
A retrospective chart review was completed for all patients seen in the Gastroenterology outpatient clinic who underwent bowel cleansing in preparation for a colonoscopy from February to December 2006 at the Children's Hospital of Eastern Ontario.
The clinic nurses assigned each patient one of the two bowel preparations used in the outpatient clinic setting at their discretion. One preparation contained magnesium oxide, citric acid with sodium picosulphate (Pico-Salax, Ferring Pharmaceuticals Inc., Canada). This powder consists of sodium picosulphate 10 mg, magnesium oxide 3.5 g, and citric acid 12.0 g per sachet. Magnesium oxide and citric acid form magnesium citrate (when dissolved in water) and is administered as follows (per the manufacturer's recommendations): children from 1 to 6 years of age used 1/4 sachet, those from 6 to 12 years took 1/2 sachet and those from 12 to 18 years used 1 sachet of Pico-Salax once per day for two consecutive days. Liquid magnesium citrate was administered at a dose of 60 mL (1.74 g of magnesium citrate per 30 mL) in children between 10 to 15 kg, 90 mL in children between 16 and 20 kg, 150 mL in children between 21 to 35 kg, and 300 mL in children >36 kg also for two consecutive days. Additional measures included the administration of bisacodyl at a dose of 15 mg in children >20-35 kg or age 6-12 years of age and 20 mg in children >36 kg or 12-18 years of age following Pico-Salax or magnesium citrate. Castor oil (15-30 mL) was given to children younger than 6 years of age. Children remained on a clear liquid diet for the two days prior to the procedure. Physicians recorded success of bowel preparation systematically including ease of endoscopy classified by need of irrigation and suctioning, minimal suctioning, or completion of procedure without irrigation and/or suctioning. Ease of endoscopy was considered excellent when there was minimal suctioning or there was no need of irrigation or suctioning. Tolerability was assessed by the recording of complaints such as vomiting, cramping and abdominal pain.
Statistical analyses were performed using R software (V2.7.2). Two-sided P-values of less than .05 were considered statistically significant. Continuous variables were summarized using mean and standard deviations. Categorical variables were summarized using frequency and percentage. Logistic regression was performed to compare the odds of primary outcome (excellent outcome) between the Pico-Salax and magnesium citrate groups with and without controlling for age. The rate of excellent outcome in children under age 6 and above age 6 was compared using Fisher's exact test. Secondary outcomes (tolerability and stool type) were also compared between the two groups using Fisher's exact test.
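For readers who prefer to see the analysis laid out concretely, the following is a minimal sketch of the two main comparisons written in Python (the original analysis was run in R). The toy data frame and variable names are illustrative assumptions, while the 2x2 table for the age comparison uses the counts reported in the Results section.

```python
# Hedged sketch of the statistical comparisons; toy data, not the study data.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import fisher_exact

# placeholder chart-review data: outcome, preparation group, age
df = pd.DataFrame({
    "excellent": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "picosalax": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "age":       [12, 5, 14, 9, 13, 16, 4, 11, 13, 8, 7, 15],
})

# odds of excellent outcome, with and without controlling for age
unadjusted = smf.logit("excellent ~ picosalax", data=df).fit(disp=False)
adjusted = smf.logit("excellent ~ picosalax + age", data=df).fit(disp=False)
print(unadjusted.params, adjusted.params)

# Fisher's exact test: excellent outcome in children <6 vs >6 years
# (5/9 vs 33/59, the counts given in the Results section)
odds_ratio, p_value = fisher_exact([[5, 4], [33, 26]])
print(odds_ratio, p_value)
```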
Results
A total of 68 children were included in the study: 32 received Pico-Salax and 36 received magnesium citrate. Mean age was 11.6 ± 3.9 years. Of the 68 children, 37 (54.4%) were males. Patient characteristics are reported in Table 1.
Mean age was significantly different between the two groups (P = .0042). Only 3.1% (1/32) of children that received Pico-Salax were under age 6, whereas 22.2% (8/36) of children that received magnesium citrate were under age 6. Therefore, Pico-Salax was more likely to be used on older children whereas magnesium citrate was preferred for use on younger children. Fifty-nine percent (19/32) of children in the Pico-Salax group and 52.8% (19/36) of the magnesium citrate group had excellent outcome (OR = 1.31, 95% CI 0.5-3.4, P = .59) ( Table 2). The logistic regression analysis controlling for age estimated that the odds ratio of having excellent outcome between the Pico-Salax group and magnesium citrate group was 1.50 (95% CI, 0.54-4.20, P = .44). Therefore, there was no statistically significant evidence that the odds of excellent outcome were different between the two bowel-cleansing solutions.
The percentage of excellent outcome between children less than 6 years of age and above 6 years was also compared. It was found that 55.6% (5/9) of children under age 6 had excellent outcome and 55.9% (33/59) of children above age 6 had excellent outcome (P = 1.00). Therefore, there was no statistically significant evidence that the percentage of excellent outcome was different between children less than 6 years and above age 6.
Tolerability of both cleansing agents was similar, as evidenced by the lack of difference in adverse events (vomiting, cramping, and pain), with most patients having no symptoms to report (Table 3). The magnesium citrate group tended to have more liquid stools than the Pico-Salax group, as reported in Table 3; however, this difference did not reach the 0.05 significance level, and the looser stools in the magnesium citrate group did not come at the cost of additional adverse events.
The percentage of complete colonoscopy was 97% in the Pico-Salax group and 92.7% in the magnesium citrate group. There were two patients (one in each group) in whom a complete colonoscopy could not be performed due to the presence of formed stools; these patients were eventually admitted to hospital for nasogastric tube lavage with Golytely (polyethylene glycol 3350 and electrolyte solution).
Discussion
Adequate bowel preparation is crucial for a successful colonoscopy. This can be significantly challenging in the pediatric population, as acceptance and tolerability of available agents might be poor.
Two bowel preparation protocols are available at our center (Pico-Salax and liquid magnesium citrate), used in combination with either bisacodyl or castor oil. Interestingly, there were no significant differences found between Pico-Salax and magnesium citrate in our study. Tolerability and efficacy of both methods were similar; this could be explained by the presence of magnesium citrate (an osmotic agent) in both solutions as the main ingredient. The addition of sodium picosulphate (a contact laxative that stimulates smooth muscle contraction) to magnesium oxide and citric acid (which form magnesium citrate when dissolved in water) did not reveal higher benefits in our patient population.
One of the limitations of the present study is the potential imbalance in other clinical parameters (such as age) between the two groups. This imbalance could confound the effect of bowel preparation methods on the bowel preparation outcome. Young children are the most difficult to prepare for a colonoscopy. We compared outcomes in children less than 6 years to children above 6 years and found no difference in the ease of endoscopy or the tolerability of the cleansing agent. This could be related to the small cohort in our study; 900 subjects would need to be included to detect a significant difference with 80% power.
One would assume that older children's bowel preparation would be of better quality than younger children's; however, this was not the case in our study. One could speculate that older children may not have been entirely compliant in taking the laxative agent or that the diet was not fully followed, as older children might have had less supervision.
There are controversies regarding the best combination of laxatives as well as the need for dietary restrictions. Abubakar et al. [3] stated that the use of oral bisacodyl for two consecutive days prior to the procedure and a phosphate enema the morning of the procedure provided excellent bowel preparation; these children did not follow any dietary restrictions.
On the other hand, Dahshan et al. [4] concluded that the 2-day bisacodyl preparation yielded poor bowel cleansing as compared to either two Golytely or a combination of magnesium citrate and X-prep (senna fruit, sugar, and 7% alcohol).
Others believe that dietary restriction is a limiting factor for successful bowel cleansing and particularly in children. El-Baba et al. [5] showed that the administration of a prepackaged diet kit (low residue solid and liquid food kit) in combination with magnesium citrate and bisacodyl the day before the procedure was more effective than oral sodium phosphate and liquid diet in preparation for colonoscopy.
Other methods of bowel cleansing, such as the oral administration of sodium phosphate solution, carry a higher risk of electrolyte disturbances, particularly hyperphosphatemia, hypocalcemia, and hypokalemia, in up to 57% of adult patients [6,7] and could be suspected to also be problematic in children and youth.
Polyethylene glycol-electrolyte containing solutions are highly efficient in preparation for colonoscopy in children [4,8]; the limitation of this modality for bowel cleansing is the large volume required to achieve adequate results. Furthermore, most children are unable to take this solution orally and need to be admitted to hospital for nasogastric tube lavage, which can be inconvenient and more expensive.
Pico-Salax has been shown to have high efficacy and safety in adults as a method of preparation for colonoscopies [9][10][11][12][13]. Pinfield and Stringer [14] described the efficacy of Pico-Salax in a pediatric randomized study that compared it to bisacodyl. All children (N = 32) on Pico-Salax were reported to have good or excellent preparation and fewer episodes of abdominal pain than the group given bisacodyl. In our study, symptoms of abdominal pain or vomiting were low, with both groups having similar tolerability rates.
The high efficacy of magnesium citrate as a cleansing agent in adults was reported by Chen et al. [15] where magnesium citrate in combination with bisacodyl was tolerated better and was more effective than castor oil. Magnesium citrate provided the day before the colonoscopy proved more efficacious than oral sodium phosphate in adults [16]. Most recently Sabri et al. [17] compared magnesium citrate and oral sodium phosphate showing similar tolerability and efficacy in adolescents. In the present study we used magnesium citrate in combination with bisacodyl or castor oil with good results that were similar to the use of Pico-Salax.
In summary, comparison of the two available bowel preparations showed no significant difference with regard to ease of endoscopy and tolerability. Colonoscopy with minimal or no need for irrigation and/or suctioning was achieved in about half of the patients in each group, with the rest needing some intervention by the colonoscopist; a successful complete colonoscopy was nevertheless achieved in the majority of patients. The retrospective nature of the study and the small study cohort represent potential limitations, and larger studies may be warranted.
Limit Set of Trajectories of the Coupled Viscous Burgers' Equations
In this letter, a coupled system of viscous Burgers' equations with zero Dirichlet boundary conditions and appropriate initial data is considered. For the well-known single viscous Burgers' equation with zero Dirichlet boundary conditions, the zero equilibrium is the unique global exponential point attractor. A similar property is shown for the coupled Burgers' equations, i.e., trajectories starting with initial data which is not too large approach the zero equilibrium as time goes to infinity. This "approaching" or convergence is not necessarily exponentially fast, unlike the single viscous Burgers' equation.
Introduction
We consider the following coupled Burgers' equations and appropriate initial conditions for 0 < x < 1, t > 0. Here a, b are constants. This coupled system, derived by Esipov [1], is a simple model of sedimentation or evolution of scaled volume concentrations of two kinds of particles in fluid suspensions or colloids, under the effect of gravity. The constants a, b depend on the system parameters such as the Péclet number, the Stokes velocity of particles due to gravity, and the Brownian diffusivity. In [1] Esipov reported numerical simulations for (1.1)-(1.3) and compared the results with experimental data.
In this letter, we consider the dynamical aspects of the coupled system. We show that trajectories (orbits of solutions) of this coupled system approach the zero equilibrium as t → +∞ when the initial data u(x, 0), v(x, 0) is not too large in some Sobolev norm. That is, the ω-limit set of these trajectories is the zero equilibrium.
Single viscous Burgers' equation
We first recall an interesting property about the ω-limit set for the well-known (single) viscous Burgers' equation ([2]), i.e., the equations in (1.1) without the nonlinear coupling term, with boundary and initial conditions as above. The interest in the Burgers' equation arises because it is a simple one-dimensional analog of the Navier-Stokes equation. The importance of the Burgers' equation is due to the nonlinear convection term uu_x. In the following, L^2(0, 1), L^∞(0, 1), H^1_0(0, 1) and H^2_0(0, 1) are the usual Sobolev spaces, while C(0, 1) is the space of continuous functions. We denote by ||·|| the usual L^2(0, 1) norm. All integrals are with respect to x ∈ [0, 1], unless specified otherwise.
Multiplying the equation (2.1) by w and integrating over x ∈ [0, 1], we get a differential inequality for (1/2)||w||^2, where we have used the Poincaré inequality in the last step. Thus, by the Gronwall inequality ([3], p.88), we further have exponential decay of ||w(t)||. This means that all trajectories converge in the L^2(0, 1)-norm to the zero equilibrium exponentially fast, i.e., the ω-limit set ([4]) of every trajectory is the zero equilibrium. The zero equilibrium is the global point attractor. See [5] and [6] for further results in this regard. This property also holds for higher dimensional viscous Burgers type convection-diffusion equations ([7]).
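For completeness, here is a minimal reconstruction of that energy estimate, written under the assumption of unit viscosity, w_t + w w_x = w_xx (the viscosity coefficient in (2.1) is not reproduced above, so it is taken equal to 1 for this illustration). Multiplying by w, integrating over [0, 1], and using the zero Dirichlet boundary conditions,

(1/2) d/dt ||w||^2 = ∫_0^1 w w_xx dx − ∫_0^1 w^2 w_x dx = −||w_x||^2 − [w^3/3]_0^1 = −||w_x||^2 ≤ −π^2 ||w||^2,

where the last step is the Poincaré inequality ||w||^2 ≤ π^{-2} ||w_x||^2 on (0, 1). Gronwall's inequality then gives ||w(t)|| ≤ e^{−π^2 t} ||w(0)||, which is the exponential convergence referred to above.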
ω-limit set of the coupled viscous Burgers' equations
In this section, we show that trajectories of the coupled viscous Burgers' equations (1.1)-(1.3), whose initial data is not too large in the H^1_0(0, 1)-norm, converge to the zero equilibrium in the max-norm. This convergence, though, is not necessarily exponentially fast, unlike the situation for the single viscous Burgers' equation. For initial data u(x, 0), v(x, 0) ∈ H^1_0(0, 1), local-in-time existence and uniqueness of classical solutions to (1.1)-(1.3) can be easily shown by the usual semigroup method; cf. [8], Theorems 3.3.3, 3.3.4 and 3.5.2. Global existence and uniqueness of classical solutions then follows once we show a priori that the solutions do not become unbounded in the H^1_0(0, 1)-norm at finite positive time. Moreover, the solutions, whenever they exist, are in H^k_0(0, 1), k = 1, 2, ... (cf. [8], p.73). Note that I(t) ≤ J(t) ≤ K(t) whenever u, v exist, by the Poincaré inequality.
We will show that J(t) → 0 as t → +∞, when J(0) is bounded by some constants depending only on the system parameters a, b in (1.1).
Multiplying the first and second equations in (1.1) by −u_xx and −v_xx, respectively, adding the two resulting equations and then integrating over x ∈ [0, 1], we obtain the estimate (3.4). We now estimate the right-hand side of (3.4) term by term. For u ∈ H^1_0(0, 1), we have the so-called Agmon inequality, where ||u||_∞ is the L^∞-norm. Using the Cauchy-Schwarz inequality ([9], p.183) and the Agmon inequality, with the fact that I(t) ≤ J(t) in the last step, we bound ∫ u_x^3 dx; the same inequality holds for ∫ v_x^3 dx. Similarly, we bound ∫ u_x^4 dx, and the same inequality holds for ∫ v_x^4 dx. Moreover, we also bound ∫ u^4 dx, and the same estimate holds for ∫ v^4 dx. Substituting (3.6), (3.8) and (3.9) into (3.4), and using Young's inequality ([3], p.108), we finally get a differential inequality involving a parameter ε > 0. Taking ε = 4/5 and noting that J(t) ≤ K(t), we now have the differential inequality (3.11) for J(t). The comparison equation (3.12) has fixed points 0 and J_+ = (−1 + √(1 + 9000(|a| + |b|)^4)) / (1800(|a| + |b|)^4).
The third fixed point is negative and is discarded since J(t) is always nonnegative by definition (3.2). We calculate f′(0) < 0 and f′(J_+) > 0. So the fixed point 0 is stable while J_+ is unstable; see [10], p.187 or [11], p.8. That is, solutions of the comparison equation (3.12) with initial value below J_+ tend to zero as t → +∞. Due to the standard comparison result for ordinary differential inequalities and equations ([12], p.69), the function J(t) satisfying the differential inequality (3.11) also approaches zero as t goes to infinity, when J(0) < J_+.
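A small numerical illustration of this comparison argument is sketched below. The right-hand side f is a generic cubic with a stable fixed point at 0, an unstable fixed point at J_plus, and a discarded negative root; the coefficients are hypothetical placeholders and do not reproduce the actual constants in (3.12), which depend on a and b.

```python
# Illustrative sketch only: a generic comparison ODE with the same fixed-point
# structure as (3.12); coefficients are placeholders, not the paper's constants.
from scipy.integrate import solve_ivp

J_plus = 1.0  # hypothetical unstable fixed point

def f(t, J):
    # roots at 0 (stable), J_plus (unstable) and a negative value (discarded)
    return [-J[0] * (J_plus - J[0]) * (J[0] + 2.0)]

for J0 in (0.2 * J_plus, 0.9 * J_plus):
    sol = solve_ivp(f, (0.0, 30.0), [J0], rtol=1e-8, atol=1e-10)
    print(f"J(0) = {J0:.2f}  ->  J(30) = {sol.y[0, -1]:.6f}")
# both trajectories decay towards the stable fixed point 0, as in the text;
# for J(0) > J_plus no such conclusion can be drawn
```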
We have thus shown that, if the initial data satisfies (1/2)(||u′_0||^2 + ||v′_0||^2) < J_+, the corresponding classical solutions and hence trajectories exist for all t > 0, since the H^1_0(0, 1)-norm, i.e., J(t), in this case is bounded. Moreover, these trajectories approach the zero equilibrium as t → ∞. The zero equilibrium is the ω-limit set of these trajectories. We remark that this convergence is not necessarily exponentially fast, unlike the single Burgers' equation. If (1/2)(||u′_0||^2 + ||v′_0||^2) > J_+, however, we cannot conclude anything about the corresponding trajectories based on the above dynamical-systems-style analysis.
Note that ||u_x|| and ||v_x|| are actually the H^1_0(0, 1)-norms of u and v, due to the Poincaré inequality, and note also that H^1_0(0, 1) is embedded in C(0, 1). So the above convergence of trajectories holds in the max-norm in C(0, 1).
Then the unique global classical solutions of the coupled system (1.1)-(1.2)-(1.3) exist, and the corresponding trajectories (orbits of solutions) approach the zero equilibrium in the max-norm. That is, the zero equilibrium is the ω-limit set of these trajectories.
p53-dependent G2 arrest associated with a decrease in cyclins A2 and B1 levels in a human carcinoma cell line
In vivo transfer of wild-type (wt) p53 gene via a recombinant adenovirus has been proposed to induce apoptosis and increase radiosensitivity in several human carcinoma models. In the context of combining p53 gene transfer and irradiation, we investigated the consequences of adenoviral-mediated wtp53 gene transfer on the cell cycle and radiosensitivity of a human head and neck squamous cell carcinoma line (SCC97) with a p53 mutated phenotype. We showed that ectopic expression of wtp53 in SCC97 cells resulted in a prolonged G1 arrest, associated with an increased expression of the cyclin-dependent kinase inhibitor WAF1/p21 target gene. A transient arrest in G2 but not in G1 was observed after irradiation. This G2 arrest was permanent when exponentially growing cells were transduced by Ad5CMV-p53 (RPR/INGN201) immediately after irradiation with 5 or 10 Gy. Moreover, levels of cyclins A2 and B1, which are known to regulate the G2/M transition, dramatically decreased as cells arrived in G2, whereas maximal levels of expression were observed in the absence of wtp53. In conclusion, adenoviral mediated transfer of wtp53 in irradiated SCC97 cells, which are mutated for p53, appeared to increase WAF1/p21 expression and decrease levels of the mitotic cyclins A2 and B1. These observations suggest that the G2 arrest resulted from a p53-dependent premature inactivation of the mitosis promoting factor. © 2000 Cancer Research Campaign
The p53 gene is probably the most frequently altered (mutation, deletion) tumour suppressor gene in human cancer cells (Friend, 1994;Sherr, 1996). It also plays a key role in the regulation of apoptosis and G1 arrest after treatment with DNA-damaging agents such as irradiation. The critical biochemical function of wild type (wt) p53 is its ability to bind to specific DNA sequences and to activate the transcription of genes such as GADD45 (Growth Arrest after DNA Damage), Bax involved in the regulation of apoptosis and p21 WAF1/CIP1 gene, whose product inhibits several cyclin/cyclin-dependent kinases complexes, essential for cell cycle progression (Deng et al, 1995). Ionizing radiations induce a delay both in G1 and G2. However, only the G1 delay appears to depend on p53 (Kastan et al, 1991). Hence most of the transformed and cancer cells which exhibit a mutated p53 fail to arrest in G1 after irradiation but still undergo a G2 arrest (Kastan et al, 1991).
The mechanisms underlying the radiation-induced G2 arrest are not fully understood. The progression of cells through G2 and M phases is regulated in eukaryotes by MPF (mitosis promoting factor) which includes two proteins, a cyclin (cyclin B) and a cyclin-dependent kinase (cdk) p34 cdc2 (cdk1). Another complex, cyclin A2-cdk1, is activated before cyclin B-cdk1 and is also required for the progression from G2 to mitosis. The kinase activity of cdk1 depends on both its association with the cyclin subunit and phosphorylation of specific residues. Levels of cdk1 remain relatively constant throughout the cell cycle, whereas levels of cyclins A and B fluctuate with a maximum in M phase (Sherr, 1996). Irradiation has been shown in mammalian cells to induce a decrease in cdk1 activity, related to a decrease of cyclin B1 and/or to an inhibition of cdk1 phosphorylation (Jin et al, 1996). Indeed cyclin B1 is a rate-limiting component of the radiation-induced G2 delay (Kao et al, 1997). Down-regulation of cyclin B1 occurs at least in part by a post-transcriptional regulation of mRNA levels (Maity et al, 1995). In addition, Azzam et al (1997) showed in various cell lines that cdk1 is down-regulated by γ-radiation through a p53-dependent mechanism.
Several studies have recently suggested that wtp53 might also regulate the cell cycle arrest in G2/M after DNA damage (Agarwal et al, 1995;Allday et al, 1995;Aloni-Grinstein et al, 1995;Guillouf et al, 1995;Han et al, 1995;Powell et al, 1995;Stewart et al, 1995;Pellegata et al, 1996;Skladanowski et al, 1997). However, the contribution of p53 seems to be complex and to depend on the cell type. Most studies were performed in rodent cells and relatively little is known about this effect in human tumour cell lines, although a recent study by Hermeking et al (1997) has shown a p53-regulated inhibition of G2M progression in a human rectal carcinoma model. In addition, none of these studies has investigated the effect of p53 expression on the levels of cyclins A2 and B1 during the G2 arrest.
In our study we further examined the consequences of irradiation combined with the expression of wtp53 after gene transfer on (i) the radio-induced G2 arrest, (ii) expression of the mitotic cyclins A2 and B1, and (iii) apoptosis. Since this study was designed in the context of combining p53 gene therapy and irradiation, we have used a recombinant adenovirus to transfer the wtp53 gene in a head and neck squamous cell carcinoma line SCC97, which exhibited a mutated p53 phenotype. An adenoviral vector was used since it ensures a particularly high gene transfer efficiency, and is commonly used in most gene therapy studies (Gallardo et al, 1996; Clayman et al, 1996).
Cell line
The study has been performed using the human squamous cell carcinoma cell line SCC97 (gift of G Clayman, MD Anderson, Houston, TX, USA). Cells were grown in modified essential medium (MEM) supplemented with 15% fetal calf serum (FCS) with penicillin-streptomycin and 2 mM glutamine at 5% carbon dioxide. The clonogenic survival curve after γ-irradiation has been established and the surviving fraction at 2 Gy was 59%.
SCC97 cells exhibit a mutation of p53 gene, that was characterized by sequencing. A point mutation (transversion T-A) at the second base of the fifth intron inactivates the normal splicing donor site. An alternative splicing therefore occurs between a GT sequence in exon 5 as a donor site and the usual acceptor site. This results in a 46 bp deletion in the coding sequence in exon 5 and a frame-shift, leading to a truncated unstable p53 which cannot be detected by immunofluorescence or Western blot. Western blot analysis and flow cytometry have been performed using DO7 and DO1 monoclonal antibodies against the N-terminal part of the human p53 protein, showing that the mutated p53 protein was not detectable.
Recombinant adenovirus
A human non-replicative recombinant adenovirus type 5, with a complete deletion of E1A and a partial deletion of E3 regions, was used as vector to infect the cell line in vitro. This vector encoded the human wtp53 gene driven by the human cytomegalovirus (CMV) promoter. A batch was provided by RPR-Gencell (Vitry, France, Ad5CMV-p53 RPR/INGN201). The construction and properties of Ad5CMV-p53 have been reported elsewhere (Zhang et al, 1994). The vector AV1.0CMV/293 was used as a control. Viruses were propagated, purified, titrated in 293 cells transcomplementing for E1 gene products, and assessed for the absence of contaminating wild-type recombinant virus using protocols previously described. Viral titre was determined by UV-spectrophotometric analysis (viral particles ml-1) and by plaque forming assay (pfu ml-1). Titres of the viral stocks used were between 10^9 and 10^11 pfu ml-1. Viruses were stored at -80°C.
Gene delivery and fluorescence-activated cell sorter analysis
Viruses were diluted in Dulbecco's MEM (DMEM) supplemented with 2% FCS in polypropylene tubes. In vitro transductions were performed by plating 1 × 10^6 cells in 75-cm^2 flasks. Twenty-four hours after plating, cells were incubated for 1 h at 37°C with purified virus in 6 ml of MEM supplemented with 10% FCS. At various times after infection, cells were trypsinized and fixed by the addition of a formaldehyde solution (4% in phosphate-buffered saline (PBS)) for 20 min at room temperature. Cells were pelleted and permeabilized in 0.2% Triton X-100 in PBS for 10 min at room temperature. After three rinses in Power Block 1X (Biogenex), cells were incubated with the anti-p53 antibody (p53-FITC DO7, 1/200) (Dako) for 2 h at room temperature. Cells were rinsed twice in Power Block and then stored at 4°C until analysed. A control was done by incubating cells with a mouse IgG1 fluorescein isothiocyanate (FITC) (1/100) (Becton Dickinson) to estimate the amount of non-specific binding of mouse monoclonal antibodies.
p53 mRNA expression
Total RNAs were extracted from SCC97 cells and mRNAs were purified with the QuickPrep Micro mRNA Purification Kit (Pharmacia Biotech). To determine the p53 phenotype of the cells, a functional test was performed in yeast, which reveals the loss of the transactivating function of the mutated p53 protein (Flaman et al, 1995). Briefly, p53 mRNAs were reverse transcribed and amplified by polymerase chain reaction (PCR). The Saccharomyces cerevisiae strain yIG397 was co-transformed with the reverse transcription PCR (RT-PCR) product and a linearized expression vector (pSS16) carrying the 5′ and 3′ ends of the p53 open reading frame and the LEU2 gene. Yeast colonies in which a homologous recombination between the linearized pSS16 and the PCR product had occurred expressed constitutively p53 and were selected on medium lacking leucine. The strain used also contains an integrated plasmid with the ADE2 open reading frame under the control of a p53-responsive promoter. Yeasts transformed with a plasmid encoding wtp53 grew normally and formed white colonies (ADE2+). Yeasts expressing mutant p53 failed to express ADE2 and formed small red colonies because adenine is limiting for cell growth (phenotype ADE2 -).
Analysis of apoptosis and cell cycle
Detection of apoptotic cells by flow cytometry was performed using the in situ cell death detection kit, fluorescein, by TUNEL (terminal deoxynucleotidyl transferase (TdT)-mediated dUTP nick end labelling, Boehringer Mannheim). The TUNEL assay was performed using a procedure previously described. Briefly, cells were trypsinized and centrifuged with the culture medium to pellet in the same tube attached cells and cells in suspension. Cells were washed in bovine serum albumin (BSA) solution (1% in PBS) and fixed in 4% paraformaldehyde (in PBS, pH 7.4) for 30 min at room temperature. Cells were pelleted and permeabilized with 0.3% Triton X-100 in sodium citrate 0.1% for 2 min on ice. Cells were then rinsed twice in PBS and incubated with the TUNEL reaction mixture. A negative control was done by incubating cells in the reaction mixture without terminal transferase and a positive control was provided by incubating cells with DNAase 1 (Boehringer Mannheim, 50 mg ml-1 in PBS, for 10 min at room temperature) prior to the TUNEL reaction. Cells were then washed twice in PBS and incubated for 30 min at 37°C with 10 mg ml-1 RNAase and with propidium iodide at 500 mg ml-1. Two-colour fluorescence analysis was done on a fluorescence-activated cell sorter (FACS).
Cells analysed for their DNA content were fixed in ice-cold 70% ethanol and stored at -20°C until analysis. For FACS analysis, cells were resuspended in PBS and stained with propidium iodide as described above.
Cell synchronization
Cells were incubated for 24 h with 200 µM mimosine (Sigma), a plant amino acid, to synchronize them in G1/S.
Irradiation
Cells were irradiated with a 137Cs γ-ray source at a dose rate of 1.45 Gy min-1, at room temperature, in culture dishes.
Expression of wtp53 after adenovirus-mediated gene transfer
Expression of wtp53 mRNA in SCC97 cells infected with Ad5CMV-p53 was estimated using the yeast functional test (Flaman et al, 1995). The percentage of white yeast colonies reflected the ratio between the ectopic mRNA encoding wtp53 and the endogenous mRNA encoding mutant p53. We found 90% of white colonies starting with mRNAs purified from SCC97 cells 48 h after infection with Ad5CMV-p53. This value was independent of the multiplicity of infection (MOI) in the range of 25-800 MOI. Using the same assay, we found that expression of wtp53 was already detectable 2 h after infection and reached a plateau 4 h after infection (Figure 1A). These results indicated that wtp53 mRNA was efficiently expressed from the viral vector.
The proportion of cells expressing the wtp53 protein was then determined by FACS analysis as a function of the MOI (Figure 1B, left). A maximum of 90% of wtp53-positive cells was observed at 100 MOI. Therefore, we subsequently used this MOI to assess the effect of wtp53 in the whole cell population, expressing wtp53 relatively homogeneously. The wtp53 protein was rapidly expressed after infection since it was detectable by flow cytometry 4 h after infection and was maximal in 90% of cells after 10 h (Figure 1B, right). In addition, Western blot analysis revealed that a band corresponding to wtp53 was detected at 4 h and that the amount of wtp53 still increased after 10 h (Figure 1C). After cell lysis, the proteins, separated by SDS-PAGE, were transferred to a membrane and probed with antibodies recognizing p53 (DO7). The band observed corresponded to wtp53 since the endogenous p53 truncated protein in SCC97 was not recognized by the antibody we employed. Cells infected with the control vector AV1.0CMV/293 were not arrested and maintained an exponential growth. Irradiation (5 or 10 Gy) of SCC97 cells induced predominantly a cell arrest in G2 but not in G1. This G2 arrest was reversible and had almost completely disappeared 48 h after irradiation (Figure 2A). In contrast, when cells were infected with Ad5CMV-p53 immediately after irradiation at 2.5, 5 (data not shown) or 10 Gy, the G2 arrest was still observed 48 h after irradiation (Figure 2A). In addition, the fraction of cells blocked in G2 was dose-dependent: 21% for 2.5 Gy, 41% for 5 Gy and 81% for 10 Gy. This protracted G2 arrest was p53-dependent since cells transduced with the control vector AV1.0CMV/293 showed a transient G2 arrest as uninfected irradiated cells (Figure 2A).
Apoptosis
The occurrence of apoptosis after p53 gene transfer and/or irradiation was studied by the TUNEL assay followed by flow cytometry 24 h (data not shown) and 48 h after treatment ( Figure 2B). The cell cycle was analysed in parallel. Almost no apoptosis was detected in control uninfected cells (0.5%) and in cells infected with the control vector AV1.0CMV/293. When these cells were irradiated (10 Gy), or infected by the control adenovirus, apoptosis occurred in both cases in 26-35% of cells ( Figure 2B). Infection with Ad5CMV-p53 induced apoptosis in 18% of cells, equally distributed in G1 and G2 phases of the cell cycle. This value increased to 84% when infection with Ad5CMV-p53 was performed after irradiation. These apoptotic cells were in the G2 phase for most of them. This was a large scale effect since no cells remained attached to the culture dishes 72 h after irradiation and infection. In conclusion, these results showed that the G2 arrest induced by irradiation which was prolonged when combined to wtp53 gene transfer, was rapidly followed by a massive apoptosis. In contrast, when cells were infected first by Ad5CMV-p53 and subsequently irradiated, a cell cycle arrest in the G1 phase was observed, with no significant increase of apoptosis (data not shown).
Expression of p21, cyclin A2 and cyclin B1 in synchronized cells after wtp53 gene transfer
To study the mechanisms underlying the cell cycle arrest after irradiation associated with Ad5CMV-p53 infection, we used a homogeneous cell population synchronized in G1/S. Figure 3 shows that SCC97 cells were efficiently synchronized in G1/S upon treatment with 200 µM mimosine, with 89% of cells arrested at the G1/S boundary (T0, which represents the time of mimosine removal). Fourteen hours after release from the mimosine block, almost all the cells were in late S phase and in G2. Finally, 20 h after drug release, most of the cells were in G2 and a small percentage (11%) of them were already in the G1 phase of the next cell cycle.
Irradiation of cells at the time of mimosine release (T0) did not inhibit S phase entry but induced a delay in G2, which was clearly observed at t = 16 h. At t = 20 h, only a small proportion of cells had escaped the delay in G2 and were in G1, compared to the control non-irradiated cells (data not shown).
Western blot analysis showed that in control non irradiated cells the expression of cyclins A2 and B1 peaked 14 h and 16 h after release from the mimosine block respectively, corresponding to cells in late S and G2 phases (data not shown). After irradiation (5 Gy), the maximal expression of both cyclins was delayed and occurred between 12 and 22 h (cyclin A2) and peaked at 20 h (cyclin B1) (Figure 4). Maximal expression of these mitotic cyclins was therefore coincident with the delay in G2.
The expression patterns of cyclin A2 (Figure 4A) and cyclin B1 (Figure 4B) after irradiation (5 Gy) were then compared with or without wtp53 expression. The levels of expression of cyclins A2 and B1 were dramatically reduced in cells infected with wtp53, as compared to non-infected cells (Figure 4A and B) or cells infected with the control vector (data not shown), and decreased to undetectable levels at 20-22 h. To determine if this effect was a direct consequence of wtp53 expression, the same experiment was repeated without irradiation. Figure 5 shows that the expression of wtp53 alone was sufficient to decrease the expression of both cyclins A2 and B1, without irradiation.
Finally, Figure 6 shows that p21/waf1 protein detected by Western blot was present only after expression of wtp53 protein and that irradiation alone did not induce p21 expression. There was no difference regarding p21 expression between wtp53 expressing cells with or without irradiation.
DISCUSSION
In these experiments, we showed that wtp53 expression in SCC97 cells was associated with the restoration of p21 expression and a G1 arrest. However, when wtp53 was transferred immediately after irradiation, a prolonged G2 arrest was observed, with more than 80% of the cells arrested in G2, 48 h after treatment, followed by a massive apoptosis. This phenomenon was not related to the viral infection since neither G2/M arrest nor increased apoptosis was observed with the control null virus. In addition, it was not due to γ-irradiation alone since the proportion of apoptotic cells following irradiation alone (about 30%) was much lower than when wtp53 was transferred after irradiation (> 80%). Such a large increase of apoptotic cells after irradiation has also been observed in this histological type, using a protein kinase C inhibitor in combination with irradiation (Chmura et al, 1997). In addition, our results suggest that the timing of γ-irradiation before wtp53 gene transfer can be critical to obtain maximal effects in terms of apoptosis and cell killing. These observations could have important implications for future clinical protocols.
The wtp53-dependent G2 arrest observed in irradiated asynchronous cells was also observed in cells synchronized with mimosine, in the absence of irradiation. Indeed, when cells were released from the mimosine block, wtp53 induced a prolonged block in G2. This was also observed by Agarwal et al (1995), who showed in Li-Fraumeni syndrome fibroblasts expressing wtp53 under the control of the tetracycline promoter that expression of wtp53 without any irradiation could mediate a predominant G2 arrest when cells were released from a mimosine block. In their case, wtp53 expression blocked the cells (for as long as 20 days) reversibly, in contrast with our results showing a G2 arrest followed by massive apoptosis. It should also be pointed out that the level of expression of the exogenous wtp53 following infection was probably much higher than the physiological one, since we used a strong CMV promoter. However, Agarwal et al (1995) showed that the G2/M arrest could be achieved in their model at physiological levels of p53. Our data suggest that the consequences of the p53-dependent G2 arrest can vary according to the model: massive apoptosis in our case, prolonged but reversible G2 arrest in the Agarwal model, and a third consequence has also been reported by Aloni-Grinstein et al (1995), who showed that expression of wtp53 was able to mediate differentiation during the G2/M growth arrest in a murine pre-B-cell line.
The influence of p53 on the G2 arrest has also been observed by Powell et al (1995) who described a differential sensitivity of p53(-) and p53(+) cells to caffeine induced radiosensitization. They showed that there was a complete override of G2/M arrest in p53(-) cells, but no impact in p53(+) cells also suggesting that wtp53 may help cells to overcome the G2 arrest. In addition, Winters et al (1998) reported a p53-dependent pathway which could operate after exposure of human cells to ionizing radiation to promote G2 arrest accompanied by nuclear translocation rather than inhibitory phosphorylation of CDC2. Our results are in agreement with these different studies. Nevertheless we observed that wtp53 blocked the cells in G2/M and induced massive apoptosis of these cells, which was in contrast with the studies of Skladanowski et al (1997) and Guillouf et al (1995) who showed that wtp53 was not able to block the cells in G2/M but could promote the exit from the G2/M arrest induced respectively by γ-irradiation and etoposide and then was followed by apoptosis. However, these results were obtained with a murine myeloid leukaemia cell line which could have a different and specific behaviour since Han et al (1995) described an apoptosis p53-independent mechanism at the G2 checkpoint in cells after X-irradiation in HL-60 cells (human promyelocytic leukaemia).
The molecular mechanisms involved in the p53-regulated G2/M arrest are under investigation. A recent study by Hermeking et al (1997) has found that a protein called 14-3-3 sigma was a p53-regulated inhibitor of G2/M progression in a human cancer cell line. In our study, we analysed the molecular mechanisms of this G2/M arrest by studying cyclins A2 and B1, which are known to be implicated in the G2/M transition. Cells which arrived in G2 expressed cyclins A2 and B1 (after irradiation alone), with a maximal level delayed compared to the control. This was in agreement with Muschel et al (1993), who have shown that the expression of cyclin B1 was delayed after irradiation. However, in their study, cyclin A and cyclin B expression responded differently to radiation since cyclin A2 had a peak of expression at the same time as the control.
Various hypotheses can be generated to explain the complete disappearance of cyclins A2 and B1 in our study. A first hypothesis is that the 14-3-3 sigma protein may interfere with the expression of cyclins A2 and B1, since this protein has been recently described as a determinant of p53-regulated G2/M arrest in a human cancer cell line (Hermeking et al, 1997). A second hypothesis is that the cyclins could be degraded by caspases, as suggested by Stack and Newport (1997) in apoptotic cells. Indeed, in our experiment, the cyclin disappearance and G2 arrest were coincident with a massive apoptosis, favouring this hypothesis. A third possible explanation is related to the cell-cycle-regulated ubiquitin-protein ligase complex, known as the cyclosome or anaphase-promoting complex (APC/C). It initiates exit from mitosis by degrading mitotic cyclins. The human fizzy gene (p55cdc) is the homologue of a Drosophila gene necessary for cyclin A and cyclin B degradation (Townsley and Ruderman, 1998). Up to now, very little is known about the APC/C pathway, the role played by the Fizzy/cdc20p protein family and how the machinery is inactivated in response to DNA damage. However, we could hypothesize that p53 plays a role in Fizzy transcriptional activation to prevent cells from exiting mitosis into the interphase of the next cell cycle. Finally, wtp53 may repress transcription of both cyclins A2 and B1. In support of this hypothesis, Yamamoto et al (1994) showed that wtp53 was able to inhibit transcription from the cyclin A2 promoter.
p53 may also interfere with E2F-activated transcription of cyclins A2 and B1. Indeed, recent observations have clearly shown that the transcription factor E2F played a determinant role in cell-cycle regulation and was able to interact with both p53 and MDM2 (Muller, 1995). In addition, Dimri et al (1996) have demonstrated that p21 suppressed the activity of E2F-responsive promoters such as cdk1 and that E2F was a critical target and ultimate effector of p21 action in a retinoblastoma protein (pRB)-independent pathway. Moreover, it has been recently shown that p21, acting as a p53-dependent cdk inhibitor, played a role at the G2/M transition by association with cyclin A-cdk and cyclin B-cdk and promoted a transient arrest in late G2 that could contribute to late cell cycle checkpoint controls (Dulic et al, 1998). A role of p21 in the G2/M transition has also been reported by Niculescu et al (1998). Altogether, these data indicate that the decrease in cyclins A2 and B1 in cells transduced by wtp53 may result from either transcriptional inhibition or protein degradation. Immunoprecipitation of cyclins A2 and B1 after pulse-chase labelling revealed an initial inhibition of cyclins A2 and B1 expression followed by increased degradation (our unpublished data). These preliminary results therefore suggest that both transcriptional repression and protein degradation could play a role in the disappearance of cyclins A2 and B1 during the p53-mediated G2 arrest.
In conclusion, a protracted G2 arrest was observed when exponentially growing SCC97 cells were infected by Ad5CMV-p53 immediately after γ-irradiation. This prolonged arrest was followed by massive apoptotic cell death. The possibility of obtaining massive apoptosis when delivering the wtp53 gene immediately after irradiation suggests that this therapeutic sequence could be relevant for future clinical protocols combining radiotherapy and wtp53 gene transfer.
The G2 arrest was also observed in the absence of irradiation in synchronized cells expressing wtp53, with a dramatic decrease in cyclins A2 and B1.
This study is the first to show a p53-dependent down-regulation of cyclins A2 and B1, thus providing a molecular basis for the wtp53-dependent G2 arrest. However, the molecular mechanisms involved in p53-mediated down-regulation of these mitotic cyclins need further investigation and may also vary from one cellular model to another.
Hybrid Model for Sentiment Analysis of Bitcoin Prices using Deep Learning Algorithm
Bitcoin is a decentralized digital currency that is not controlled by a single authority or government. Bitcoin uses blockchain technology to verify transactions and guarantee user security and privacy. The fluctuating value of bitcoin is influenced by public opinion, because many people use these opinions as a basis for buying or selling bitcoins. Therefore, knowledge to determine the market conditions of bitcoin based on public opinion is very necessary. This study aimed to develop a hybrid model for bitcoin sentiment analysis. The dataset came from comments in the Indodax website chat room; 2,890 comments were collected, then preprocessed, translated to English, and labeled, and a parallel hybrid CNN-LSTM with 100-dimensional GloVe word embeddings was trained. In the experiments conducted, the 90:10 data split trained for 100 epochs gave the best model, with 88% accuracy, 86% precision, 78% recall, and an 81% F1-score, while the classification of comments in the Indodax chat room yielded 64.22% neutral, 21.14% positive, and 14.63% negative comments. Based on these results, using a parallel hybrid model provides high accuracy in classifying text, and positive comments outnumbered negative ones.
INTRODUCTION
In this increasingly modern era, digital currency is in great demand because it has the potential to increase significantly in value, making it an attractive investment alternative for some people. Digital currencies are an alternative form of liquidity with tremendous differences in ownership, transaction, and production issues compared with traditional monetary assets [1]. Some examples of digital currencies are Bitcoin, Ethereum, and others. Bitcoin is a cryptocurrency, a digital currency based on online payment transactions. Bitcoin is mainly used in transactions on the internet without intermediaries, that is, without using bank services [2]. The bitcoin exchange rate is very volatile, with unreasonable price increases, so it is vulnerable to the risk of bubbles that can potentially harm the public [3]. One of the factors that can affect the price of bitcoin is the emergence of negative or positive public opinion. The circulation of these opinions can affect the level of public trust in bitcoin. For this reason, a way is needed to determine market conditions based on the developing opinions, namely text mining.
Text mining is the effort of searching or mining text, where the data sources are usually documents, to find words that can represent the documents' contents. So that the connectivity between documents can be analyzed, the initial stage is to convert the file type, originally PDF, into text, after which filtering is carried out [4]. Text mining aims to find valuable information hidden in structured and unstructured information sources [5]. Retrieval of information from text (text mining) can include, among others, text or document categorization and sentiment analysis [6]. The technique used in this text mining study is sentiment analysis. Sentiment analysis is carried out to extract information from the data that has been collected, including the quality and quantity of positive and negative sentiments and the trends that appear in opinions and comments related to Bitcoin. This information can then be used to predict Bitcoin price movements, both in the short and long term. Sentiment analysis is a field of study that analyzes opinions, sentiments, evaluations, judgments, attitudes, and emotions of a person towards a product, service, organization, individual, problem, event, or topic [7]. The main task of sentiment analysis is to classify the polarity or features of text contained in sentences or documents and determine the opinions expressed in those sentences or documents [8]. By using sentiment analysis, the polarity of existing opinions can be collected and used to gauge the public mood, that is, a negative or positive picture of netizens' feelings. Sentiment analysis can be performed using deep learning algorithms such as convolutional neural networks, long short-term memory, recurrent neural networks, and gated recurrent units.
CNN is a combination of artificial neural networks and deep learning methods. CNN consists of one or more convolutional layers, often with a subsampling layer, followed by one or more fully connected layers as in a standard neural network [9]. CNN has advantages compared to other methods: the computation to train the model is done before testing, so when testing is carried out it does not require repeated training, and the model can be used anywhere. However, the CNN method has a weakness: the more data used in the training process, the longer the training takes [10]. LSTM is an evolution of the RNN architecture, first introduced by Hochreiter & Schmidhuber in 1997. A robust type of neural network designed to handle sequence dependence is called a recurrent neural network [11]. In addition, LSTM has the advantage of handling the vanishing gradient problem common in processing relatively long sequences [12]. Based on the weaknesses of CNN and the advantages of LSTM, this research combines the two algorithms to get better model performance.
Bitcoin price sentiment analysis has been done before. Research conducted by [13] used a machine learning model with the K-Nearest Neighbors algorithm for sentiment analysis, with data obtained from social media, namely Facebook; this study obtained an accuracy of 62%. A second study, conducted by [14], used a machine learning model with the Naïve Bayes algorithm for sentiment analysis, with data obtained from Twitter; this study obtained an accuracy of 71.98%. Recent research by [15] compared the Naïve Bayes and Support Vector Machine algorithms for sentiment analysis, with data obtained from Twitter; this study obtained the best accuracy with the Support Vector Machine algorithm, at 71.30%. The previous research on bitcoin price sentiment analysis was done in 2021, focused on social media, and reached accuracies below 80%. However, since then, many things have changed, such as changes in transaction volumes and new regulatory policies that have had an impact on investor behavior. Therefore, this study updates the data and the sentiment analysis methods used to account for new changes in the cryptocurrency industry. The novelty of this research is that the data come from Indodax, a digital currency exchange in Indonesia that provides a chat room service, and a more sophisticated sentiment analysis method, namely deep learning, is used to improve the accuracy of sentiment analysis. By updating the methods and data sources used, this research is expected to provide a better understanding of investor sentiment regarding bitcoin prices, so that better decisions can be made when conducting bitcoin transactions.
RESEARCH METHOD
Figure 1 shows the steps to be followed. Based on Figure 1, the process begins with collecting data from the indodax.com website with the help of an instant data scraper; preprocessing is then carried out, and the text is translated into English so that it can be labeled with the help of the VADER lexicon. At the word embedding stage, the text is converted into vectors using GloVe and then processed by the parallel hybrid CNN-LSTM. Finally, the performance of the model is measured using the confusion matrix.
Data Collection
The process begins with collecting data on the Indodax.com website. The data collection process is presented in Figure 2. Based on Figure 2, data collection used an instant data scraper by taking comments from the chat room column, and then the data is converted into excel format. The results of data collection are presented in Table 1. Table 1 presents data collected on the Indodax.com site, which was taken from June 26 -July 27, 2022. There were 2890 data samples collected, with two variables: usernames and comments.
Data Preprocessing
Data preprocessing is the stage carried out before conducting data analysis or modeling. The purpose of data preprocessing is to prepare the data so that it fits the needs and can be processed by the algorithm that will be used. Several stages are carried out in the preprocessing (a minimal sketch of these steps follows the list):
a. Data cleaning is the process of identifying and addressing problems with data, such as missing, duplicate, or invalid data.
b. Case folding is the process of changing uppercase to lowercase or vice versa; this is typically used in the text search process to ensure that the search results are not affected by letter case.
c. Tokenizing is used to break down the text into parts that are easier to process.
d. Filtering is a data cleaning process that is carried out by filtering out data that is not needed.
e. Stemming is the process of changing words with the same root into basic or original words.
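The snippet below is a minimal sketch of these five steps in Python. The paper does not name the libraries it used, so the regular expression, the tiny stop-word list, and the Sastrawi Indonesian stemmer are illustrative assumptions.

```python
# Hedged sketch of cleaning, case folding, tokenizing, filtering and stemming.
import re
from Sastrawi.Stemmer.StemmerFactory import StemmerFactory  # Indonesian stemmer (assumed choice)

stemmer = StemmerFactory().create_stemmer()
stopwords = {"yang", "di", "ke", "dan"}  # tiny illustrative stop-word list

def preprocess(comment):
    comment = re.sub(r"http\S+|[^a-zA-Z\s]", " ", comment)   # cleaning: drop URLs, symbols, digits
    comment = comment.lower()                                 # case folding
    tokens = comment.split()                                  # tokenizing
    tokens = [t for t in tokens if t not in stopwords]        # filtering
    return [stemmer.stem(t) for t in tokens]                  # stemming

print(preprocess("BTC naik lagi!! cek https://indodax.com"))
```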
Translating to English
This process changes the text from Indonesian to English. Changing the language to English is intended so that the data can be labeled automatically in the labeling process using the VADER lexicon. Translation is done automatically with the help of Google Translate.
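As one possible way to script this step, the sketch below uses the deep-translator package as a wrapper around Google Translate; the paper only states that Google Translate was used, so this particular library is an assumption.

```python
# Hedged sketch: translating an Indonesian comment to English before labeling.
from deep_translator import GoogleTranslator

comment_id = "btc bakal naik lagi minggu ini"
comment_en = GoogleTranslator(source="auto", target="en").translate(comment_id)
print(comment_en)  # expected to read roughly "btc will go up again this week"
```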
Labelling
Labeling is the process of adding labels or tags to text to provide additional information or categories about the text. At this stage, labeling is done automatically using the VADER lexicon.
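A minimal sketch of this automatic labeling is shown below. It maps VADER's compound score to the three classes used in the paper; the +/-0.05 cut-off is the threshold commonly recommended for VADER and is assumed here, since the paper does not state its exact threshold.

```python
# Hedged sketch: three-class labeling from the VADER compound score.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def label(text_en):
    score = analyzer.polarity_scores(text_en)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

print(label("btc looks bullish, time to buy"))   # positive
print(label("btc crashed again, huge losses"))   # negative
```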
Splitting Data
Splitting data refers to dividing a dataset into two or more subsets; this is typically done to evaluate the performance of a machine-learning model or to use the subsets for different purposes. This process uses a train-test split.
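The 90:10 split reported as the best configuration can be expressed as in the sketch below; the placeholder arrays stand in for the padded sequences and numeric labels produced by the earlier steps.

```python
# Hedged sketch of the 90:10 train/test split with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)                   # placeholder features
y = np.array([0, 1, 2, 1, 0, 2, 1, 1, 0, 2])       # placeholder labels (3 classes)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=42)
print(X_train.shape, X_test.shape)                 # (9, 2) (1, 2)
```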
Word Embedding
This process is carried out to give weight (value) to each word in a document or text. This weight is given by considering the frequency of occurrence of the word in the text, the importance of the word in the context of the text, and its relevance to the topic.
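A minimal sketch of this step with the 100-dimensional GloVe vectors mentioned in the abstract is given below. The toy comments, the simple word index, and the GloVe file name (glove.6B.100d.txt is the usual distribution file) are assumptions for illustration.

```python
# Hedged sketch: building an embedding matrix from pre-trained GloVe vectors.
import numpy as np

comments = [["btc", "will", "pump"], ["btc", "is", "dumping"]]  # tokenized toy data

# build a simple word index from the corpus (index 0 reserved for padding)
word_index = {w: i + 1 for i, w in enumerate(sorted({w for c in comments for w in c}))}

# load the pre-trained 100-dimensional GloVe vectors
embeddings_index = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        embeddings_index[parts[0]] = np.asarray(parts[1:], dtype="float32")

# rows of this matrix initialise the Embedding layer; unknown words stay zero
embedding_matrix = np.zeros((len(word_index) + 1, 100))
for word, i in word_index.items():
    if word in embeddings_index:
        embedding_matrix[i] = embeddings_index[word]
print(embedding_matrix.shape)
```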
Hybrid Model
The algorithms used to build the hybrid model are CNN and LSTM. The architecture of each algorithm can be seen in the figures below. 1. CNN uses a convolution process to filter and extract relevant features from the data and then uses classification to classify the data. CNN is very effective in image processing but can also be used for text by converting the text into a matrix. CNN also has a high degree of generalization, meaning it can handle new data well without retraining. The CNN architecture is presented in Figure 3. The convolutional layer functions to convert the input into smaller and more structured features; it performs convolution between the input and the filter, which is used as a representation of the desired features. The convolutional layer is described by formulas (1), (2), (3), and (4).
(c) Max Pooling. The pooling layer reduces the dimensions of the input and simplifies the existing information; this is done by selecting the maximum or average value from several related points in the input, using formula (3) [18], c = max(c_1, ..., c_n), where c is the maximum value of the feature map. (d) Fully Connected Layer. Fully connected layers are commonly used in deep learning architectures. They are usually followed by one or more non-linear activation functions such as ReLU (Rectified Linear Unit) or sigmoid. They are useful for learning complex relationships between the input and the output and making predictions based on those relationships.
The softmax layer is used to classify an input. This layer converts the previous layer's output into probabilities, with each output being the probability of the corresponding class, as expressed by formula (4). In formula (4), y is the output; the output takes the value 1 if it is greater than the threshold and 0 otherwise.
2. Long Short-Term Memory. LSTM can remember information from far in the past, so it can understand the wider context of the data. LSTM is also able to control the information entering and leaving its long-term memory, so it can process data more accurately. The LSTM architecture is presented in Figure 4 [16]. The input gate is in charge of controlling the information that goes into the LSTM unit, the forget gate is in charge of controlling the information that is discarded from the LSTM unit's cell state, while the output gate is in charge of controlling the information that is output by the LSTM unit.
CNN-LSTM
The hybrid CNN-LSTM combines the two algorithms. CNN is used to obtain features from the input data, while LSTM is used to process the input data sequentially. The hybrid CNN-LSTM can deal with signal processing problems that have spatial and temporal structure, such as language processing, image processing, and video processing. This model has the ability to retain the required information over a long period of time and ignore irrelevant information, improving signal processing performance. The CNN-LSTM architecture is presented in Figure 5. The hybrid architecture consists of two main parts, namely CNN and LSTM. The CNN consists of several convolution layers which are used to extract spatial features from the data. The results from the CNN are forwarded to the LSTM layer, which is a recurrent network. This LSTM layer processes sequential data by storing information from the previous timestep and using it to predict the next timestep. By combining the advantages of CNN and LSTM, the CNN-LSTM hybrid architecture is able to handle sequential data with a spatial structure better than using only one type of neural network. The layers in the CNN-LSTM hybrid use the following parameters.
Convolutional layers
The number of parameters in this layer is determined by the chosen filter shape and is calculated with Formula (5):
conv = (filter width × filter height × number of filters in the previous layer + 1) × number of filters (5)
The filter width and filter height give the dimensions of the filter; their product is multiplied by the number of filters in the previous layer, 1 is added (accounting for the bias term), and the result is multiplied by the number of filters used in this layer.
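As an illustration of Formula (5), the following sketch computes the parameter count of a 1-D convolutional layer; the helper name and the example values (a 3×1 filter, 100 input channels, 32 filters) are ours and simply mirror the configuration reported later in the paper.

```python
def conv_param_count(filter_width, filter_height, prev_filters, num_filters):
    # Formula (5): (w_f * h_f * filters_in_previous_layer + 1) * num_filters
    # The "+ 1" accounts for the bias term of each filter.
    return (filter_width * filter_height * prev_filters + 1) * num_filters

print(conv_param_count(3, 1, 100, 32))  # 9632, matches the first Conv1D layer
print(conv_param_count(3, 1, 32, 32))   # 3104, matches the second Conv1D layer
```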
LSTM layer
The number of parameters in this layer is determined by the input shape and the number of LSTM units and is calculated with Formula (6):
lstm = 4 × ((input dimension + 1 + number of units) × number of units) (6)
where the factor 4 corresponds to the four weight sets (input, forget, and output gates and the cell candidate) of the LSTM unit.
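Formula (6) is not reproduced in the extracted text; the sketch below uses the standard LSTM parameter count, which is consistent with the worked example given later in the paper (4 × ((32 + 1 + 100) × 100) = 53,200). The function name is illustrative.

```python
def lstm_param_count(input_dim, units):
    # Four weight sets (input, forget, output gates and cell candidate),
    # each of size (input_dim + units + 1) * units; the +1 is the bias.
    return 4 * ((input_dim + 1 + units) * units)

print(lstm_param_count(32, 100))  # 53200
```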
Model Evaluation
Model evaluation is the process of assessing the ability of a model to solve a problem or manage data. This process is carried out using a confusion matrix, presented in Table 2. False Positive Neutral (FPNet) denotes cases where the model should predict positive data but instead predicts neutral data; False Positive Negative (FPNeg), cases where the model should predict positive data but instead predicts negative data; False Negative Neutral (FNegNet), cases where the model should predict negative data but instead predicts neutral data; False Negative Positive (FNegPos), cases where the model should predict negative data but instead predicts positive data; False Neutral Positive (FNetPos), cases where the model should predict neutral data but instead predicts positive data; and False Neutral Negative (FNetNeg), cases where the model should predict neutral data but instead predicts negative data.
Based on Table 2, we calculate accuracy, precision, recall, and f1-score using the following equations. 1. Accuracy is the system's level of correctness in classifying the data. The accuracy value is calculated by dividing the number of correct predictions by the total number of predictions, as in Formula (7):
accuracy = number of correct predictions / total number of predictions (7)
2. Precision is the ratio of the number of correct predictions for a class to the total number of predictions of that class, given by Formulas (8), (9), and (10):
Positive precision = TP / (TP + FNegPos + FNetPos) (8)
Negative precision = TNeg / (TNeg + FPosNeg + FNetNeg) (9)
Neutral precision = TNet / (TNet + FPosNet + FNegNet) (10)
3. Recall is another indicator used in evaluating the classification model; it measures how well the model retrieves the correct data out of all the data that is actually correct, given by Formulas (11), (12), and (13):
Positive recall = TP / (TP + FPosNeg + FPosNet) (11)
Negative recall = TNeg / (TNeg + FNegPos + FNegNet) (12)
Neutral recall = TNet / (TNet + FNetPos + FNetNeg) (13)
4. F1-Score is the harmonic mean of precision and recall (sensitivity), given by Formula (14):
F1 = 2 × (precision × recall) / (precision + recall) (14)
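The per-class metrics in Formulas (7)-(14) can be sketched directly from a 3×3 confusion matrix; the matrix values below are made up for illustration and do not come from the paper.

```python
import numpy as np

# Rows = actual class, columns = predicted class; order: positive, negative, neutral.
cm = np.array([[50,  5, 10],   # illustrative counts only
               [ 4, 30,  6],
               [ 8,  7, 80]])

for i, label in enumerate(["positive", "negative", "neutral"]):
    tp = cm[i, i]
    precision = tp / cm[:, i].sum()                       # Formulas (8)-(10)
    recall = tp / cm[i, :].sum()                          # Formulas (11)-(13)
    f1 = 2 * precision * recall / (precision + recall)    # Formula (14)
    print(f"{label}: precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

accuracy = np.trace(cm) / cm.sum()                        # Formula (7)
print(f"accuracy={accuracy:.2f}")
```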
Model Validation
Model validation is the process of evaluating the effectiveness of the model that has been created. If the results produced by the model match the actual data, the model is valid; if they do not, the model must be repaired or regenerated. Model validation is important to ensure that the model can be used correctly and effectively to solve the problem at hand.
RESULT AND ANALYSIS 3.1. Data Collection
The dataset was collected with a scraping technique using the Instant Data Scraper extension; opinions were taken from the Indodax.com website chat room, and only messages containing 'BTC' were kept. The data collection period spanned one month, from June 26 to July 27, 2022. After the data was collected, preprocessing was carried out and the data was translated into English so that automatic labeling could be performed with the VADER lexicon. Labels fall into three classes: positive, neutral, and negative. A total of 2,890 comments were collected. The labeled data was divided into two parts, and this division was carried out four times with training:test ratios of 60:40, 70:30, 80:20, and 90:10. The number of comments in each class is presented in Figure 6: 611 comments are positive, 1,856 neutral, and 423 negative.
Data Preprocessing
Data collected with the scraping technique, using the Instant Data Scraper extension, is then cleaned in order to remove empty records and strip the text of symbols and links.
1. Data Cleaning. The result of data cleaning is presented in Table 3; this process ensures that the data to be used is clean, accurate and suited to the needs of the analysis by checking for missing data, removing inappropriate data, and stripping symbols and links. A sketch of this kind of cleaning is given below.
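A minimal sketch of the kind of cleaning described above; the regular expressions and example string are illustrative assumptions, since the paper's exact cleaning rules are not reproduced here.

```python
import re

def clean_text(text):
    """Remove links and non-alphanumeric symbols, then collapse whitespace."""
    text = re.sub(r"http\S+|www\.\S+", " ", text)   # strip links
    text = re.sub(r"[^0-9A-Za-z\s]", " ", text)     # strip symbols and punctuation
    return re.sub(r"\s+", " ", text).strip()

print(clean_text("BTC to the moon!!! cek https://example.com :)"))  # -> "BTC to the moon cek"
```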
Tokenizing
The tokenizing result is presented in Table 5; this process splits the text into individual tokens (words), which makes grouping and comparing words easier and supports the subsequent text analysis.
Filtering
The filtering result is presented in Table 6; this process aims to eliminate irrelevant or useless data for the analysis to be carried out.
Stemming
The stemming result is presented in Table 7; this process reduces the words in a text to their base forms, which in turn reduces the dimensionality of the data.
Translating to English
The next step is to translate the text from Bahasa Indonesia to English with the help of Google Translate. The results of the Bahasa-to-English translation are presented in Table 8.
Labeling
Data that has been converted to English is then automatically labeled using the VADER lexicon under the following conditions: if the compound value is greater than or equal to 0.05, the sentiment is positive; if the compound value is less than or equal to -0.05, the sentiment is negative; otherwise, the sentiment is neutral.
The sentiment assigned to each text is presented in Table 9.
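A minimal sketch of this automatic labeling step, assuming the standard VADER compound thresholds of ±0.05 described above; the function name and the example texts are illustrative and not taken from the paper.

```python
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

def label_sentiment(text):
    """Assign positive/negative/neutral based on the VADER compound score."""
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(label_sentiment("BTC is going to the moon"))       # likely positive
print(label_sentiment("BTC keeps crashing, terrible"))   # likely negative
```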
Splitting Data
The data is split four times, with training:test ratios of 60:40, 70:30, 80:20, and 90:10; the resulting training and test set sizes are presented in Table 10.
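The four splits can be produced, for example, with scikit-learn as sketched below; the placeholder data, the stratification, and the random seed are our assumptions, not details taken from the paper.

```python
from sklearn.model_selection import train_test_split

# Placeholder data; in the study these would be the 2,890 labeled comments.
texts = ["btc up", "btc down", "btc flat"] * 100
labels = ["positive", "negative", "neutral"] * 100

for test_size in (0.40, 0.30, 0.20, 0.10):   # 60:40, 70:30, 80:20, 90:10
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=test_size, stratify=labels, random_state=42)
    ratio = f"{int(round((1 - test_size) * 100))}:{int(round(test_size * 100))}"
    print(ratio, "->", len(X_train), "train /", len(X_test), "test")
```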
Hybrid Model
The hybrid CNN-LSTM uses the parallel model presented in Figure 7. Based on Figure 7, the hybrid uses filters with dimensions of 3 × 1; the first and second convolutional layers each have 32 filters, and the LSTM layer has 100 units. To test the hybrid model that was created, its parameters need to be checked; the parameters used in this study are presented in Figure 8. From Figure 8 it can be concluded that this study uses the two algorithms, CNN and LSTM. An explanation of the layers of each algorithm follows.
conv1d (Conv1D) is the first convolution layer of the CNN. Its parameter count is obtained by multiplying the filter width and height (3 × 1) by the number of filters in the previous layer (100), adding 1, and multiplying by the number of filters in this layer (32), giving 9,632:
parameters_conv1d = (3 × 1 × 100 + 1) × 32 = 9,632
conv1d_1 (Conv1D) is the second convolution layer of the CNN. Its parameter count is obtained by multiplying the filter width and height (3 × 1) by the number of filters in the previous layer (32), adding 1, and multiplying by the number of filters in this layer (32), giving 3,104:
parameters_conv1d_1 = (3 × 1 × 32 + 1) × 32 = 3,104
lstm is the layer of the LSTM algorithm. Its parameter count is obtained by adding the number of filters in the previous layer (32), 1, and the number of units in this layer (100), multiplying by the number of units (100), and multiplying by 4, the number of weight sets in an LSTM unit, giving 53,200:
parameters_lstm = 4 × ((32 + 1 + 100) × 100) = 53,200
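The following Keras sketch reproduces layers with the parameter counts described above (two Conv1D layers with 32 filters of size 3 and an LSTM with 100 units on top of a 100-dimensional embedding, e.g. GloVe). It is a stacked arrangement consistent with those counts, not a reproduction of the exact parallel layout of Figure 7; the vocabulary size, sequence length, activations, pooling placement, and output layer are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000   # assumption: not stated in the excerpt
SEQ_LEN = 50         # assumption: not stated in the excerpt
NUM_CLASSES = 3      # positive, negative, neutral

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 100),                        # 100-dim embedding (e.g. GloVe-initialized)
    layers.Conv1D(32, 3, padding="same", activation="relu"),  # (3*1*100 + 1)*32 = 9,632 params
    layers.Conv1D(32, 3, padding="same", activation="relu"),  # (3*1*32 + 1)*32 = 3,104 params
    layers.MaxPooling1D(pool_size=2),                         # no trainable params; placement assumed
    layers.LSTM(100),                                         # 4*((32 + 1 + 100)*100) = 53,200 params
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```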
Model Evaluation
Model evaluation is a process to evaluate how well the built model is able to predict or complete the specified task. Common metrics include accuracy, precision, recall, and F1-score; choosing the right metric helps evaluate a model's performance and decide which model is most suitable in a given situation. Data splitting was carried out four times, and the results were compared to find the best model performance. The comparison of the data splits, based on the confusion matrix, is presented in Table 11. Based on Table 11, the 90:10 data split with 100 epochs gives the best accuracy value. Testing with 100 epochs on the 90:10 split is presented in Table 12: with 100 epochs, the model reaches an accuracy of 88%, the highest value found for the 90:10 split.
Model accuracy and model loss are presented in Figure 9. Based on Figure 9, the training accuracy is higher than the validation accuracy, indicating that the model tends to overfit. Overfitting occurs when the model is too complex and too focused on the training data, so that it is unable to generalize well to new data not seen during training. As a result, even though the training accuracy is high, the model's performance on validation data is not as good as expected and can even be much lower; in this case, the model may have "memorized" the training data and be unable to identify the more general patterns behind it. This can be addressed in various ways, such as using regularization techniques, expanding the dataset, or otherwise reducing model complexity. The confusion matrix for the 90:10 data split is presented in Figure 10. Based on Figure 10, the model obtains an accuracy of 88%, precision of 86%, recall of 78%, and f1-score of 81%.
The performance of the CNN structure combined with the LSTM architecture in this study was compared with several previous studies on 3-class multi-label text classification. This study aimed to combine the CNN and LSTM algorithms as a multi-label text classifier with the additional feature of 100-dimensional GloVe word embeddings. The performance evaluation of training and testing showed good results, indicating that the hybrid LSTM and CNN model is a good method for sequential text classification. Comparison results with previous research are shown in Table 13: [18] 69.97%; SVM [19] 82%; RNN [20] 64.48%; CNN [21] 86%; LSTM [22] 81%; CNN & LSTM + GloVe (this study) 88%. Table 13 shows that the hybrid LSTM and CNN model with the GloVe word embedding feature obtains optimal results for text classification, because CNN can perform feature extraction at the word level and create a vector representation of each word. However, CNN cannot pay attention to the order of words in a sentence, whereas LSTM can remember word order and understand the context and relationships between words. Therefore, by combining these two models, we obtain features from the CNN while maintaining the LSTM's ability to pay attention to word order in sentences.
Model Validation
The ROC curve is used to evaluate model performance by comparing the model's ability to distinguish the positive and negative classes. Table 14 explains the categories of model performance values. Based on the categories in Table 14, this study obtained a model validation value of 92.77%, presented in Figure 11. According to Figure 11, this value falls in the 0.9-1.0 range, which places the model in the excellent category.
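Assuming the 92.77% validation value corresponds to a multi-class area under the ROC curve, such a value can be computed with scikit-learn roughly as follows; the one-vs-rest macro averaging choice and the toy labels and probabilities are our assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: integer class labels (0=positive, 1=negative, 2=neutral); y_prob: softmax outputs.
y_true = np.array([0, 1, 2, 0, 2, 1, 0, 2])            # illustrative only
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.1, 0.7],
                   [0.6, 0.3, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.2, 0.6, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.1, 0.1, 0.8]])

auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
print(f"macro one-vs-rest AUC: {auc:.4f}")
```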
CONCLUSION
This study shows that the Convolutional Neural Network and Long Short-Term Memory algorithms can be combined, in contrast with previous studies that used only one deep learning algorithm. The best final result was obtained with the 90:10 data split and 100 epochs, with 88% accuracy, 86% precision, 78% recall, and 81% f1-score. The study contributes a parallel hybrid model applied to data taken directly from investor opinion on the Indonesian digital currency exchange Indodax, on which a sentiment analysis of bitcoin prices was conducted so that market conditions can be assessed. The results are dominated by neutral comments, but positive comments outnumber negative ones; accordingly, investors are advised to neither sell nor buy bitcoin for the moment. Furthermore, because market conditions change rapidly, the researchers suggest that future work update the data so that decisions can be made for specific situations.
DECLARATIONS
AUTHOR CONTRIBUTION The first and second authors carried out this work, with the main contribution from the first author. The third and fourth authors assisted in the data analysis.
FUNDING STATEMENT This research is independently funded, which allows the researchers the freedom to carry out the research and pursue more specific goals without depending on sponsors or other parties.
COMPETING INTEREST This research is based on the assumption that no conflict of interest can affect the results and conclusions of the research. | 2023-07-11T00:36:34.871Z | 2023-03-24T00:00:00.000 | {
"year": 2023,
"sha1": "9a6c3f29de5f68cd7c420a602cd93c240790c2e5",
"oa_license": "CCBYSA",
"oa_url": "https://journal.universitasbumigora.ac.id/index.php/matrik/article/download/2640/1273",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "35177b64fa9b4fadc5bba59897a5806ee1ea93e5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
18436857 | pes2o/s2orc | v3-fos-license | Analysis of machine perfusion benefits in kidney grafts: a preclinical study
Background Machine perfusion (MP) has potential benefits for marginal organs such as those from donors deceased after cardiac death (DCD). However, there is still no consensus on MP benefits. We aimed to determine the benefits of machine perfusion for kidney grafts. Methods We evaluated kidney grafts preserved in ViaspanUW or KPS solutions either by CS or MP, in a DCD pig model (60 min warm ischemia + 24 h hypothermic preservation). Endpoints were: function recovery, quality of function during follow up (3 months), inflammation, fibrosis, and animal survival. Results ViaspanUW-CS animals did not recover function, while in the other groups early follow up showed similar values for kidney function. Alanine aminopeptidase and β-NAG activities in the urine were higher in CS than in MP groups. Oxidative stress was lower in KPS-MP animals. Histology was improved by MP over CS. Survival was 0% in ViaspanUW-CS and 60% in the other groups. Chronic inflammation, epithelial-to-mesenchymal transition and fibrosis were lowest in KPS-MP, followed by KPS-CS and ViaspanUW-MP. Conclusions With ViaspanUW, the effects of MP are obvious, as only MP kidneys recovered function and allowed survival. With KPS, the benefits of MP over CS are not directly obvious in the early follow up period, and only histological analysis, urinary tubular enzymes and red/ox status were discriminating. Chronic follow-up was more conclusive, with a clear superiority of MP over CS, independently of the solution used. KPS was proven superior to ViaspanUW with each preservation method in terms of function and outcome. In our pre-clinical animal model of DCD transplantation, MP offers critical benefits.
Introduction
Static cold storage (CS) using the University of Wisconsin solution (Viaspan ® ) (UW) is the gold standard of preservation of kidneys obtained from deceased donors [1]. Its introduction in the late nineteen eighties has reduced the incidence of delayed graft function (DGF) and improved graft survival of kidneys obtained from donations after brain death [2]. Nevertheless, the growing use of expanded criteria donors (ECD), donors with acute renal failure [3,4] and deceased after cardiac death donors (DCD) has increased the DGF incidence of graft preserved by UW [5] or by CS in general [6,7].
Use of DCD grafts in the clinic is limited by a high rate of primary non-function and DGF [7][8][9], in correlation with the length of the warm ischemia period [6]. However, as they represent a significant increase in the pool of donors (30%), which is of particular importance in the current shortage (only one out of three patients on the waiting list receives a kidney), finding the optimal way to preserve these organs and improve their quality has become a first-order issue.
Hypothermic machine perfusion (MP) preservation is increasingly being used as an alternative preservation method to CS. Studies have reported a reduction of DGF after MP compared to CS [10][11][12][13][14][15][16][17][18], however the solutions used were different, and some studies lacked proper randomization. These early clinical data were supported by experimental studies, conducted in large animal models of DCD using different preservation solutions, reporting improvements of kidney function after MP [19][20][21][22]. Nevertheless, not all animal studies support the superiority of MP over CS in DCD models. Indeed, MP of pig DCD kidneys using a combination of Belzer machine perfusion solution (MPS) and Viaspan ® did not reveal any superior effect to ViaspanUW-CS [20] and when the same preservation solution was used in both the CS-and MP-groups, no significant difference between MP and CS preservation could be observed in dogs [21] or pigs [23] for WI times of up to 60 min. A better performance of ViaspanUW-MP was, however, reported for longer WI times in dogs [21]. These experimental data question the necessity of MP for DCD kidneys. Clinical evidence on the use of MP and its benefits can be conflicting [24][25][26], however recent clinical trials show small but significant benefits of MP over CS [27] in terms of DGF rate and one year survival of grafts from all categories of donors and further studies demonstrated some benefits from MP in terms of DGF and function in a DCD subset [28].
Hence, clinical evidence for the superiority of MP over CS in DCD kidney transplantation is accumulating and interest in MP is still growing [29][30][31][32] as new machines [33,34] and preservation concepts [35] are being developed. Nevertheless there is also a need for preclinical studies in a standardized transplantation model to investigate the benefits of MP on both acute and chronic kidney injury.
The present study uses a recently developed porcine model mimicking conditions of DCD class I and II [36,37], by 60 minutes of WI before organ collection and storage. We propose a four-way comparison using preservation with Viaspan ® (ViaspanUW), the gold standard in CS, either by CS or MP, and preservation with Kidney preservation solution-1 ® (KPS), recommended for MP, either by CS or MP.
We will measure function recovery, quality of function, chronic immune response development, chronic fibrosis development and animal survival. This will allow us to determine a 'machine effect' independently of the solution used, as well as measure benefits of clinical MP (KPS-MP) versus clinical CS (ViaspanUW-CS).
Surgical procedures and Experimental groups
The DCD model was performed in large white male pigs (INRA, GEPA, Surgères, France) (30-35 kg) according to the guidelines of the French Ministry of Agriculture for the use and care of laboratory animals, as previously described [37]. Briefly, WI was induced by right renal pedicle clamping for 60 min, conditions that induce consistent damage [37]. The right kidney was removed, cold flushed with the same solution used for either MP or CS, and preserved for 24 hours at 4°C either by static storage (CS), or by MP using the Lifeport ® machine (Organ Recovery System, USA) with either ViaspanUW (Viaspan ® , Bristol-Myers Squibb, France) or KPS (KPS-1 ® , Organ Recovery Systems, Brussels). Solution composition is detailed in Table 1. At the end of the preservation period, the kidney was transplanted into the same animal, and the left kidney was removed to reproduce the nephron mass of transplanted patients. Average anastomosis time was 30 ± 5 min and no complications were observed between the 2 surgical procedures. Four groups were studied: 1) ViaspanUW-CS: kidneys preserved in Viaspan ® solution by CS (n = 6); 2) ViaspanUW-MP: kidneys preserved in Viaspan ® by MP (n = 8); 3) KPS-CS: kidneys preserved in KPS-1 ® solution by CS (n = 7); 4) KPS-MP: kidneys preserved in KPS-1 ® solution by MP (n = 7). Results in the experimental groups were compared to a group of normal animals (Control; sham-operated, sex-, age- and weight-matched, n = 7).
Primary non-function (PNF) of the graft was defined as a total absence of urine output for 7 consecutive days after transplantation and since dialysis is not available in our animal facility, animals with PNF were sacrificed.
Organ perfusion parameters
The Lifeport ® kidney transporter operated in pulsatile mode, with a maximum systolic pressure set at 40 mmHg and a frequency of 60 min-1. The initial perfusion pressure was set at 35 mmHg. This setting was corrected hourly, according to the clinical protocol recommended by the "Agence de Biomédecine" (France), based on the organ's perfusion resistance value (mmHg/(mL/min)) displayed in real time on the machine screen, representing the quotient of pressure divided by flow. The perfusion pressure was corrected according to the 3 following criteria of resistance value: 1) less than or equal to 0.3 mmHg/(mL/min), the perfusion pressure setting was decreased at a rate of 5 mmHg/h with a minimal perfusion pressure of 20 mmHg; 2) ranging from 0.3 to 0.6 mmHg/(mL/min), the perfusion pressure setting was maintained at 35 mmHg; 3) equal to or over 0.6 mmHg/(mL/min), the perfusion pressure setting was increased at a rate of 5 mmHg/h with a maximal perfusion pressure of 45 mmHg. Overall mean pressure was 31.5 ± 2.5 mmHg in the KPS-MP group and 33.4 ± 1.5 mmHg in the ViaspanUW-MP group.
Functional parameters
Animals were placed in individual metabolic cages for blood and urine collection. Functional parameters were measured using an automatic analyzer (Modular automatic analyzer, Roche Diagnostic, Meylan, France). Activities of the brush border enzyme alanine aminopeptidase and the lysosomal enzyme N-acetyl-β-D-glucosaminidase (NAG) were determined in urine as previously described [38]; briefly, NAG activity was determined on a Roche Modular P system (Roche Diagnostics, Meylan, France) and AAP activity was measured using a storage method and a colorimetric assay. NAG and AAP activity (U/L) was expressed as a ratio with urinary creatinine (mmol/L) so as to adjust for differences in urinary flow of the sample.
Histopathological studies
Serial ultrasound-guided percutaneous biopsies were performed at day 7 and M1, and larger tissue samples were collected at 3 months, after sacrifice. Samples were either frozen at -80°C or fixed in formalin then embedded in paraffin. All sections were examined and photographed under blind conditions by a pathologist and a nephrologist. A standard procedure was used to estimate the level of tubulointerstitial fibrosis using Picro Sirius red staining, as described previously [39]. ED1+ and CD3+ cell invasion was measured on frozen sections from the graft at 3 months, stained with specific antibodies (SouthernBiotech, USA). Ten high-powered fields (400×) were randomly selected and the number of positive cells determined in a blinded fashion. Immunostaining was performed for Vimentin (Dako, Sweden). The percentage of staining was determined by computerized image analysis in 10 randomly selected fields (×200) of each slide.
Statistical methods
Results are shown as mean ± SEM. For the statistical analysis among groups, we used NCSS software (NCSS LLC, USA) and a one-way ANOVA with the Tukey-Kramer test for multiple comparisons in case of normality (Skewness, Kurtosis and Omnibus tests) and equality of variance (Modified-Levene Equal-Variance Test), and the Kruskal-Wallis Multiple-Comparison Z-Value Test (Dunn's Test) when these conditions were not met. Correlations were evaluated with Pearson and Spearman tests, and a two-way ANOVA was performed to check the influence of preservation technique and solution. Statistical significance was accepted for P < 0.05.
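The study used NCSS for these analyses; an equivalent one-way ANOVA with Tukey-style post-hoc comparisons can be sketched in Python with SciPy and statsmodels as below. The group values are placeholders, not the study's data, and normality and equal variance are assumed for the illustration.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder values for three preservation groups (illustrative only).
groups = {
    "ViaspanUW-MP": np.array([210.0, 198.0, 225.0, 240.0, 205.0]),
    "KPS-CS":       np.array([180.0, 175.0, 190.0, 185.0, 170.0]),
    "KPS-MP":       np.array([150.0, 145.0, 160.0, 155.0, 148.0]),
}

# One-way ANOVA across the groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Tukey-style pairwise comparisons (handles unequal group sizes, i.e. Tukey-Kramer).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```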
Organ characteristics
Kidney weights before preservation did not differ between the experimental groups (166.9 ± 7.4 g). After preservation, kidneys from the ViaspanUW-CS group had lost the most weight (115.0 ± 7.7 g), while KPS-CS kidneys did not change significantly (155.3 ± 13.8 g, p < 0.05 to UW-CS). ViaspanUW-MP organs seemed to gain weight (191.8 ± 16.3 g), while KPS-MP kidneys had significantly gained weight (208.6 ± 13.2 g, p < 0.05 to all). Organ resistance was significantly higher and flow rate significantly lower at the start of perfusion for ViaspanUW-MP grafts compared to KPS-MP organs (p < 0.05, Figure 1A and 1B).
Function recovery (Figure 1C to 1G). Animals from the ViaspanUW-CS group never recovered diuresis; their serum creatinine increased steadily until day 7, when the obvious lack of function recovery and generally poor state of the animal led us to euthanize them. ViaspanUW-MP and KPS-CS groups recovered diuresis by day 4 post reperfusion; functional recovery was similar except for a lower creatinine peak at day 5 (p < 0.05) and a higher osmolarity ratio from D5 to D11 for KPS-CS (p < 0.05). KPS-MP demonstrated better function recovery, with diuresis resuming at D3, lower serum creatinine levels and an osmolarity ratio similar to ViaspanUW-MP. MP groups also demonstrated controlled glycosuria by D11 (p < 0.05 versus KPS-CS), while glycemia was normal in all groups (data not shown).
Urinary enzymes (Figure 2 A and 2B)
Measurement of urinary levels of the proximal tubule enzymes alanine aminopeptidase and N-acetyl-β-D-glucosaminidase (β-NAG) showed early high levels followed by a progressive reduction with time, a sign of tubular damage and slow recovery of structure. KPS-MP grafts showed the fastest and most effective recovery, with ViaspanUW-MP and KPS-CS showing consistently higher levels (p < 0.05). Measurement in peripheral blood of the ratio of oxidized glutathione over total glutathione, reflecting the oxidative stress state of the animal, showed the lowest levels at all time points for the KPS-MP group (p < 0.05). The ViaspanUW-MP group showed equal or lower levels than KPS-CS. ViaspanUW-CS showed the highest levels for the duration of the follow up. Statistical analysis showed that use of MP was correlated with lower oxidized glutathione levels at day 3 (R² = 0.76, p < 0.0001), and two-way ANOVA showed an influence of solution (p < 0.05) and perfusion technique (p < 0.001) while no additive influence was determined. At day 7, MP was also correlated with lower levels (R² = 0.54, p < 0.01), and two-way ANOVA showed an additive effect of solution and perfusion technique (p < 0.01). Use of KPS was not correlated with lower levels at day 3, while it was slightly correlated with levels at day 7 (R² = 0.41, p < 0.01). Tissue histology (Figure 3, Table 2). Evaluation of tissue histology at D7 showed intense tissue damage and necrosis for ViaspanUW-CS grafts.
There was significantly reduced damage in the ViaspanUW-MP group (p < 0.05) compared to ViaspanUW-CS. KPS grafts tended to show a lower amount of damage compared to ViaspanUW kidneys. At D14 and M1, ViaspanUW-MP consistently showed more tissue damage (p < 0.05 at M1) and tubulo-interstitial invasion compared to KPS-CS, and a further reduction was observed in KPS-MP kidneys (p < 0.05 to both at M1).
Immune response development (Figure 4). Grafts lost during follow up showed a high rate of inflammation and tubulitis. Serum creatinine was highest in the ViaspanUW-MP group, followed by KPS-CS (p < 0.05) and KPS-MP (p < 0.05 to both). This order was also found when evaluating fibrosis development: ViaspanUW-MP kidneys showed more than 30% fibrosis, while KPS-CS neared 20% (p < 0.05 to ViaspanUW-MP). Fibrosis development in KPS-MP was negligible and did not differ from control. Here also, use of KPS correlated with lower fibrosis (R² = 0.65, p < 0.01). Within the KPS groups, MP was correlated with lower fibrosis (R² = 0.87, p < 0.01).
Discussion
Herein, we demonstrate in a preclinical study using a highly reproducible swine model of transplantation the benefits of MP over CS, particularly in regards to chronic outcome.
We performed static preservation with both ViaspanUW and KPS, demonstrating the superiority of KPS in terms of function recovery, histology at D7 and survival. Comparison of these two groups offers a perspective on studies generally performed on machine perfusion: when two different solutions are used for static and machine preservation, the observed effect is not solely due to perfusion but also depends significantly on the solution used. Our 4 groups/2 variables approach circumvents this bias, highlighting the importance of large animal studies to assess the benefits of novel therapies, as indeed such a setting is impossible in the clinic.
Weight variations of kidney grafts are classically observed during preservation. Our observation of weight loss for CS and weight gain for MP are consistent with a similar experimental design in pigs [23]. In addition, increases in kidney weight after MP have been previously reported to have no significant impact on the graft outcome [40].
Comparing ViaspanUW-CS to ViaspanUW-MP allows us to determine the benefits of machine perfusion with the current high-K+ gold standard in static preservation. Although ViaspanUW is not used for MP in clinical settings, using an identical preservation solution focuses the analysis solely on the effect of perfusion. Early follow up with classical tools such as serum creatinine does not allow us to determine differences between the two methods. In our setting, pigs were not dialyzed, thus analysis of diuresis was pertinent, but this would not be the case in the clinic. Interestingly, measurement of peripheral blood glutathione red/ox status provided discriminating information between the groups, which was enhanced by analysis of histology at day 7. Use of UW demonstrates in the clearest fashion the benefits of MP: while high concentrations of potassium induce vasoconstriction, as seen in the resistance index at the beginning of perfusion, the machine is able to rescue this negative effect and regulate flow, allowing the organ to better face the stress of reperfusion, with dramatic benefits on outcome, particularly survival, as found in the clinic [27,28]. This model thus offers a unique opportunity for further clarification of the exact mechanisms through which MP provides this protection. Benefits of machine perfusion were also not immediately obvious between the KPS-CS and KPS-MP groups: diuresis and creatinine levels were close, as were other functional parameters usually available in the clinic. Here also, discrimination was possible with measurement of glutathione red/ox. Moreover, since both groups produced urine, proximal tubule enzyme activity assays in the urine were invaluable. Alanine aminopeptidase and β-N-acetylglucosaminidase are found in the brush border of kidney tubular cells, and their presence in urine is a commonly accepted sign of tubular damage [41]; their activity level in the urine revealed a superiority of MP in maintaining tissue integrity at all time points, which was confirmed by histological analysis of the graft parenchyma.
Early follow up of ViaspanUW-MP and KPS-CS showed similar values on the tests we performed, highlighting the existence of a solution bias when comparing preservation strategies. Altogether, results from the early follow up do not permit a clear discrimination between CS and MP, unless we consider less orthodox tests such as glutathione red/ox or urinary tubular enzyme activity assays. Excretion of Na + and glycosuria, in a context of normoglycemia, also offered a degree of discrimination between experimental groups for tubular necrosis and tubular dysfunction.
In the case of glutathione red/ox, a clear correlation was drawn between the use of MP and lower oxidative stress, and both solution and perfusion technique demonstrated an effect on this parameter. However, addition of effects was not found until day 7. We thus identify an independent machine effect; however, the relatively small differences observed herein would likely not be present in the clinic due to disparities in patient populations, while in identical pigs statistical significance is obtainable. Graft histology analysis confirmed the superiority of MP over CS; however, these tests may not be standard in clinical practice. Thus, measurement of the benefits of MP is difficult in short follow up studies, particularly if the preservation solution bias is not circumvented. We followed animals for 3 months post reperfusion. In this large animal model, this length permits us to follow the development of chronic lesions such as immune response and interstitial fibrosis and tubular atrophy (IFTA). The summated effects of damage sustained by organ preservation and reperfusion [42] lead to loss of graft function, and ultimately loss of the graft itself, often due to the development of IFTA [43]. This pathology is also strongly correlated with immune response [42,[44][45][46]. Herein, KPS-MP showed less innate and adaptive invasion compared to KPS-CS, which showed lower levels than ViaspanUW-MP. Use of KPS correlated with lower invasion, and within the KPS groups we showed that the use of MP correlated with better outcome. Unfortunately, the absence of data from the UW-CS group did not allow us to perform further statistical analysis. This confirms the benefits of the machine on chronic immune response development. The ViaspanUW-MP group fared poorly compared to the KPS groups; however, its superiority to ViaspanUW-CS is demonstrated in terms of animal survival. These results are in contradiction to a study conducted on dogs [21]; however, the setting of the study and the anatomy of the dog kidney render the comparison of data difficult.
Epithelial to mesenchymal transition (EMT), a process through which polarized tubular cells are driven to dedifferentiate and alter their phenotype towards that of a mobile and fast proliferating mesenchymal cell [47], is shown to be a repair mechanism that can be deregulated during injury and promote interstitial fibrosis [48][49][50]. Our results show that Vimentin staining, a marker of EMT, is high in ViaspanUW-MP, lower in KPS-CS and close to control levels in KPS-MP. Thus, the machine effect is also found in a major pathway leading to fibrosis and graft loss. We measured the extent of fibrosis using Sirius red and showed a similar order in the grade of lesion: ViaspanUW-MP was highest and KPS-CS showed half the degree of fibrosis of ViaspanUW-MP. The KPS-MP group did not show a degree of fibrosis higher than control.
Considering no ViaspanUW-CS animal survived to the end of the follow up, no comparison is possible in regards to chronic lesions such as immune response or fibrosis; however, previous studies using the same protocol as ViaspanUW-CS showed a 27% survival rate with important immune response and IFTA (47%) [51,52]. ViaspanUW-MP showed better survival, strengthening the results of a similar study investigating the short-term effects (7 days) of ViaspanUW-MP in a pig model [19], also reporting trends towards a better early kidney function after MP [19,23]. Our results demonstrate the superiority of KPS over the ViaspanUW solution in our animal model, independently of the preservation strategy. UW is a high-K+ and low-Na+ solution [53], proposed to maintain intracellular ionic balance. However, high potassium has been shown to induce cellular depolarization, decrease cellular ATP content and activate voltage-dependent channels, such as calcium channels [54,55]. Influx of calcium can result in vasoconstriction impairing organ perfusion during washout and reperfusion, participating in the 'no reflow' phenomenon [56][57][58]. Recently, studies have shown equal or improved results with solutions with a low potassium/high sodium ratio, such as KPS [1], consistent with our findings. Use of mannitol instead of lactobionate in KPS may also account for the better performance, as this compound has reactive oxygen species scavenging properties [1].
The present study uses large white pigs, an animal well suited for preclinical studies as it is close to humans, particularly in regards to the multipapillar multilobular organization of its kidney, only found in higher mammals, implying a complex vascular bed making these organs particularly sensitive to IRI [59]. In this setting, we determined that the benefits of machine perfusion, with a machine currently used in the clinic, are most evident on chronic graft outcome. Indeed, discrimination between the groups in the early time points was only possible through assays rarely performed in transplant centers and thus could explain the relatively small benefits found in clinical studies investigating the machine effect [27]. However, our results suggest that chronic follow up of these patients will uncover a wider rift between MP and CS, as chronic lesions start to develop.
The exact mechanisms by which MP minimizes the activation of lesional pathways in our study remain to be elucidated. MP actions may include a complete perfusion of the organ promoting a thorough washout of blood and subsequent tissue equilibration with the preservation solution. This more efficient washout has been previously reported to limit the aggregation of erythrocytes [60]. Finally, the maintenance of a flow may protect against depolarization of the endothelial cell membrane which is linked to generation of ROS, increased intracellular Ca 2+ concentration, and activation of NO synthases [61]. Hence, more mechanistic studies are necessary to unravel the exact mechanism of action in MP, in order to focus on improvement and optimal application of this technique.
The present study appears limited by the use of an isograft model, devoid of the influence of immunosuppressants. However, machine perfusion has been developed to optimize graft preservation, hence address ischemia reperfusion injury. Thus, we felt that an allograft model, with the addition of immunosuppressors and their own set of deleterious side effects, would dilute the impact of our results. We thus sacrificed relevance to the clinic by the use of isograft in order to obtain clarity of our results in regards to the benefits of machine perfusion. Another limitation is the fact that our model does not follow exactly the setting of classes I and II of the Maastricht criteria. Indeed, it normally includes no more than 30 min arrest before starting the CPR procedure, which is then continued during the transport to the hospital (generally with a machine); then as failure to resuscitate is pronounced there is a 5 min no touch period. All these steps should not exceed 150 min. The patient is then either cold perfused or a extracorporeal circuit is put in place, giving enough time to secure consent from the family and collect the organs, which are then machine perfused. It is obvious that a correct modelling of this situation should include all these steps, and we are actually in the process of adapting such procedures on the pig. However in the meantime we are using 60 min WI as it reproduces as closely as possible the conditions of DCD.
Conclusion
In a study using a preclinical model of DCD kidney transplantation, we demonstrate the superiority of MP over CS independently of the solution used for perfusion. Our results suggest significant benefits on graft outcome, particularly evident on the chronic effects of IRI with a protection against chronic immune response, EMT and IFTA.
Additional material
Additional file 1: Representative graft morphology for kidney lost during follow up. Morphological analysis of grafts lost during follow up revealed extensive necrosis and tubule loss at week 1 for cases of primary non function (PNF). Graft loss at weeks 2 and 4 was due to high rate of inflammation and tubulitis. | 2014-10-01T00:00:00.000Z | 2011-01-25T00:00:00.000 | {
"year": 2011,
"sha1": "59767a81e78c7bda8d2e131548fcbeef9c4a7e12",
"oa_license": "CCBY",
"oa_url": "https://translational-medicine.biomedcentral.com/track/pdf/10.1186/1479-5876-9-15",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59767a81e78c7bda8d2e131548fcbeef9c4a7e12",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225668759 | pes2o/s2orc | v3-fos-license | New perspectives in feminist methodological research
women. The authors advocate for the possibility for mainstream Western productions to take on these more nuanced portrayals evidenced in the texts analyzed from the global South. The essay focuses on the potentials of new approaches to questioning the youthful hegemonic gaze and deconstructing technologies of age such as assumptions about embodied experience and hegemonic temporality, based on their re-examination of feminist film theory from the lens of aging studies. This essay shows how a set of contemporary films challenge technologies of age, effectively challenging ageism by reconceptualizing embodiment, modes of the cinematic gaze, sexuality, and the abject in ways that offer new points of view. develop confidence in STEAM subjects and careers by working with a mentor in a structured 6-week program. The authors position the design and goals of the program within an extensive published literature on gender gaps in STEAM fields. The project is a part of a larger ongoing series of similar programs and was conducted in the Basque region of Spain, with focus on the three largest cities in that region. One hundred fifty-nine women mentors and over 4,000 children (approximately half girls and half boys) in 80 schools participated. A group of 6th-grade volunteer subjects were assigned mentors in small groups of under 15 for 6 one-hour sessions at their schools. The goal was to provide positive female role models for girls and boys in order to potentially build their confidence in pursuing a STEAM career, as well as to educate students on the lack of equality between the genders in STEAM fields and the need for more women in these fields. Mentors were provided with training, and a questionnaire was administered at the end of the 6-week program. Participants, both children and their mentors, reported positive evaluations of the experience. The authors note that there is also some work to be done in terms of further developing the program to address some of the shortfalls. For instance, girls admired their mentors and their achievements, but for the most part failed to turn this experience inward toward an increase in self-confidence for their own potential in a similar field. On the other hand, some additional positive benefits of the program emerged in the questionnaire data. The female mentors themselves derived important benefits from their participation including the development of greater self-confidence in their own role as mentors and teachers who could help children develop self-esteem and career goals. In addition, mentors developed a network of peers through the program that can be a source of support and solidarity for their own work in the future. Students and mentors rated the experience of participation very highly, finding the program a valuable one. This project began in academic year 2016-2017 and has been repeated in different locations in Spain and other countries and has found some success. The program has been expanded and has also been conducted in other areas of Spain including Madrid and Catalonia, and plans are underway for a similar program to be carried out in Chile. This innovative methodology actively works to transform the way that participants view women in STEAM fields. Another essay moves between a methodological approach to the recuperation of women's agency in cases in which their advances were concealed or went unrecognized, and the investigation into the reasons for this historical confinement and the consequences for the production of knowledge and for the authoritative perspective resulting from the erasure of women's participation.
A first step in this methodological approach is to research the biographies of women in history and to underscore some of the achievements that had not been revealed, while simultaneously exploring the causes underlying such concealment from the intellectual and academic fields. The essay describes a significant transformation that takes place when gender as a category of analysis and interpretation is included in the intersection of historiography with the natural sciences: new resources are revealed, new methods of research are developed and, finally, new knowledge is generated. The essay shows how biographical studies transcend individuality to become exemplary documents of the sociopolitical and intellectual context in which the lives of these women unfolded.
New perspectives in feminist methodological research
Lisa Cuklanz 1 and María Pilar Rodríguez 2
Over the past several decades, many pages and volumes have been written on the subject of feminist methods, including numerous issues of this journal that have been devoted to the topic. There are several important contributing factors to the ongoing focus on this diverse subject over such a broad span of time. First, feminist methods have been theorized and explored in virtually all research fields, including medicine, law, political science, history, sociology, linguistics, film and media studies and many, many others. Second, feminist approaches to research and knowledge have literally transformed these fields in broad-reaching and profound ways, shifting attention to previously undervalued or neglected topics, and shifting priorities and approaches to those previously studied. Feminist research has altered our understanding of our world, and has influenced the way that research is conducted, not only by feminist researchers, but by many researchers following the dictates of the state of the art in their fields. A third very significant reason for the sustained attention to feminist methods is that researchers who are dedicated to this type of work are engaged in a continual effort to extend the limits and definitions of their commitment to values such as egalitarianism, intersectionality, social change, and ethical challenges to the status quo, particularly as it relates to privilege and relations of power. Beginning with commitments to investigate the voices and experiences of women and to interrogate the gender binary, feminist methods have moved well beyond these initial foci to investigate the boundaries of meaning that these concepts and ideas suggest.
As feminist research has remained committed to these values, it has been open to continual challenge and revision, and has remained dynamic and relevant as a result of these efforts. While there has been general agreement that the central goals of feminist research should include a focus on women's experiences and voices, and that feminist methods should strive to break down relations of power within the research process, researchers and theorists have grappled with central questions about how to define key terms and how best to accomplish these fundamental goals. For instance, many have questioned the use of the term "woman" and challenged its accepted definition, asking whether and how it could be possible to imagine a unified experience among a group of people with such diverse identities and experiences of the world. Similarly, research that at first seemed to critique and interrogate existing relations of power was later understood as replacing one set of power imbalances with another, creating new problems while hoping to solve old ones. Dedication to feminist methods has required a commitment to continual critique, to redefinitions, and to remaining open to the need for ongoing revisions and amendments to the research process. These commitments have made feminist methodologies diverse, dynamic, and at times self-contradictory, as different values and commitments have at times come into conflict with one another. Through all of this, the process of striving to find new and better ways to transform knowledge, to understand our world from new perspectives, and to make the world a better and more just place has been the focus of feminist researchers across academic and professional fields. It is our goal in this special issue to continue this central imperative of feminist research methods, starting from established fields of knowledge and extending in innovative ways into new territory that expands our ability to understand the ways in which gender conditions our world and our understanding of it.
Feminist research methods have been employed for many decades, and have a rich and varied history due to the range of disciplines included within their scope. We wish to focus attention on more recent developments for this special issue. Among the publications after the year 2000 that deserve a mention in the field of feminist methodologies, only three representative works will be mentioned here to provide a limited overview of the main contributions. A review of three volumes on the topic that each gather together chapters on specific approaches to feminist methods will serve to ground the present collection of articles. In Feminist methodology: Challenges and choices, editors Caroline Ramazanoglu and Janet Holland (2002) address how researchers make sense of their experiences and how power inhabits knowledge production (2002, 9). The book provides insightful reflections on the past, present and future of feminist methodologies and their developments; the editors are not afraid to admit and incorporate critical contradictions in feminist efforts to produce and justify authoritative knowledge of gendered social life (2002, 10). A second work, Joey Sprague's Feminist Methodologies for Critical Researchers: Bridging Differences (2005), is oriented to students of sociology and related disciplines. The author introduces quantitative and qualitative methodological options and explores the links between epistemology and methodology. Additionally, suggestions on how to overcome bias in standard methodologies and to develop critical thinking are provided. Finally, Sharlene Nagy Hesse-Biber's Handbook of Feminist Research: Theory and Practice includes familiar aspects and new challenges of feminist research including standpoint theory, use of authority in feminist work, and confluences between postmodernism, intersectionality, and queer identities in feminist research. The book addresses global and international contexts of research including chapters on global ethnography and globalizing feminist research.
A number of important volumes explore critiques of mainstream feminist methods or bring alternative perspectives to the fore. Among these, Global Critical Race Feminism: An International Reader (edited by Adrien Wing) is an important recent work that extends earlier critiques by authors including Patricia Hill Collins and Gloria Anzaldua. Many other academic publications explore feminist methodologies in specific areas such as geography, social work, international relations, engineering, natural sciences, education, communication technologies, law, business, psychology, postcolonial studies, migration studies, theology, health, fashion and politics, among others. Feminist methodologies have also been developed according to different geographical and ethnographic areas covering the particularities of different sociocultural and political approaches in different continents. Of these, Nina Lykke's book titled Feminist Studies: A Guide to Intersectional Theory, Methodology and Writing focuses on current issues in feminist theory, epistemology and methodology. Lykke presents analytical approaches to gendered power conditions intersecting with other processes of social inclusion and exclusion based on race, class, and sexuality, along with theories of the performativity of gender and sex. Many other individual articles have offered new perspectives on the specific topic of feminist methodology investigation.
Challenges to feminist thought have always been present in societies, and resistances to the development of feminist methodologies are to be expected. In times in which social and political transformations are leading to a better understanding of social justice and a greater involvement of wide sectors of the population with feminist demands, reactions in the form of backlash surface, seeking to preserve power and maintain certain aspects linked to patriarchy and hegemonic society. In the academic realm, efforts to continue the exploration of new ways of approaching the discussion on how to provide criteria for producing, justifying, and evaluating knowledge are needed. Feminist methodologies provide innovative and critically advanced means to uncover authoritarian appeals to maintain the status quo by appealing to traditional authority and denying accountability for the ethical and political implications of the production of knowledge. Challenges to feminist knowledge claims have come from dominant approaches to science, reason, progress and truth associated with a supposed and mandatory objectivity in all disciplines. The brief review of some of the outstanding academic developments in feminist methodologies, as well as the articles included in the present volume, are proof that intellectual rigour, passion for advancing knowledge and openness to innovative ways of thinking provide significant advances and a convincing response to attempts to dismiss feminist proposals.
This special issue of Investigaciones Feministas is therefore not the first to examine feminist methods, and will certainly not be the last, as we do not purport to offer a final word on the subject. However, in this issue we offer a fresh combination of insights into relationships between feminist methods and other research commitments. In addition, this special issue offers not only examinations of feminist methods in a range of fields, but also unique applications and interpretations that serve to push the limits of the label feminist methods. This process of exploring and extending boundaries continues to be a central feature of feminist method. Throughout the essays in this volume, authors draw on feminist methodologies in order to transform our understanding of newly emerging fields and streams of scholarship. Each article includes an examination of existing literature within an established field of scholarship and continues with an original contribution to that field in the form of an independent research project, case study, or theoretical exploration extending that field of scholarship. By providing linkages with existing literature in traditional fields and categories of research in humanities, social sciences and technology/physical sciences, we point to the ways in which this innovative work continues to contribute to the original project of analysis, critique, and transformation of work across academe.
Along with more traditional review-style essays that examine ways in which feminist methods have been defined and applied, this issue offers several articles that bring a fresh perspective to this subject about which so much has been written. The essays in this volume are informed and inspired by feminist research, but each essay pushes into new territory, either through applying feminist methods to a specific case study, or by examining the implications of feminist methods for a newly emerging realm of theory or experience. An innovative aspect of this volume is that it combines articles that review methodologies that have been present in feminist studies for the past decades with others that focus on relatively new disciplines and areas of study, such as ageing studies, the affective turn, and graph theory. Yet others offer a reformulation of methodological practices with illustration of recent applications, as is the case with Standpoint Theory. An important distinctive feature of the monographic issue is that it is interdisciplinary not only in the manner in which several disciplines, methods and concepts are included, but also in the way in which many of the articles approach their subject by combining knowledge in at least two different disciplines. The article written by Raquel Medina and Barbara Zecchi, titled "Technologies of Age: The Intersection of Feminist Film Theory and Aging Studies", intersects ageing studies with feminist film theory, which leads to a fundamental methodological and theoretical rethinking of the notion of cinema as a powerful technology of age. The article titled "Feminist historiography and science studies: New methodological frameworks" by Esther Rubio Herráez focuses on the intersection between feminist historiography and studies in the natural sciences to present a methodological approach that allows for the recovery of the achievements of women in the field and to explore the reasons for their absence or lack of recognition. The article "New approaches to the propagation of the antifeminist backlash on Twitter", authored by Miren Gutiérrez, María J. Pando-Canteli and María Luz Congosto, incorporates an eclectic and interdisciplinary approach that integrates graph theory and CDA (Critical Discourse Analysis), thus connecting social media formulations with offline discourses to examine the backlash to feminist advances in contemporary societies. The article "Navigating among women: Digital ethnography and its contributions to feminist research" combines an ethnographic approach with a design developed in a Facebook community in southern Chile to explore several gender power relations in that community. "The Power of Necropolitics: Affect Theory and Violence in Perspective", written by Annabel Martín and Cristina Ortiz-Ceberio, explains how the affective turn can be explored in relation to the study of violence and necropolitics to provide fruitful results. These are some examples of the effort that these complex articles have made to offer innovative and intellectually challenging approaches to the notion of feminist methodologies.
In the opening essay, Erol and Cuklanz provide an overview of two important streams of research in order to provide a delineation of the ways in which these two bodies of work share similar purposes and methods, as well as ways in which their goals diverge. In the first half of the essay, the authors summarize key concepts and methods in feminist research. In the second, they provide an overview of queer theory, discussing its central objectives and guiding principles. The authors note that the two streams of scholarship share some emphases, including raising critical consciousness, emphasizing human experience related to gender and diverse intersectionalities, encouraging social change, and questioning or even replacing accepted forms of knowledge production. Both approaches also seek to center marginal voices and experiences, and both propose research that challenges dominant assumptions and relations of power. The authors note that although queer theory developed from feminist origins and began as a branch of feminist scholarship sharing many of its critical tenets such as questioning relations of power and centering the marginal within research, at present much work in queer theory does not focus on gender and cannot be considered feminist. Rather, queer theory rejects a unified notion of the category of "woman" as well as a binary approach to gender, relying instead on the deconstruction of binaries wherever they serve to structure knowledge or support existing structures of power. While the core commitment of feminist methods has been to discover and focus academic attention on women's experiences and voices, bringing their formerly absent perspective into view, the core commitment of queer theory has been to decenter and deconstruct binaries including, but not limited to, those of gender and sexuality. In addition to providing a comparative overview of these two streams of scholarship, the authors offer examples of each as well as a discussion of the key limitations and critiques of each. This review essay considers the historical growth and development of feminist methods, examines the complex relationship between feminist methods and queer theory, and provides a context for the further consideration of more recent innovations in feminist methods throughout the special issue.
The first two essays in the volume explore innovative methods to approach social media. The authors of "New approaches to the propagation of the antifeminist backlash on Twitter" employ Congosto's (2018) typology for the study of Twitter profiles to examine the proliferation of reactions against the #Cuéntalo Twitter campaign, the equivalent of #MeToo within Spain. Using the Twitter Representational State Transfer (REST) API, the authors accessed over 477,000 Twitter messages connected with this hashtag during three days in April 2018, with the goal of providing a deeper understanding of antifeminist backlash on that platform. Through examination of retweets, they created an index of the impact of individual Twitter accounts. In addition, their analysis draws on the integration of Critical Discourse Analysis and graph theory to coordinate their analysis of offline discourses with those found on Twitter. These combined approaches are offered as a new mixed quantitative and qualitative methods approach to social media analysis that bears particular relevance to social movement and backlash discourses. The authors approach #HimToo as a counter-corrective movement focused on resisting feminism and women's rights gains. Contemporary rhetoric within this movement is notably less extremist and reactionary than earlier antifeminist backlash discourses. The essay explores not only dynamics and trends within the movement, but also identifies and analyzes themes within its discourses. The study found that this specific backlash community was a small one composed of Spanish nationalists, neo-liberals, ultra-Catholics, and prison officers who were vociferous and well-focused, tweeting consistently. Backlash messages centered on a few repetitive ideological ideas, including tyrannical feminism, the ideological bias of the left, the abuse of men, and specific court cases. Anti-feminism and right-wing politics were closely aligned among this group and their tweets. Approximately 60% of the backlashers emerged as retweeters, while a much smaller group produced new material retweeted by others. While testimony and evidence were rare, commentary and retweeting were the more common forms of expression. These findings corroborate those of other studies of antifeminism, its rhetorical dimensions, and the ideological connections between antifeminism, racism, and neoliberalism.
While the first article examines anti-feminist rhetoric on Twitter, the second turns to the study of a feminist movement through Facebook. The article "Navigating among women: Digital ethnography and its contributions to feminist research" offers methodological reflections that emerged within the interdisciplinary framework of a digital-ethnographic design developed in a Facebook group which brings together women from a town in southern Chile. The authors place their study in the context of the feminist movement that developed in Chile in May 2018, in which different educational establishments staged "tomas feministas" (feminist occupations) questioning the patriarchal ideological structure of Chilean society. Universities shut down their activities for two months and the demands impacted Chilean society, expressing themselves in massive protests and demonstrations in which offline spaces were intertwined with online spaces through, for example, adherence to international mass movements such as #Niunamenos or #Metoo. The research objectives seek to understand the patriarchal power relations that are exercised in this space of interaction, the regulations on gender performances, and the meanings of community that its own participants assign to the space. The article situates its contribution in the core of feminist research, which is the commitment to social transformation and the struggle against the hegemony that is often established as an effect of research itself. A feminist digital ethnography in a group of women allows both the authors and the readers to understand how patriarchal power relations can be subverted or confronted and to identify the strategies used. The authors contextualize the research, explaining how it results from reflections on their own experiences as participants. The methodological proposal offers an original contribution through the adoption of a techno-feminist theoretical approach which focuses on the possibilities of mutual transformation between gender and technology. The originality of this article stems from the nature of the research itself, that is, the analysis of an online group from a feminist perspective, questioning the positions of power in the production of knowledge and glimpsing its contributions to social transformation and the decolonization of knowledge.
The third article in this special issue is situated within humanities, focusing on film studies from the perspective of aging studies. The authors of "Technologies of age: The Intersection of feminist film theory and aging studies" begin their analysis with an overview of what feminist analyses have shown about gender differences in aging in mainstream Western cinemas. While the preferential treatment of older male actors has been accompanied by a lack of roles for older women in cinema as well as portrayal of aging as decline, the essay provides a framework for possible changes in these troubling patterns of representation. Within aging studies, the concept of affirmative aging offers the possibility of moving beyond a binary understanding of youth versus old age, where aging is depicted mainly as decline while youth equates to vitality and success. This essay provides a thorough analysis of depictions of aging in mainstream Western cinema as well as an extensive list of films from the global South providing alternative depictions of women's experiences of aging from a range of innovative points of view. Raquel Medina and Barbara Zecchi assess key areas of intersecting concern for feminist theory and aging studies including under-representation of older women in decision-making and in on-screen roles, stereotypical representation of older female characters, and limitations in depictions of older characters as compared to depictions in cinemas of the global South. After assessing industry output from several Western countries, the authors examine different representations from the directors in Mexico, Iran, India, Chile, Spain, and many other nations that have demonstrated the potential for more complex, nuanced, and positive representations of aging in cinema including treatments of historically significant issues such as representations of the aging body, sexuality among older characters, and positive views of human potential in advancing years. Similarly, the authors show how female and feminist filmmakers including Isabel Coixet, Alankrita Shivastava, Pilar Miro, Chinonye Chukwu, Claire Denis, Natalia Beristain, Francesca Archibugi, and Maria Novaro among others have advanced new representations of older women and have improved the visibility of aging women. The authors advocate for the possibility for mainstream Western productions to take on these more nuanced portrayals evidenced in the texts analyzed from the global South. The essay focuses on the potentials of new approaches to questioning the youthful hegemonic gaze and deconstructing technologies of age such as assumptions about embodied experience and hegemonic temporality, based on their re-examination of feminist film theory from the lens of aging studies. This essay shows how a set of contemporary films challenge technologies of age, effectively challenging ageism by reconceptualizing embodiment, modes of the cinematic gaze, sexuality, and the abject in ways that offer new points of view.
While the first three essays in this issue provide innovative feminist approaches to humanities subjects including social media and film, a number of essays in this volume work within the arena of social science. In "The Power of Necropolitics," Annabel Martin and Cristina Ortiz Cebero explore the intricate relationships between affect theory and feminist thought as applied to the subject of necropolitics, which they operationalize as "the right to kill or destroy" political enemies. The essay focuses on political violence in the Basque example and explicates a style of negotiation in place of necropolitical destruction as it has played out in that situation. Using Deleuze and Guattari's affective theory of bodies as assemblages of sensations and impressions, the authors remind us of the importance of moving beyond questions of subjectivity in order to center ethical considerations. The essay settles on the key question of the applicability of affect theory to provide transformative solutions to political problems, and thus to contribute meaningfully to social change projects such as those proposed by feminism. Significantly, Martin and Cebero situate a key contribution of affect theory to feminism in that it invites us to question our usual thought processes and categorizations in relation to gender, offering a way to underscore unexamined interdependencies and linkages. The authors seek to use these theoretical insights to provide a new sense of interconnection to rebuild where violence has shattered social connections and a sense of community. Analyzing restorative justice in the Basque Country, they show how affect theory effectively challenges necropolitical logics by centering the emotional power of connection rather than divisive and destructive self-other frameworks. This essay provides a synopsis of the Basque conflict that took place from 1959 to 2011, situating it in world historical context as well as within the more local context of Spanish political history. The case is particularly relevant for this study in that the Spanish state was in some ways complicit with the terrorist project of ETA, due to a number of contextual factors. The central role of a belief system centered on heroic masculinity means that a feminist take on gendered categories can be fruitfully combined with the analytical work proposed through affect theory in this case study. The authors identify in the Basque application of restorative justice a specific material process in which terrorists and their victims spoke face to face, merely in order to share their experience, outside of any self-interested motive in material gain or political benefit. The process requires the location of a shared humanity among the parties and takes this discovery as its central project.
The authors of "Inspira STEAM: breaking the confidence gap with female roles" conducted an ambitious independent research project designed to provide a context in which young girls, as well as boys, could develop confidence in STEAM subjects and careers by working with a mentor in a structured 6-week program. The authors position the design and goals of the program within an extensive published literature on gender gaps in STEAM fields. The project is a part of a larger ongoing series of similar programs and was conducted in the Basque region of Spain, with focus on the three largest cities in that region. One hundred fifty-nine women mentors and over 4,000 children (approximately half girls and half boys) in 80 schools participated. A group of 6th-grade volunteer subjects were assigned with mentors in small groups of under 15 for 6 one-hour sessions at their schools. The goal was to provide positive female role models for girls and boys in order to potentially build their confidence in pursuing a STEAM career, as well as to educated students on the lack of equality between the genders in STEAM fields and the need for more women in these fields. Mentors were provided with training, and a questionnaire was administered at the end of the 6-week program. Participants, both children and their mentors, reported positive evaluations of the experience. The authors note that there is also some work to be done in terms of further developing the program to address some of the shortfalls. For instance, girls admired their mentors and their achievements, but for the most part failed to turn this experience inward toward an increase in self-confidence for their own potential in a similar field. On the other hand, some additional positive benefits of the program emerged in the questionnaire data. The female mentors themselves derived important benefits from their participation including the development of greater self-confidence in their own role as mentors and teachers who could help children develop self-esteem and career goals. In addition, mentors developed a network of peers through the program that can be a source of support and solidarity for their own work in the future. Students and mentors rated the experience of participation very highly, finding the program a valuable one. This project began in academic year 2016-2017 and has been repeated in different locations in Spain and other countries and has found some success. The program has been expanded and has also been conducted in other areas of Spain including Madrid and Catalonia, and plans are underway for a similar program to be carried out in Chile. This innovative methodology actively works to transform the way that participants, both youth and their mentors, understand and experience their own relationship to STEAM fields and careers.
The article titled "Feminist Historiography and Science Studies: New methodological frameworks" by Esther Rubio Herráez exposes the slow process of incorporation of women and women's accomplishments to the academic field of the natural sciences due to the perception of the essential nature of science characterized by objectivity and neutrality as essential imperatives. The author focuses on the intersection between feminist historiography and studies in the natural sciences to present a methodological approach that allow both the recuperation of women's agency in cases in which their advances were buried or hidden previously and the investigation into the reasons for such historical confinement and the consequences for the production of knowledge and for the authoritative perspective resulting from the erasure of women's participation. The first step in this methodological approach was to research the biographies of women in history and to underscore some of the achievements that had not been revealed, while simultaneously exploring the causes underlying such concealment from the intellectual and academic fields. The articles shows a significant transformation that takes place when gender as a category of analysis and interpretation is included in the intersection of historiography with the natural sciences: new resources are revealed, new methods of research are developed and, finally, new knowledge is generated. The author shows how biographical studies transcend individuality to become exemplary documents of the sociopolitical and intellectual context in which the lives of these women were framed. In such manner, they provide relevant clues for the interpretation of androcentric bias and the consequences for the distortion of resulting scientific paradigms. The article explores the example of the life and work of North American scientist Barbara McClintock, who becomes a relevant illustration not only because of her achievements in the field of genetics, but also due to her personal philosophy of life and scientific generation of knowledge. She firmly believed in the combination of reason and emotion as fundamental tools for scientific discovery; her great intellectual capacity along with her relevant contributions to the sequencing of the genetic code and to the development of genetic transposition makes of McClintock a perfect example of the methodological proposal described in the article. MacClintock was aware of the discrimination she experienced and demanded equal opportunity and recognition. This case study explores the context in terms of scientific, social and political discrimination and exposes the bias that even today is present in many situations to convey the need of these type of methodological interventions in the intersections of historiography with the natural sciences.
Lorena Pajares Sánchez in "A feminist basis for participatory research: Knowledge, gender and participation, or the necessary dialogue for transformation" focuses on the area of social research and explores how the underlying concepts of participatory research and popular education born in Latin America in the 70s relate theoretically and methodologically to those of feminist research and epistemology developed from the 80s to support action-oriented participatory research with a gender perspective. The article defends the notion of methodological miscegenation that may result in new scientific practices to develop both the discursive and transformative potential of social research in combination with feminist research. Through a methodology based on a critical review of the main theoretical postulates of both types of research, the author analyzes how participatory research and contemporary feminist research and epistemology share many perspectives: both are based on the rupture of the dichotomy between science and politics, propose the incorporation of new knowing subjects, defend a relational idea of knowledge and value the possibilities offered by the link as a central unit of analysis, among other aspects. Through this methodological approach and by contrasting such postulates and perspectives, the author argues that feminist research largely coincides with the foundations of participatory research. Moreover, in a dialogical and critical exercise, it strengthens and expands its epistemological principles through two main keys: the incorporation of women as epistemic and political subjects and the incorporation of a new interpretative framework of reality that allows us to create new knowledge. The article argues for the need to develop a feminist approach to participatory tools that privileges the study of the connections between subjects, identities and social, cultural and economic positions, historical contexts, experiences and narratives to make visible the intersection between gender, class, race, ethnicity or sexuality as central elements of the framework for the interpretation and transformation of our social and political places. The final section of the critical review carried out in this article defends that an inclusive dialogue between participatory and feminist contributions will greatly advance the explanatory and transformative potential of social research.
María Silvestre Cabrera, María López Belloso, and Raquel Royo Prieto analyze ten recent feminist social science doctoral theses in order to examine the ways in which these examples of innovative feminist research are able to carry out the imperatives of Sandra Harding's ideals of Feminist Standpoint Theory (FST). The authors developed a "Harding Test" that is here applied to this new research in order to assess several of its central aspects, including its feminist commitments and empirical contributions. Through the close analysis of this specific set of doctoral dissertations, the authors show how recent feminist work has been able to incorporate marginalized voices into its production of situated knowledge. The essay provides an extensive literature review on intersectionality within the academic conversation on feminist standpoint theory, emphasizing the notion that the concept of intersectionality interrogates both the invisibility of marginalized voices and the specific privileging of androcentrism. Recent research using Feminist Standpoint Theory endeavors to place researcher and subject on a similar footing without privileging one over the other. This challenging but centrally important research commitment forms the core focus of the essay: the authors seek to show how and to what extent the dissertations were able to articulate the voices of marginalized subjects. The essay finds that feminist standpoint theory was employed meaningfully and successfully in each case and shows the range and depth of research on this point. Subject matter included marginalized positions such as single mothers, Moroccan immigrant women, and victims of domestic violence and/or sexual violence. Experiences of "exclusion and risk" were also multiple, and included unemployment, pregnancy, and coping. The authors of the dissertations under evaluation here all cited personal motivations in their selection of research topic. Their personal connections to the study topic provided opportunities for informed and meaningful expression from the point of view of research participants. For example, authors working on the subject of immigration were themselves immigrants. Although only one thesis explicitly mentions FST, the use of qualitative methods toward objectives centering on understanding and comprehension of the subjects and their point of view was central to all. Their research conclusions focus carefully on these same objectives, with some forays into suggestions for improving the situation under analysis. The essay's authors conclude that these dissertations are successful examples of FST research based on three main criteria. The dissertations use the experiences of women to provide new knowledge; they support women and center their experiences against androcentric privilege; and they place the researcher on the same level as the study's object. These three objectives make up the "Harding test" posited by the authors based on the work of Sandra Harding. This study thus brings Harding's classic work on feminist methodology into the present day, showing how its central tenets have influenced and enriched contemporary research.
María Luisa Jiménez Rodrigo, in "Possibilities of documentary research for the intersectional analysis of equality policies", addresses the possibilities of documentary research for the intersectional analysis of gender equality policies. She offers an analysis of the strategic equality plans in force in 2019 (or the most recent available) developed in the Spanish context at the state and autonomous-community levels. The analysis of equality policies, through the examination of texts produced for their design, provides concrete data that can empirically enrich discussions around the advantages and limitations of different approaches to addressing inequalities. The two main research points focus on the recognition of intersectionality and multiple inequalities and on the attention to groups located in multiple positions of inequality. The detailed and comprehensive analysis carried out in this article yields several relevant conclusions. First, recognition of intersectionality as a guiding principle of equality policies is still scarce in plans developed at the regional and national levels. Second, although many of the plans included in the analysis state the importance of inequalities and multiple discriminations, their concretion in specific measures is partial and insufficient, with under-representation, or even exclusion, of numerous situations of discrimination and multiple inequality which intersectional studies point to as fundamental. The analysis reveals an under-representation of the interests and needs of racialized, migrant, and ethnic minority women, in addition to an insensitivity regarding affective-sexual diversity. What this work attests to is that methodological analyses such as the one offered here become an effective means of advancing towards social justice, since the article makes evident that mainstreaming can help policies to be more effective and inclusive by bringing into focus groups that often remain invisible under homogeneous approaches. The intersectional policy proposal allows for different implementation strategies. The configuration of specific policies located at the intersections of social inequalities and focused on multi-marginalised groups, together with the cross-cutting incorporation of the intersectional perspective in unitary policies, will make it possible to address sectors of the population that remain excluded or that lack visibility. This methodological proposal thus addresses spaces for intervention that often remain unattended in conventional research.
Finally, in the article titled "Facing heteropatriarchal epistemologies: theoretical-methodological elements for a Feminist analysis on violence against women", Tania Martínez Portugal carries out a critical analysis of the social imaginaries around violence against women through the study of its impact within activist communities of the Basque Country. This case study, which ranges from the formal spaces of meeting and work to the informal spaces of socialization of the group or collective, offers the possibility of constructing an alternative narrative on the social context, the myths surrounding the victim and the abuser, as well as the expressions of violence themselves. When faced with the forms and processes through which heteropatriarchy creates knowledge and constructs discourses around violence against women, feminist resistance has worked tirelessly in the construction of new frameworks and interpretations. As a result of the theoretical review and epistemological reflection that emerges from research on the phenomenon in the activist communities of the Basque Country, this methodological proposal is grounded in the analysis of ten narratives of women activists who have survived violence. Agency and the inclusion of the women's voices are essential as the impulse generating the analysis, and a revision of the stereotypes associated with the abuser is provided. The article thus offers a methodological proposal that foregrounds the systemic and structural character of violence against women, contributes to the production of knowledge related to the postulates of feminist epistemologies, and is respectful of the research community. The proposal captures many of the theoretical contributions that feminist authors inside and outside the academy have developed, and builds on them to advance knowledge toward a better understanding of the conditions and consequences of the contexts in which violence against women takes place.
Together the contributions to this volume provide a wide range of innovative approaches to recognizable fields of academic inquiry, in many instances combining insights and methods from distinct fields to produce a new hybrid research design that sheds light on a contemporary subject or problem. The range of mergers between fields includes taking a biographical historiographic approach to physical science, approaching social media with both quantitative and qualitative methods, restructuring the representation of aging in mainstream film through a mingling of feminist film theory and aging studies, and providing real-world mentoring experiences for youth and female mentors from STEAM fields. Each of these complex interventions serves to transform knowledge by focusing on some aspect of gender or of women's experience, while the specific definitions of these key elements might be operationalized differently in each essay. Several explore styles of participatory research that elicit a commitment to the transformative power of the research process for participants, and endeavour not only to place the participant's experience at the centre, but to level the relationship of power between researcher and study subject. These essays not only review and explain feminist research, but also demonstrate how feminist methods can be deployed to address contemporary situations and dynamics. These essays provide an international perspective on their subjects as well, with several essays focusing on historical events in Spain, while others examine such topics as films from the global South, feminist activism in Chile, or the experiences of immigrant women. What these articles have in common is, first, an effort to provide an intellectual challenge to create new knowledge either by reviewing previous theories and methodologies to add new contexts, perspectives and concepts or by generating new fields of study through interdisciplinary research. Second, they all share a profound political commitment to work towards gender equality and social justice. Finally, all articles convey the need to push the boundaries of research and to expand the limits of traditional academic writing in order to achieve new ways of investigating feminist methodologies. We sincerely hope that the readers share the passion that we, as editors, have experienced through the process of editing this monographic number. To finish, we warmly thank María José Camacho and the editorial board at Investigaciones feministas for their invaluable help throughout this rewarding endeavour. | 2020-07-16T09:01:57.387Z | 2020-06-14T00:00:00.000 | {
"year": 2020,
"sha1": "d969a4e70e0feb0711fe63b686032aec32c79f7c",
"oa_license": "CCBY",
"oa_url": "https://revistas.ucm.es/index.php/INFE/article/download/70122/4564456553956",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "940fffcd2e3e555c24233173fd7f1939645a1a1c",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Sociology"
]
} |
85274306 | pes2o/s2orc | v3-fos-license | Suppression of the neutral protease activity of macrophages treated with asbestos in vitro.
Macrophages are often conspicuous in asbestos-induced inflammatory lesions. Chrysotile type B elicits macrophages into the peritoneal cavity of mice which produce high levels of the neutral protease, plasminogen activator; in vitro addition of these same fibers to mouse peritoneal macrophages stimulates enzyme production. It is reported here that, for endotoxin-elicited mouse peritoneal macrophages fed chrysotile type B in vitro, the increased plasminogen activator activity is suppressed by low concentrations of anti-inflammatory steroids. Other active drugs include colchicine and vinblastine. These studies are considered important, as they suggest an approach to controlling the levels of a potentially deleterious enzyme system (PA-plasmin) from macrophages treated with asbestos fibers.
Introduction
Asbestosis is characterized by inflammation and fibrosis surrounding the respiratory bronchioles and alveoli and leads to permanent impairment of lung function (1). Macrophages are found in normal alveoli and are the prominent cell type found in chronic inflammatory lesions, including those induced by asbestos (2). Macrophages are able to release in vitro a number of molecules which are probably important in the initiation and maintenance of an inflammatory reaction, for example, neutral proteases, complement components, endogenous pyrogens, oxygen metabolites, prostaglandins and analogs, proliferation factors for neighboring cells, etc. (3).
For monocyte macrophages in vitro, the activity of the neutral protease, plasminogen activator (PA), shows positive correlations with stimuli [for example, asbestos (4), endotoxin (5), lymphokines (6), and phorbol esters (7,8)] and suppressive drugs [for example, glucocorticoids (9)] associated with an inflammatory response. A role for macrophage PA activity in cell migration from blood to tissue and in the processes of tissue destruction and activation of several inflammatory enzyme cascades has previously been proposed (9). We have been able to show that IARC chrysotile type B elicits macrophages into the peritoneal cavity of mice which produce high levels of PA (4,10); in vitro addition of these fibers to mouse peritoneal macrophages stimulates enzyme production.
*Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, 10021 U.S.A. Present address: Department of Medicine, University of Melbourne, The Royal Melbourne Hospital, Victoria, 3050, Australia.
The present studies indicate that the in vitro stimulation of macrophage PA activity can be suppressed by anti-inflammatory glucocorticoids and, under certain culture conditions, by colchicine and vinblastine.
Fibrinolysis Measurement
The peritoneal cells (2 × 10⁶) were plated on ¹²⁵I-fibrin-coated wells (Linbro Disposo Trays, FB 16-24TC) (9) for 24 hr in DB supplemented with 5% heat-inactivated (56°C, 30 min) fetal bovine serum (FBS). At this stage, soybean trypsin inhibitor (STI) (100 µg/mL) was added to block any fibrinolysis. The cells were washed twice with PBS and placed again in DB + 5% FBS + STI. The particulate preparations were resuspended by passage through a syringe and 26G needle and carefully distributed in the culture medium. The particles were generally added for 2 hr, after which time >95% of the cells contained 20-50 latex particles. The macrophages were then washed three times with PBS and placed in DB + 5% acid-treated FBS (ATFBS) for assay. The acid treatment removes interfering protease inhibitors (9). Fibrinolysis was estimated by withdrawing aliquots of medium and counting the radioactivity in a Packard Autogamma scintillation spectrometer. The plasminogen dependence of the fibrinolytic activity can be shown by assaying in the presence of ATFBS from which the plasminogen has been removed (9).
Measurement of Secreted Plasminogen Activator
Peritoneal cells (2 × 10⁶) were plated in DB + 5% FBS on regular Linbro trays. At 24 hr after plating, the cells were washed twice with PBS and placed again in DB + 5% FBS but also in the presence of the particles for an additional 24 hr. After this period, the cultures were washed twice with PBS and placed in DB containing 0.05% lactalbumin hydrolyzate (LH) (9). After 24 hr, the serum-free conditioned media (CM) were collected; also, the cells were washed twice with PBS, lysed by the addition of Triton X-100 (0.2% v/v in water) and scraped from the surface of the dish with a plastic policeman for protein determination.
Serum-free CM were kept frozen at -20°C until assay for PA activity. For this purpose, ¹²⁵I-fibrin-coated wells of Linbro Disposo Trays were used, and the assay mixture contained 2 µg human plasminogen (purified from outdated human plasma) and 10 µL CM in 250 µL 0.1 M Tris chloride, pH 8.1 (9).
All experiments included appropriate controls for all reagents and media. No plasminogen-independent fibrinolysis could be detected when plasminogen was not included in the assay mixture. One unit of PA is defined as the amount that stimulates the release of 10% of the initial radioactivity in 4 hr (9).
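For readers who want to translate raw gamma counts into the PA units defined above, the following is a minimal sketch (not code from the paper) of that arithmetic in Python. The count values are hypothetical, and the assumption that units scale linearly with percent release and assay time is ours, not the authors'.

# Minimal sketch (not the paper's code): converting 125I-fibrin release counts
# into the PA units defined above, where 1 unit releases 10% of the initial
# plate radioactivity in 4 hr. Counts are hypothetical; linear scaling of units
# with percent release is an assumption.

def percent_release(cpm_released: float, cpm_initial: float) -> float:
    """Percent of the well's initial 125I-fibrin radioactivity found in the medium."""
    return 100.0 * cpm_released / cpm_initial

def pa_units(cpm_released: float, cpm_initial: float, hours: float = 4.0) -> float:
    """PA units per the stated definition, normalized to a 4-hr assay."""
    release_at_4hr = percent_release(cpm_released, cpm_initial) * (4.0 / hours)
    return release_at_4hr / 10.0

initial_cpm = 50_000      # hypothetical counts bound to the fibrin-coated well
released_cpm = 12_500     # hypothetical counts recovered in the medium aliquots
print(f"{percent_release(released_cpm, initial_cpm):.1f}% released")
print(f"{pa_units(released_cpm, initial_cpm):.2f} PA units")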
Lysozyme
Lysozyme in serum-free CM was assayed by measuring the initial rate of lysis of a suspension of Micrococcus luteus, as described by Gordon et al. (11). Human lysozyme diluted in DB + 0.05 % LH was used as standard.
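The initial-rate readout lends itself to a simple calculation. The sketch below is our illustration rather than the cited protocol: turbidity readings are hypothetical, and it estimates the initial lysis rate as the slope of the early time points and converts it to a lysozyme concentration by comparison with the rate obtained for the human lysozyme standard.

# Minimal sketch: estimate lysozyme activity from the initial rate of lysis of a
# Micrococcus luteus suspension. Readings and the standard's rate are hypothetical.
import numpy as np

def initial_rate(times_min, absorbance):
    """Slope of absorbance vs. time over the early, roughly linear part of the
    curve (negative, since turbidity falls as the bacteria are lysed)."""
    slope, _intercept = np.polyfit(times_min, absorbance, 1)
    return slope

def lysozyme_conc(sample_rate, standard_rate, standard_conc_ug_per_ml):
    """Concentration estimate assuming the rate is proportional to enzyme amount."""
    return standard_conc_ug_per_ml * (sample_rate / standard_rate)

times = [0.0, 0.5, 1.0, 1.5, 2.0]                # minutes
sample_a450 = [0.80, 0.76, 0.72, 0.69, 0.65]     # hypothetical CM readings
standard_a450 = [0.80, 0.74, 0.68, 0.62, 0.56]   # hypothetical 1 µg/mL standard
rate_sample = initial_rate(times, sample_a450)
rate_standard = initial_rate(times, standard_a450)
print(f"sample = {lysozyme_conc(rate_sample, rate_standard, 1.0):.2f} µg/mL lysozyme equivalents")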
Lysosomal Enzymes
N-Acetyl-β-D-glucosaminidase and β-galactosidase were assayed as described by Bowers et al. (12).
Protein
Protein was measured by the method of Lowry et al. (13) with bovine serum albumin as standard.
Results
Enhanced Macrophage PA Activity after Asbestos Exposure in Vitro
Phagocytosis of digestible and nondigestible materials enhances the PA activity of mouse peritoneal macrophages (4,5). Figure 1 demonstrates that asbestos fibers can increase the PA activity of peritoneal macrophages obtained from endotoxin-injected mice and also PBS-injected mice as a control population. Comparable results were obtained whether the asbestos addition was allowed for 2 hr or for 24 hr prior to the PA measurement. For asbestos doses less than 100 µg, there did not appear to be any gross alterations in cell morphology, but with 100 µg fiber there was some cell rounding and detachment after 3-4 days. The control, unstimulated peritoneal macrophages released PA activity in the presence of the asbestos, but there was a longer lag period than for the endotoxin-induced cells. The fibrinolytic activity was predominantly (>95%) plasminogen-dependent, and hence PA activity was being monitored.
Inhibition of in Vitro-Stimulated Fibrinolytic Activity with Drugs
If the assumption is correct that the generation of excessive PA activity around macrophages interacting with asbestos fibers would be deleterious, then it would be desirable to find drugs which might suppress this local proteolysis. In Table 1, a comparison is made of three steroids for their effect on the asbestos-stimulated plasminogen-dependent fibrinolytic activity. The glucocorticoids, dexamethasone and prednisolone, are effective whereas progesterone is not. The data using latex feeding to cells are included for comparison. This inhibition of the increased fibrinolysis persists for at least 72 hr, the maximum assay time. Table 2 lists the relative efficacies of different steroids. Also included for comparison are colchicine and vinblastine, drugs implicated in the control of secretory processes. Certain glucocorticoids are again effective. It is interesting that the activity profile for the steroids is similar to that found for their suppression of the PA activity of thioglycollate-induced macrophages. Colchicine and vinblastine appeared less effective under these assay conditions (Table 2) but were quite effective if harvest fluid was assayed for the PA activity from asbestos-treated cells (see below). It can be noted that the PA activity of the endotoxin-primed cells themselves was also inhibited. From Tables 1 and 2, it can be observed that there is a comparable effect of the drugs for cells fed latex particles. [Table note: peritoneal cells were cultured for 24 hr on ¹²⁵I-fibrin-coated Linbro wells in the usual manner before the asbestos or latex feeding. After washing, the solid particles were added in DB + 5% FBS and 100 µg/mL STI for 2 hr at 37°C. After washing to remove excess particles, DB + 5% ATFBS was added in the presence or absence of drug. Samples were assayed as in Fig. 1.] The most potent glucocorticoid tested was dexamethasone and, as can be seen from Table 3, it is effective even at doses as low as 10⁻⁹ M. This low dose was also found to be effective on thioglycollate-induced macrophages (9).
Enzyme Secretion after Asbestos Exposure in Vitro
The above experiments were all carried out with cells plated directly on the fibrin substrate. Macrophages obtained from endotoxin-injected mice can also secrete elevated PA levels after exposure to asbestos (or latex) in vitro (Table 4); the increase is predominantly extracellular (data not shown), indicating that enzyme synthesis and secretion were tightly coupled (9). There was no effect of asbestos (or latex) on the secretion either of lysozyme or of two lysosomal enzymes, N-acetyl-β-D-glucosaminidase and β-galactosidase (Table 4). Further evidence in support of the concept of differential secretion control for PA and for lysozyme and lysosomal enzymes is the fact that dexamethasone and colchicine, drugs which have a dramatic effect on the extracellular PA levels, fail to alter the levels of lysozyme and lysosomal enzymes from macrophages when they are treated with asbestos or latex particles.
Discussion
Previous studies (4,10) have indicated that intraperitoneal injection of IARC chrysotile type B into mice induces an inflammatory exudate and that macrophages cultured from such exudates have elevated levels of PA. It was also shown that peritoneal macrophages, when treated with the same asbestos fibers in vitro, are stimulated to produce more PA activity. The possible significance of macrophage PA activity in inflammation has been outlined (9,10). It has been shown above that as little as 0.1 µg of the asbestos fibers can raise the PA activity of endotoxin-primed mouse peritoneal macrophages. It is tempting to speculate that, since the in vitro responses resemble those found with latex particles, it is phagocytosis of the asbestos fibers which results in the heightened protease activity. However, until a quantitative analysis of fiber uptake is made, such a conclusion is unwarranted, particularly as other agents, such as phorbol esters (7,8), concanavalin A (7), lymphokines (6) and colony-stimulating factors (14), which are presumably not acting via a phagocytic pathway, can all stimulate macrophage PA activity.
The enhanced PA activity is suppressed by anti-inflammatory glucocorticoids. It might be profitable if more were known about how glucocorticoids are working in this context; perhaps these drugs are inducing a "macrocortin-like" molecule, which has been proposed to account for the manner in which they suppress cellular prostaglandin production (8,15). It is again worth noting that, for macrophages, the activities of lysozyme and of two lysosomal enzymes appear to be under regulatory control different from that of the PA activity. Studies of carcinogenesis by various forms of asbestos, with and without hydrocarbon carcinogens, have shown that chrysotile can augment considerably the number of malignant tumors produced by benzo[a]pyrene (16,17): these studies and others have suggested that asbestos fibers can function as cocarcinogens (promoters). Interestingly, we have shown that the tumor promoter in skin, namely, 12-O-tetradecanoyl-phorbol-13-acetate (TPA), and asbestos fibers elicit similar acute inflammatory responses in rabbit skin (18) and, as mentioned above, can both increase the PA activity of macrophages. We have suggested that vascular alterations due to an inflammatory reaction to both asbestos fibers and to TPA ought to be considered in any analysis of the development of neoplasms resulting from their action (8,18).
It is a pleasure to recognize the excellent technical assistance of B. Bunkin. I would also like to acknowledge Dr. E. Reich, The Rockefeller University, in whose laboratory the initial observations were made. This work was supported by the National Institutes of Health Grant, ES-02014-01, the American Cancer Society grant and by a grant from the National Health and Medical Research Council of Australia. The author is a Senior Research Fellow of the National Health and Medical Research Council of Australia. | 2014-10-01T00:00:00.000Z | 1983-09-01T00:00:00.000 | {
"year": 1983,
"sha1": "a18c15b88b19421d2a91ef1601e7535fb17d3bf3",
"oa_license": "pd",
"oa_url": "https://doi.org/10.1289/ehp.8351103",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a18c15b88b19421d2a91ef1601e7535fb17d3bf3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
233955011 | pes2o/s2orc | v3-fos-license | Research on the Design of Evaluation System for Coordinated Development of Beijing-Tianjin-Hebei Region
Analyzing the main factors affecting the coordinated development of the Beijing-Tianjin-Hebei region and constructing an evaluation system for that coordinated development play an important supporting role in accelerating its progress. Based on the overall situation of the national economic and social development of the Beijing-Tianjin-Hebei region, and working from the dimensions of science and technology, economy, society and ecology, the paper conducts a qualitative screening of the evaluation indicators of the coordinated development of the Beijing-Tianjin-Hebei region by using a policy and literature review and by combining two categories of indicators, "policy orientation" and "document reference". On this basis, the principal component-correlation analysis method is used to quantitatively screen the evaluation indicators of the coordinated development of the Beijing-Tianjin-Hebei region. The evaluation index system for the coordinated development of the Beijing-Tianjin-Hebei region can then be determined. This evaluation index system helps to evaluate the coordinated development status of the Beijing-Tianjin-Hebei region from a comprehensive perspective, and makes up for the shortcomings of existing results that adopt a single perspective.
I. INTRODUCTION
The Beijing-Tianjin-Hebei region accounts for 2.3% of China's land area, carries 8% of China's population, and creates nearly 11% of China's total economic output. It is an important engine for promoting China's national economic and social development. The issuance and implementation of policy documents such as the "Beijing-Tianjin-Hebei Coordinated Development Plan Outline" and the "Beijing-Tianjin-Hebei National Economic and Social Development Plan During the 13th Five-Year Plan" provide important practical guidance for the design of evaluation indicators for the coordinated development of the Beijing-Tianjin-Hebei region. The "Beijing-Tianjin-Hebei National Economic and Social Development Plan During the 13th Five-Year Plan" clearly formulated 9 key development tasks including innovation and development, transformation and upgrading, and green development. At the same time, a number of special plans for science and technology, industry, ecological and environmental protection, transportation, and education have been implemented to continuously reduce the imbalance in the development of the Beijing-Tianjin-Hebei region. These policies and systems have provided important policy support for accelerating the coordinated development of Beijing-Tianjin-Hebei and narrowing the development gap within the region. In the context of the implementation of the Beijing-Tianjin-Hebei coordinated development strategy, and based on the overall national economic and social development of the Beijing-Tianjin-Hebei region, constructing a relatively complete coordinated development evaluation system that is compatible with the economic and social development goals of the Beijing-Tianjin-Hebei region is conducive to providing theoretical guidance and decision support for the coordinated development practice of Beijing-Tianjin-Hebei.
II. LITERATURE REVIEW
Since the establishment of the Beijing-Tianjin-Hebei regional coordinated development strategy, Beijing-Tianjin-Hebei government management departments and academia have carried out a large number of practical and empirical studies. Aiming at the design of evaluation indicators for the coordinated development of the Beijing-Tianjin-Hebei region, technological innovation, economic development, social governance, and ecological environment have now become key factors to accelerate the development of the Beijing-Tianjin-Hebei region. Existing research focuses on the design of evaluation indicators around the dimensions of science and technology, economy, society, and ecology. Among them, the dimensions of science and technology mainly include indicators of science and technology resource stock, talent reserve, science and technology output, science and technology environment, achievement transformation, and science and technology output performance. [1][2][3][4][5] Economic dimensions mainly include economic scale, level, benefit, structure, quality, environment and other indicators [6][7][8][9][10][11][12][13][14]. Social dimensions mainly include public services, people's lives, medical education, human settlements, social structure, social welfare and other indicators [15][16][17][18][19][20][21][22][23][24][25][26][27]. The ecological dimensions focus on indicators such as ecological endowment, pollution status, and governance results [28][29][30][31][32][33][34].
To sum up, the coordinated development of the Beijing-Tianjin-Hebei region is fundamentally driven by innovation, and its substantive content and key support lie in accelerating the transformation and upgrading of the economy and industry. At the same time, ecological construction provides an important guarantee for the coordinated development of the Beijing-Tianjin-Hebei region. The evaluation of the coordinated development of the Beijing-Tianjin-Hebei region is a systematic evaluation conducted from the multi-dimensional perspectives of science and technology, economy, society, and ecology, which is beneficial to providing theoretical guidance and decision support for the coordinated development practice of the Beijing-Tianjin-Hebei region. Based on the findings of the existing research literature, scholars currently have no unified standard dimension in the design of the evaluation index system for the coordinated development of the Beijing-Tianjin-Hebei region, and they tend to construct corresponding evaluation indicators from the single-dimensional perspective of technology, economy, society, or ecology. Few scholars consider integrating the science and technology, economic, social, ecological and other dimensions into a unified framework system on the basis of evaluation from a single-dimensional perspective, and establishing a more complete coordinated development evaluation index that is compatible with the economic and social development goals of the Beijing-Tianjin-Hebei region. Therefore, based on the overall situation of the national economic and social development of Beijing-Tianjin-Hebei, it is necessary to systematically design the evaluation index system for the coordinated development of the Beijing-Tianjin-Hebei region from the multi-dimensional perspectives of science and technology, economy, society, and ecology. The key constraints on the coordinated development of the Beijing-Tianjin-Hebei region can then be explored to guide that coordinated development.
A. Qualitative screening methods of evaluation indicators
The evaluation of the coordinated development of the Beijing-Tianjin-Hebei region cannot be limited to research from a single perspective such as technological innovation, economic industry, or the ecological environment. In practice, the scientific and technological, economic, social, and ecological dimensions should be integrated into a unified framework system to design the evaluation index system for the coordinated development of the Beijing-Tianjin-Hebei region. To this end, with reference to the "Beijing-Tianjin-Hebei Coordinated Development Plan" and the "Beijing-Tianjin-Hebei National Economic and Social Development Plan During the 13th Five-Year Plan" and other policy documents, the "policy-oriented" indicators for the development of the Beijing-Tianjin-Hebei region are defined. At the same time, referring to the core journal literature with the theme of "Beijing-Tianjin-Hebei Coordinated Development", a literature review method is adopted to determine the "document reference" indicators for the coordinated development of the Beijing-Tianjin-Hebei region. Combining the two types of indicators, "policy orientation" and "document reference", the initial indicators for the evaluation of coordinated development in the science and technology dimension (see "Table I") and in the economic, social and ecological dimensions (see "Table II") of the Beijing-Tianjin-Hebei region can be obtained respectively.
TABLE I. INITIAL INDICATORS OF THE SCIENCE AND TECHNOLOGY DIMENSION FOR THE EVALUATION OF COORDINATED DEVELOPMENT IN THE BEIJING-TIANJIN-HEBEI REGION
B. Quantitative screening methods of evaluation indicators
As the science and technology, economic, social, and ecological dimensions of the Beijing-Tianjin-Hebei region involve many related indicators, the principal component-correlation analysis method is used to screen the indicators for dimensionality reduction. Principal component analysis uses the idea of dimensionality reduction to convert multiple indicators into a few comprehensive indicators (principal components), in which each principal component reflects most of the information of the original variables. The principal component analysis method is used to screen the indicators of the science and technology, economic, social and ecological dimensions; the indicators that contribute strongly to the principal components are screened out, and the retained indicators of science and technology, economy, society and ecology are obtained. At the same time, because of the high correlation between indicators and the resulting overlap of information, which principal component analysis alone cannot resolve, the correlation analysis method is used to supplement the screening of indicators. The correlation analysis method can eliminate indicators with high redundancy by calculating the correlation coefficients between indicators, thereby removing the influence of linear correlation among the indicators. This paper therefore adopts the principal component analysis-correlation analysis method. The initial index screening steps for the evaluation of the coordinated development of the Beijing-Tianjin-Hebei region are as follows, with a sketch of the procedure given after the second step. First, principal component analysis is used to quantitatively pre-screen the initial indicators according to the factor loadings: indicators are retained if their first principal component factor loading is greater than 0.9, or if their largest absolute factor loading falls on the second or third principal component.
Second, the correlation analysis method is used. According to the size of the correlation coefficients, a secondary quantitative screening is performed on the indicators remaining after the principal component analysis and screening. This paper calculates the correlation coefficient between any two indicators at the secondary indicator level in the dimensions of science and technology, economy, society, and ecology in the Beijing-Tianjin-Hebei region, and sets a threshold value M (0 < M < 1) for the indicator correlation coefficient. If the correlation coefficient of two indicators is less than the threshold value M, both indicators are retained; if the correlation coefficient between two indicators is greater than the threshold value M, the indicator with the smaller absolute factor loading, that is, the indicator with less influence on the evaluation result, is deleted. In this paper, the threshold value M = 0.9.
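To make the two-step procedure concrete, the following is a minimal sketch of how it could be implemented. The authors worked with SPSS; this Python version with pandas and scikit-learn is our own illustration, X stands for a hypothetical indicators matrix (rows are region-year observations, columns are indicators), and the 0.9 loading cut-off and the threshold M = 0.9 come from the text.

# Minimal sketch of the principal component-correlation screening described above.
# Assumptions: hypothetical indicator matrix X (a pandas DataFrame), Python rather than SPSS.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def factor_loadings(X: pd.DataFrame, n_components: int = 3) -> pd.DataFrame:
    """Indicator-by-component factor loadings: eigenvectors scaled by sqrt(eigenvalues)."""
    Z = StandardScaler().fit_transform(X)      # standardize indicators before PCA
    pca = PCA(n_components=n_components).fit(Z)
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    return pd.DataFrame(loadings, index=X.columns,
                        columns=[f"PC{i + 1}" for i in range(n_components)])

def screen_indicators(X: pd.DataFrame, M: float = 0.9) -> list:
    loadings = factor_loadings(X)
    # Step 1: pre-screen by factor loadings (first-component loading above 0.9,
    # or the largest absolute loading falling on PC2 or PC3).
    keep = [c for c in X.columns
            if abs(loadings.loc[c, "PC1"]) > 0.9
            or loadings.loc[c].abs().idxmax() in ("PC2", "PC3")]
    # Step 2: for any pair of retained indicators correlated above M, drop the one
    # whose largest absolute loading (contribution to the components) is smaller.
    corr = X[keep].corr().abs()
    strength = loadings.loc[keep].abs().max(axis=1)
    retained = set(keep)
    for i, a in enumerate(keep):
        for b in keep[i + 1:]:
            if a in retained and b in retained and corr.loc[a, b] > M:
                retained.discard(a if strength[a] < strength[b] else b)
    return [c for c in keep if c in retained]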
A. The design of scientific and technological indicators for the evaluation of coordinated development in the Beijing-Tianjin-Hebei region
According to "Table I", KMO and Bartlett tests were performed on the initial science and technology indicators. The test results show that the KMO value of the science and technology indicators is 0.767, which is greater than the minimum value of 0.5, indicating that the initial indicator structure is reasonable and suitable for factor analysis. The Sig value of the Bartlett sphericity test is 0, indicating that there is a correlation between the science and technology indicators, and the principal components can be extracted. The extracted principal components can explain 86.36% of the original indicator information. First, the initial scientific and technological indicators are quantitatively pre-screened according to the size of the factor loadings, and the indicators with a first principal component factor loading greater than 0.9, or with the largest absolute loading on the second or third principal component, are screened out to obtain the principal component screening results of the science and technology indicators. Secondly, a secondary quantitative screening of the scientific and technological indicators is carried out according to the size of the correlation coefficients. The correlation coefficient between any two indicators in the secondary indicator layer retained after the principal component screening is calculated (see "Table V"), and the threshold value M is used for screening. Among them, under the investment in technological innovation, R&D personnel (X1), R&D personnel full-time equivalent (X2), and R&D personnel full-time equivalent in the high-tech industry (X3) are highly correlated, with coefficients greater than the threshold value of 0.9; however, R&D personnel (X1) has the largest factor loading among the three and contributes the most to the principal component, so X1 is retained. Under the environment of technological innovation, internal R&D expenditure in high-tech industries (X5), expenditure for new product development in high-tech industries (X6), and local fiscal science and technology expenditure (X8) are highly correlated, with coefficients greater than the threshold value of 0.9; however, local fiscal science and technology expenditure (X8) has the largest factor loading among the three and contributes the most to the principal component, so X8 is retained. Under the output of technological innovation, the number of patent applications granted (X9), the number of patent applications accepted (X10), and the number of invention patent applications accepted (X11) are highly correlated, with coefficients greater than the threshold value of 0.9; however, the number of patent applications accepted (X10) has the largest factor loading among the three and contributes the most to the principal component, so X10 is retained. Under the effect of scientific and technological innovation, the correlation coefficients among patent ownership per 10,000 population (X12), technology market turnover (X13), and high-tech industry new product sales revenue (X14) are less than the threshold value of 0.9, so X12, X13 and X14 are retained.
B. Design of economic indicators for the evaluation of coordinated development in the Beijing-Tianjin-Hebei region
According to " Table I", economic indicators mainly include economic scale indicator, economic structure indicator and economic quality indicator. It is required to select per capita GDP (Y1), GDP growth rate (Y2), local fiscal general budget revenue (Y3), total social fixed asset investment (Y4), total retail sales of consumer goods (Y5), total import and export volume of the place where the business entity is located (Y6), the added value of the secondary industry (Y7) and industrial added value (Y8) are indicators of economic scale; the ratio of output value of primary industry to GDP (Y9), the ratio of output value of secondary industry to GDP (Y10), and the ratio of output value of tertiary industry to GDP (Y11) are indicators of economic structure; the overall labor productivity (Y12) and the urbanization rate of permanent population (Y13) are indicators of economic quality. According to the significance of each indicator, the principal component analysis method is used to make quantitative prescreening of economic indicators. It is required to use SPSS20 to perform KMO and Bartlett tests on 13 economic indicators in Beijing, Tianjin, Hebei, and Beijing-Tianjin-Hebei from 2009 to 2017 (see "Table VII"). The test results show that the KMO value of economic indicators is 0.683, which is greater than the minimum value 0.5, indicating that the initial indicator structure is reasonable and suitable for factor analysis; the Sig value of Bartlett's sphericity test is 0, indicating that there is a correlation between economic indicators, and the principal component can be extracted. The extracted principal components can explain 91.5% of the original indicator information. According to the size of the factor loading, it is required to carry out a secondary quantitative screening of economic development indicators, and screen out the indicators with the first principal component factor loading greater than 0.9 and the second or third principal component factor loading with the largest absolute value. Then, the principal component screening results of the economic indicators are obtained (see "Table VIII"). According to the size of the correlation coefficient, the second quantitative screening of economic indicators is carried out. By calculating the correlation coefficient between any two indicators in the secondary index layer retained after quantitative pre-screening (see "Table IX"), the threshold value M is used for screening. Among them, under the economic scale, local fiscal general budget revenue (Y3), and total retail sales of social consumer goods (Y5) are highly correlated, which is greater than the threshold value 0.9, but the factor loading of total retail sales of social consumer goods (Y5) is the largest, which contributes the most to the principal component. Therefore, Y5 is retained. The correlation coefficient among GDP growth rate (Y2), total retail sales of consumer goods (Y5) and industrial added value (Y8) is less than the threshold value 0.9, so all of them are retained. Under the economic structure and economic quality, the remaining indicators are the proportion of tertiary industry output value to GDP (Y11) and total labor productivity (Y12), so they are retained. 
By using principal component analysis and correlation analysis, GDP growth rate (Y2), total retail sales of consumer goods (Y5), industrial added value (Y8), the proportion of tertiary industry output value to GDP (Y11), and overall labor productivity (Y12) are finally determined as economic indicators (see "Table X").
C. The design of social indicators for the evaluation of coordinated development in the Beijing-Tianjin-Hebei region
According to " Table I indicator, the principal component analysis method is used to make the quantitative screening of social indicators. In order to make the sample structure reasonable, the 21 indicators were divided into two parts for testing. First, it is required to perform KMO and Bartlett tests on the first 16 livelihood and welfare indicators and the last 5 social security indicators in Beijing, Tianjin, Hebei, and Beijing-Tianjin-Hebei from 2009 to 2017 (see "Table XI"). The test results show that the KMO value of the people's livelihood and welfare indicator is 0.808, and that of the social security is 0.742, both of which are greater than the minimum value 0.5, indicating that the initial indicator structure is reasonable and suitable for factor analysis; the Sig value of the Bartlett sphericity test is 0, indicating that there is a correlation between social indicators, and the principal component can be extracted. The extracted principal components can explain 89.90% and 98.09% of the original indicator information respectively. First, it is required to perform the first quantitative screening of social indicators according to the size of the factor loading, and screen out the indicators with the principal component factor loading greater than 0.9 and the second or third principal component factor loading with the largest absolute value, so as to obtain the principal component screening results of the social indicators (see "Table XII"). According to the size of the correlation coefficient, the second quantitative screening of social indicators is carried out. By calculating the correlation coefficient (see "Table XIII") in the secondary index layer retained
after quantitative pre-screening, the threshold value M is used for screening. Among them, under the improvement of people's livelihood, the correlation coefficient between the urban registered unemployment rate (Z1) and the RMB deposit balance of urban and rural residents at year end (Z2) is less than the threshold value 0.9, so both Z1 and Z2 are retained; under the governance of education, the correlation coefficients among the enrollment of ordinary colleges and universities (Z4), the ratio of students to teachers in colleges and universities (Z5), and local fiscal expenditure on education (Z6) are lower than the threshold value 0.9, so Z4, Z5, and Z6 are retained; under the scale of post and telecommunications, year-end mobile phone users (Z9) are retained; under the traffic scale, freight volume (Z10), highway mileage (Z12), and railway operating mileage (Z13) are highly correlated, with correlation coefficients greater than the threshold value 0.9, but the factor loading of railway operating mileage (Z13) is the largest and contributes the most to the principal component, so Z13 is retained; under the medical improvement, the number of beds in health institutions (Z14), the number of medical institutions (Z15), and the number of health personnel (Z16) are highly correlated, with correlation coefficients greater than the threshold value of 0.9, but the factor loading of the number of beds in medical and health institutions (Z14) is the largest and contributes the most to the principal component, so Z14 is retained. Under the scale of insurance, the numbers of urban employees participating in endowment insurance (Z17), unemployment insurance (Z18), urban employees' basic medical insurance (Z19), industrial injury insurance (Z20), and maternity insurance (Z21) are highly correlated, with correlation coefficients higher than the threshold value of 0.9; however, compared with the other year-end insured numbers, the number of urban employees participating in basic medical insurance at the end of the year (Z19) is the most frequently used, so Z19 is retained. Through principal component analysis and correlation analysis methods, the urban registered unemployment rate (Z1), the RMB deposit balance of urban and rural residents at year end (Z2), the number of students enrolled in ordinary colleges and universities (Z4), the student-teacher ratio in ordinary colleges and universities (Z5), local fiscal expenditure on education (Z6), mobile phone users at the end of the year (Z9), railway operating mileage (Z13), the number of beds in medical and health institutions (Z14), and the number of participants in the basic medical insurance for urban employees at the end of the year (Z19) are finally determined as social development indicators (see "Table XIV").
D. Ecological indicator design of the evaluation for coordinated development in Beijing-Tianjin-Hebei region
According to " Table I", ecological indicators mainly include resource consumption, ecological protection and environmental pollution. Energy consumption (V1), energy consumption per unit GDP (V2) and total water consumption (V3) are selected as resource consumption indicators; forest coverage rate (V4), forest stock volume (V5), local fiscal environmental protection expenditure (V6) and investment in wastewater treatment project completed (V7), investment in waste water treatment project (V8) and investment completed in waste gas treatment project (V9) are ecological protection indicators; household garbage clearance volume (V10), wastewater discharge (V11), COD demand (v12), ammonia nitrogen emission (V13) and sulfur dioxide emission (V14) are environmental pollution indicators. The principal component analysis method is used to make the quantitative screening of ecological indicators. Firstly, it is required to perform KMO and Bartlett test on 14 ecological indicators of Beijing, Tianjin, Hebei and Beijing-Tianjin-Hebei region from 2009 to 2017 respectively (see "Table XV"). The test results show that the KMO value of the ecological indicators is 0.742, which is greater than the minimum value of 0.5, indicating that the initial indicator structure is reasonable and suitable for factor analysis; the Sig value of the Bartlett sphericity test is 0, indicating that there is a correlation between ecological indicators, and the principal component can be extracted. The extracted principal components can explain 89.60% of the original indicator information.
First, it is required to carry out a quantitative pre-screening of ecological indicators according to the size of the factor loading, and screen out the indicators with the first principal component factor loading greater than 0.9 and the second or third principal component factor loading with the largest absolute value, so as to obtain the principal component screening results of the ecological indicators (see "Table XVI"). Secondly, according to the size of the correlation coefficient, the secondary quantitative screening of ecological indicators is carried out. By calculating the correlation coefficient between any two indicators in the secondary indicator layer retained after quantitative pre-screening (see "Table XVII"), the threshold value M is used for screening. Among them, under resource consumption, energy consumption (V1) is highly correlated with total water consumption (V3), with a correlation coefficient greater than the threshold value of 0.9, but the factor loading of energy consumption (V1) is the largest, so energy consumption (V1) is retained. Under ecological protection, the correlation coefficients among forest coverage rate (V4), forest stock volume (V5), and local fiscal expenditure on environmental protection (V6) are less than the threshold value of 0.9, so they are all retained. Under environmental pollution, the correlation coefficients among total wastewater discharge (V11), COD emission (V12), ammonia nitrogen emission (V13), and sulfur dioxide emission (V14) are less than the threshold value of 0.9, so they are all retained. Through principal component analysis and correlation analysis, energy consumption (V1), forest coverage rate (V4), forest stock volume (V5), local fiscal environmental protection expenditure (V6), total wastewater discharge (V11), COD emission (V12), ammonia nitrogen emission (V13), and sulfur dioxide emission (V14) are finally determined as ecological indicators (see "Table XVIII").
V. CONCLUSION
Based on relevant literature, practical empirical exploration, and the overall situation of national economic and social development in the Beijing-Tianjin-Hebei region, and from the multi-dimensional perspectives of science and technology, economy, society, and ecology, the coordinated development evaluation indicator system of the Beijing-Tianjin-Hebei region is systematically designed. This indicator system can comprehensively reflect the differences in the development of different dimensions in the Beijing-Tianjin-Hebei region in different periods, compensate for the limitation of existing studies that analyze each dimension separately, and better grasp the comprehensive problems in the development of the Beijing-Tianjin-Hebei region. | 2021-05-08T00:03:22.798Z | 2021-02-21T00:00:00.000 | {
"year": 2021,
"sha1": "0c073f1ee1a5f13979ab52c67aade39800c010b8",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125952964.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5b6f53403f69716c2ff06ab65781f358848e7cba",
"s2fieldsofstudy": [
"Economics",
"Geography",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
258547877 | pes2o/s2orc | v3-fos-license | Characteristics and research status among clinical trials in cardio‐oncology by bibliometric and visualized analysis
Abstract Background We aim to establish the characteristics of published cardio‐oncology clinical trial research by bibliometric analysis and to discuss the prospects and challenges facing the development of cardio‐oncology. Methods Data related to clinical trials in cardio‐oncology from 1990 to 2022 were searched in the Web of Science Core Collection. CiteSpace was used to perform co‐citation analysis of authors, countries (regions) and institutions, journals and cited journals, cited authors and cited literature, and keywords. Results Of the 607 clinical trial studies, the number of papers published per year has increased over time. The regions with the greatest influence were North America (especially the United States) and Europe. Multicenter research has always been the focus of cardio‐oncology research, but cross‐regional cooperation was still lacking. Myocardial toxicity caused by anthracyclines received the earliest attention and has been studied for the longest time. Meanwhile, the efficacy and cardiotoxicity of new anticancer drugs have continuously come into focus, but at a slow pace. Few studies on myocardial toxicity were related to the treatment of tumors other than breast cancer. Risk factors, heart disease, adverse outcomes, follow‐up, and intervention protection were the major hotspots revealed by co‐citation clustering. Conclusions There is great potential for the development of clinical trials in cardio‐oncology, especially in multicenter cooperation across different regions. Expansion of tumor types, myocardial toxicity of different drugs, and effective interventions in the research direction and design of clinical trials are necessary.
| INTRODUCTION
With the continuous improvement of cancer treatment technology, patient survival times are improving and cancer is increasingly becoming a chronic disease. [1][2][3] Moreover, the complications of cancer have become the main factor affecting survival time and quality of life in patients, 4,5 especially the cardiovascular toxicity related to cancer treatment. Epidemiological studies reveal that cancer patients with cardiovascular diseases (CVDs) have a 3.78 times higher risk of all-cause mortality than patients without CVDs, and CVDs killed 8.8% of cancer patients prematurely during long-term survival. 6 Some researchers have published reviews of the literature on cardiotoxicity due to cancer treatment, [7][8][9] summarizing its incidence, clinical manifestations, monitoring, and/or protection. Besides, some authoritative organizations or institutions, such as the American Society of Clinical Oncology (ASCO), the Heart Failure Association (HFA), and the European Society of Cardiology (ESC), [10][11][12][13] have released clinical practice guidelines about cardio-oncology. Recommendations on imaging surveillance, treatment, and intervention remain a major challenge for oncologists because of the lack of scientific evidence from randomized clinical trials. Clinical trials provide more powerful evidence for diagnosis and treatment, which is important in cardio-oncology. However, the development of clinical trial research remains unclear and summaries of cardio-oncology are still lacking.
Bibliometric analysis is a mathematically and statistically based method for analyzing a large number of heterogeneous studies. 14 The combination of visual processing tools, such as CiteSpace, helps to assemble data on a field's contribution from multiple perspectives, including different countries/regions, institutions, journals, authors, cocitation networks, detailed research trends, or hotspots. 15 It is a way to accurately capture and integrate data from different sources, visualizing the connections between complex data through knowledge maps. The knowledge maps obtained through CiteSpace allow readers to quickly know the major contributing countries, institutions and individuals and to have a clear understanding of the hot spots of the research direction.
The present study aims to provide a general understanding of the developments in clinical trials of cancer cardiotoxicity through a literature search and screening to analyze and summarize clinical trials of cardiotoxicity in oncology published since 1990. Interpreting and synthesizing these articles can help in forecasting possible trends and provide a reference for future researchers, especially those interested in but unfamiliar with the field.
| Database sources and strategies for searching
A bibliographic search of the Web of Science Core Collection (WoSCC) database was completed on April 28, 2022, to minimize bias due to database updates. The following search strategy was used: cardiotoxic* or cardiotoxicity (Topic) and cancer* or neoplasm* (Topic) and clinical trial (All Fields) and article (Document Type) and English (Language). The timespan was from 01-01-1990 to 28-04-2022 (publication date). After the preliminary data search, two researchers individually filtered all the manuscripts to ensure their relevance to the topic of this study. Case reports, papers that were not clinical trials, and papers that were reviews or meta-analyses were excluded.
| Bibliometric analysis
The search results were analyzed using Web of Science to extract histograms showing the disciplinary distribution of publications. All records and citations for these publications were then exported from the WoSCC database, preserved in .txt format, and loaded into CiteSpace software V6.1.R2 with the following configuration: time slices from January 1990 to April 2022, with each slice covering 1 year. Selection in each slice used a modified g-index, $g^{2} \leq k \sum_{i \leq g} c_{i}$, $k \in \mathbb{Z}^{+}$, with $k = 25$. The procedure operated as a first step in data cleaning: if there were no replications, the original data were used directly; otherwise, the duplicates were removed prior to subsequent analysis. For co-author network analysis, "Country," "Institution," and "Author" were selected one at a time in the Node Type parameter section, leaving the rest of the settings as defaults. In the co-citation analysis, once the data were imported into CiteSpace, nodes were selected as "Reference," "Cited Journal," and "Cited Author", respectively. For keyword analysis, the relevant parameters were selected as follows: "Keyword" as the Node Type, "Cosine" to calculate the strength of the relationship, and the pruning options "Pathfinder" and "Pruning sliced networks" to simplify the network and highlight its important structural features. In the keyword burst detection, "Keyword" was selected as the node type, "Cosine" was used to calculate the burst intensity, and the top 40 keywords with the strongest burst intensity were identified. The results were presented as visual graphs of nodes; the more frequently an element appeared or was referenced, the larger the size of its node. Word similarities in the literature were categorized and scored by a specific algorithm, and the highest-scoring word in each cluster was picked as its representative, the tag of the cluster. Full details were sorted out and displayed in Microsoft Word 2019.
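For readers unfamiliar with the selection criterion, the short sketch below (Python, illustrative only, not part of the CiteSpace workflow) computes the largest g satisfying the modified g-index condition for a list of citation counts; the scaling factor k = 25 follows the configuration above.

```python
def modified_g_index(citations, k=25):
    """Return the largest g such that g^2 <= k * (sum of the top-g citation counts).

    `citations` is an iterable of citation counts; they are sorted in
    descending order before applying the criterion."""
    counts = sorted(citations, reverse=True)
    g, running_sum = 0, 0
    for c in counts:
        running_sum += c
        if (g + 1) ** 2 <= k * running_sum:
            g += 1
        else:
            break
    return g

# Example: nodes in one time slice with citation counts [12, 9, 7, 3, 1, 0, 0]
# print(modified_g_index([12, 9, 7, 3, 1, 0, 0]))  # number of items selected in the slice
```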
| Analysis of publications
Altogether 953 publications matched the inclusion criteria under our search strategy. A total of 345 articles were excluded based on the exclusion criteria, covering 15 case reports, 42 papers that were not clinical trials, and 288 papers that were reviews or meta-analyses. After removing duplicate entries, 607 clinical trial papers were finally included in the analysis. The annual number of publications and the disciplinary distribution of cardiotoxicity clinical trials in oncology are shown in Figure 1. The number of clinical trials published each year remained modest, but the overall trend was gradually increasing. Among the clinical studies on tumor cardiotoxicity, the most were published in oncology (75.6%, 459/607), and the others included cardiac and cardiovascular systems (11.2%, 68/607), pharmacology and pharmacy (10.4%, 63/607), hematology (4.0%, 25/607), radiology, nuclear medicine and medical imaging (3.8%, 23/607), general internal medicine (3.0%, 18/607), obstetrics and gynecology (2.8%, 17/607), research and experimental medicine (2.5%, 15/607), and pediatrics (1.6%, 10/607).
| Co-authorship network analysis
A country analysis of the distribution of these publications indicated that the 10 countries with the most collaborative papers were as follows: the United States (217, 35.7%), Italy (112, 18.5%), Canada (59, 9.7%), England (56, 9.2%), China (48, 7.9%), Germany (48, 7.9%), France (42, 6.9%), Spain (37, 6.1%), the Netherlands (37, 6.1%), and Belgium (34, 5.6%). Their relationship is displayed in Figure 2A. The co-authorship between institutions is shown in Figure 2B. The Dana-Farber Cancer Institute was the most published research institution, closely followed by Duke University and the Memorial Sloan Kettering Cancer Center. The details of the articles published by country and institution are shown in Table S1. Many authors were engaged in clinical studies of cancer cardiotoxicity, and Figure 3A shows the authors who had published more than two studies. The top three authors were Thomas Suter, Santoro, and Azarnia.
A co-citation was defined as two documents being cited together by another document. 16 Co-cited authors and co-cited journals were then retrieved from the co-cited documents. There was a total of 19,392 references in the 607 articles, among which 1057 references were co-cited references, with a total number of citations of 2417. The co-cited references map shows the relationship between these documents (Figure 3B). The Journal of Clinical Oncology, New England Journal of Medicine, Circulation, and Annals of Oncology published most of the top 10 articles in terms of total citations (Table 1). Furthermore, these journals were among the top ten co-cited journals, and the other six of the ten were Cancer, Lancet, European Journal of Cancer, British Journal of Cancer, Cancer Research, and Seminars in Oncology. Slamon wrote the article that was cited most frequently, and he, along with Romond E and Cardinale D, made up the top three authors with the highest citations. Full details of the top 10 co-cited journals and authors can be found in Table S2.
FIGURE 2 National networks and institutional networks of co-authors for clinical trials of cardiotoxicity in oncology from 1990 to 2022 accessed via CiteSpace. (A) Collaborative country network visualization map (based on co-author countries) revealing the impact of each node; (B) Visualization of co-authored institutional networks by degree of citation; countries and institutions with more than five publications can be seen in the map.
| Keyword analysis
As a highly condensed version of the content of a paper, keywords can, to a certain extent, summarize its topic in a simple and direct way. A keyword co-occurrence network is an analysis method based on text content. The top 10 keywords used were as follows: cardiotoxicity, chemotherapy, breast cancer, therapy, doxorubicin, trial, paclitaxel, heart failure, randomized trial, and adjuvant chemotherapy. The circles in Figure 4A represent keywords, and the larger the circle, the more frequently the keyword appears. When the number of keywords is too large, it is rather difficult to identify the research topic to which they belong, and cluster analysis can help to solve this problem (Figure 4B). Figure 4B shows the top 10 keyword clusters based on the log-likelihood ratio (LLR) algorithm. They demonstrate many areas of concern for cancer cardiotoxicity research, including patients (#3 metastatic breast cancer and #8 elderly), treatment-related topics (#1 trastuzumab, #2 radiotherapy, #5 monoclonal antibody, #7 pertuzumab, and #9 Herceptin), and complications (#6 doxorubicin cardiotoxicity and #0 heart failure). The burst detection results revealed articles that have attracted the attention of fellow scientists. Citation bursts record the duration and intensity of each burst state. Application of keyword burst detection allows quick insight into future research trends (Figure 5). A red line indicates a sudden increase in the use of a keyword during the relevant period; by contrast, a blue line indicates relative unpopularity.
FIGURE 3 Networks of co-authorship and literature co-citations. (A) The links between authors represent collaboration (co-authorship). Node size is proportional to the number of the author's co-authored publications. The figure shows author labels that had published more than two studies. (B) A network of reference co-citations. Round nodes indicate citations; the line between nodes suggests the frequency with which two references were cited simultaneously.
The keyword burst detection identified adriamycin as the top hot topic during 1990-2012. Moreover, heart failure and follow-up have been the top two focal points from 2013 to 2022. Within the top 40 keywords with the strongest burst strength, chemotherapeutic drugs accounted for a large portion, including adriamycin, epirubicin, dexrazoxane, conventional doxorubicin, monoclonal antibody, liposomal doxorubicin, paclitaxel, trial comparing doxorubicin, adjuvant trastuzumab, and trastuzumab. Some diseases became the focus of researchers several times before 2012, such as advanced breast cancer, carcinoma, breast carcinoma, Kaposi's sarcoma, and metastatic breast cancer, but few specific diseases have become hot spots since 2012. Reduced cardiotoxicity, heart failure, induced cardiotoxicity, heart disease, ventricular ejection fraction, and cardiac dysfunction, which are closely related to heart disease, have gained the attention of scientists since 2006.
| Generation information
We conducted a bibliometric analysis of the 607 cardio-oncology clinical trials identified in the WoSCC database from 1990 to 2022. This paper summarizes the cooperative relationships and research topics in the field of cardio-oncology, which provides value for future multi-regional cooperation and clinical trials.
First, our literature analysis shows that this is a topic of great interest, with researchers from all over the world working in this direction. Since the starting point of our literature search, the number of clinical studies on tumor cardiotoxicity published each year has generally increased.
Although it is increasing every year, the absolute quantity is not very large, which also reflects certain difficulties in the implementation of research on this topic. In addition, 16 articles have been published to date in 2022; while this count does not reflect the full-year output, it is expected to increase over the remainder of 2022.
Oncology is the discipline that published the most clinical studies in oncology cardiology, accounting for more than three-quarters of the literature published in this field. The study of cancer disease itself remains the focus, with other disciplines such as cardiovascular systems and pharmacology reflecting a focus on symptoms and oncology drugs. The proportion of imaging medicine and nuclear medicine indicates that the detection method of cardiotoxicity of tumors has also received certain attention.
The co-author map of countries and regions shows that the links between countries are strong, with the United States at the center of cooperation. Of the top 10 countries, only China is from Asia; the others are European and North American countries. Combined with the characteristics of ethnic distribution, more studies are needed in various countries to obtain the characteristics of cardio-oncology in Asian populations. Consistent with the country-level distribution, eight of the 10 (80%) most productive institutions are from the United States, with the remaining two belonging to the United Kingdom and Norway. In addition, four of the 10 institutions are cancer centers, four are general hospitals, and the remaining two are universities. Cancer centers and large hospitals have an advantage in conducting this kind of research because clinical research in cardio-oncology requires clinical patients.
In the co-authorship map, there is very little collaboration between the authors, and their relationships are scattered. Santoro, 27,28 Thomas Suter, [29][30][31][32][33] and Azarnia [34][35][36] are the top three most productive authors. They are all experts in the field of oncology and have conducted multiple studies and published many high-quality articles in their respective areas of expertise. However, they are not frequently cited by other authors. Slamon, from the Jonsson Comprehensive Cancer Center, University of California-Los Angeles, is the author with the highest citations and published the most highly cited article. 17 Obviously, this author has made remarkable contributions to the chemotherapy treatment of breast cancer.
The most cited articles in a given period may be milestones in the development of that period. Among the top 10 cited articles, eight of the studies involved breast cancer patients, 17,19-22,24-26 and all were published in high-quality journals, which means that this research area has received significant attention and has made significant progress. An analysis of the top 10 favored journals demonstrates that 60% (6/10) of the journals have an impact factor greater than 10. These journals differ greatly in their impact factors, showing a polarization, which indicates that research in cardio-oncology is full of opportunities and challenges. In summary, these papers are of great significance to the study of cardio-oncology.
FIGURE 5 Keywords with the strongest article burst intensity on clinical trials of cardiotoxicity in oncology between 1990 and 2022. Keywords highlighted in red indicate a spike in the frequency of use of the keyword over the period; blue represents a period of comparatively decreasing popularity; Year means the first year when the keyword was detected in our documents; Strength of each keyword was calculated by the software CiteSpace, and the higher the number, the hotter the topic; Begin and end year represent the beginning and end of a popular research discipline.
| Keywords
Keywords reflect on the central theme and essential content of the article and can be reasonable characteristics of topical research. Furthermore, keyword burst detection is recognized as an indication of cutting edge or emerging trends in research.
The analysis of the keyword map shows two research focuses in cardio-oncology: the study population and the etiology of cardiotoxicity. Breast cancer patients accounted for the majority of the study subjects, 17,[19][20][21][22][24][25][26]33 and different diseases involve different populations. Most pediatric cancer patients are children with leukemia, 37-40 while the elderly are mostly patients with advanced breast cancer, metastatic breast cancer, and hematological malignancies. [41][42][43][44][45] This is consistent with the epidemiology of different malignancies and their treatment modalities. 1 Breast cancer is the most prevalent cancer in women, and the treatment modalities of chemotherapy and radiotherapy (especially in the left breast) are potentially harmful to the heart. Leukemia is the most common type of cancer in children and is often treated with anthracyclines, the cardiotoxicity of which has received widespread attention. [46][47][48][49] Paclitaxel and radiotherapy are also closely linked to cardiac toxicity: paclitaxel is used in chemotherapy regimens for a variety of malignancies, [50][51][52][53][54][55] and radiotherapy for chest tumors also increases the risk of cardiotoxicity. [34][35][36][56][57][58][59] The efficacy and side effects of a drug naturally become a new focus of research when it is developed and put into use. The adjuvant therapy of trastuzumab in HER-2-positive breast cancer patients has attracted attention since the beginning of the century. 19,22,25,29,51 The observation of the therapeutic effect of trastuzumab in breast cancer and the development of trastuzumab analogs have been a concern of scientists. [29][30][31][32][33]60 Furthermore, trastuzumab has shown good results in oncology treatment and is also used for tumors other than breast cancer. [61][62][63][64][65][66] Monoclonal antibodies representing HER-2-targeted therapy are a focus of cardio-oncology research, and trastuzumab, pertuzumab, and Herceptin are the representative drugs. Anthracycline chemotherapy drugs, represented by doxorubicin, also account for a considerable proportion of cardio-oncology research. 67,68 Cardiotoxicity caused by doxorubicin has been widely studied by scientists since as early as the 1990s. 69 Cardiotoxicity caused by anthracyclines has been an ongoing concern in pediatrics. 46,70,71 However, research on the reduction of cardiotoxicity is unsatisfactory, especially the use of dexrazoxane in children.
According to the citation burst analysis, the prognosis of tumor treatment has become a research hotspot since 2006, when safety and reduced cardiotoxicity became the keywords with the strongest citation bursts. The focus of study has shifted from cancer treatment itself to the changes in cardiac function caused by cancer treatment. In recent years, heart failure and follow-up have become the focus of clinical research. [72][73][74][75] Follow-up time varies considerably between clinical trials, ranging from 18 months to 10 years. Heart failure is noted by many clinical practitioners as a serious adverse cardiovascular event, but progression to heart failure is often irreversible. Subclinical myocardial injury at an early stage may be more important for improving long-term prognosis, and some earlier changes need to be attended to, such as left ventricular dysfunction and myocardial injury. The monitoring of cancer cardiotoxicity has gradually progressed from the initial exploration of detection methods to the early assessment of cardiotoxicity, 76-78 and more clinical trials on subclinical cardiotoxicity testing are required. Echocardiography and ventricular ejection fraction are current research hotspots and will remain so in the future. 79 In terms of methods, ultrasound and MRI will play a greater role in the study of cardio-oncology.
| Strengths and limitations
As far as we know, this is the first bibliometric analysis to provide a description of milestones and trends in clinical trials in cardio-oncology. This analysis has the merit of objectivity, as it illustrates areas of development in the field and spotlights areas that have not been completely explored. Yet, there are some limitations to this study. First, we only retrieved data from the WoSCC database and did not include other databases, such as PubMed, so some articles were inevitably omitted. Second, due to the requirements of CiteSpace, our analysis only covers articles in the WoS database with English keywords or abstracts. Finally, bibliometric research is a quantitative analysis of academic publications that can only be performed on cited and indexed journals but not on unindexed journals, papers, books, or government reports. In future research, we will use methodological evaluations to gain a deeper perspective on this topic.
| CONCLUSION
In conclusion, the findings of this bibliometric study provide insights into trends in the progress of clinical trials in cardio-oncology over the past 30 years. Numerous clinical trials in cardio-oncology have involved breast cancer patients; this topic has continued to gain traction and has been the focus of scientists' attention in recent years, but the emphasis has gradually shifted from the initial focus on treatment drugs to prognosis. Heart failure has become a major concern in recent years, as has follow-up. Cardiac function testing will become a new direction for scientists conducting clinical trials in cardio-oncology.
CONFLICT OF INTEREST STATEMENT
There are no conflicts of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2023-05-08T06:16:54.725Z | 2023-05-06T00:00:00.000 | {
"year": 2023,
"sha1": "85c27d8652b4791227aa02c6975bcdc43122f7b6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/cam4.6045",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "7f84b3d91d8e9edbbd7a5c106005a40aab42f502",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270361886 | pes2o/s2orc | v3-fos-license | A Synoptic View of Mantle Plume Shapes Enabled by Virtual Reality
The shapes of mantle plumes are sensitive to mantle viscosity, density structure, and flow patterns. Increasingly, global tomographic models reveal broad plume conduits in the lower mantle and highly tilting conduits in the mid and upper mantle. Previous studies mostly relied on 2D slices to analyze plume shapes, but fully investigating the complexity of 3D plume structures requires more effective visualization methods. Here, we use immersive headset‐based virtual reality (VR) to visualize the full‐waveform global tomographic models SEMUCB‐WM1 and GLAD‐M25. We develop criteria for the identification of plume conduits based on the relationship between the plume excess temperature and the VS anomaly (δVS). We trace 20 major plume conduits, measure the offsets of the conduits in azimuth and distance with respect to the hotspots, calculate the tilt angle, and evaluate the δVS along all traced conduits. We compare our traced conduits with the conduits predicted by global mantle convection models and vertical conduits. The wavespeed variations along conduits traced from each tomographic model are slower than modeled or vertical conduits, regardless of which tomographic model they are evaluated in. The shapes of traced conduits tend to differ greatly from modeled conduits. Plume ponding and the emergence of secondary plumes, which could result from a combination of compositional variations, phase transitions, small‐scale convection, and variations in viscosity, can contribute to the complex observed plume shapes. The variation of δVS along the traced conduits and complex plume shapes suggest a thermochemical origin of many plumes.
Introduction
Deep mantle plumes originating from the Core Mantle Boundary (CMB) are thought to have a broad head, which generates Large Igneous Provinces (LIPs), and a narrower tail, which forms long-lived hotspots (e.g., Richards et al., 1989). The geochemical diversity of hotspot lavas, which are also known as Ocean Island Basalts (OIBs), reflects the entrainment and transport of different mantle materials by ascending plumes. Hence, understanding the shapes of mantle plumes is important for linking the rock record with deep mantle structures, including the Large Low Shear Velocity Provinces (LLSVPs) and Ultra Low Velocity Zones (ULVZs). Plume shape is influenced by the global pattern of mantle circulation as well as the intrinsic buoyancy and viscosity variations within ascending plumes and the ambient mantle. Seismic tomography is the only geophysical method that currently resolves plume-scale features at all mantle depths. Tomographic models shape our understanding of mantle plumes and naturally become a constraint on numerical models that aim to understand their structure and evolution. These comparisons bridge our theoretical models to tomographic images of mantle plumes and help advance our understanding of the physical and chemical properties of mantle plumes. Here we analyze the shapes of mantle plumes using immersive 3D visualization based on two recent global tomographic models and consider the implications of plume shape for the pattern of global mantle circulation and the variation of mantle viscosity.
Mantle plumes that rise to the surface have previously been described conceptually as primary and secondary plumes (Courtillot et al., 2003) on the basis of their buoyancy fluxes, upper mantle seismic signature, and the isotopic variations in OIBs. Primary plumes rise directly from the CMB, whereas secondary plumes rise from the superswells or broad primary plumes that pond below the upper mantle. State-of-the-art global tomographic models show patterns of slow shear velocity (Vs) resembling both types of plumes, although the plume shapes revealed by tomographic models have more complexities than what is proposed by the schematic plume model of Courtillot et al. (2003).
There has been considerable debate about whether hotspots are preferentially located at the edges of the Pacific and African LLSVPs (Torsvik et al., 2006; Steinberger & Torsvik, 2012) or whether they are associated with both the LLSVP edges and interiors (Austermann et al., 2014; Davies et al., 2015; Doubrovine et al., 2016). These two hypotheses have different geodynamic implications: whether plumes rise from the edges of the pile-like LLSVPs (Tan et al., 2011; Hassan et al., 2015), or the LLSVPs are clusters of plumes (Davaille & Romanowicz, 2020).
Two complementary approaches have been taken to understand the evolution of mantle plumes.First, some numerical models of thermal and thermochemical plume ascent focus on idealized plumes and incorporate a high degree of physical realism at the expense of describing the geologic context of specific plumes within Earth's mantle (Dannberg & Sobolev, 2015;H. Liu & Leng, 2020).A second class of numerical models focuses on the influence of the global mantle flow associated with Earth's tectonic history on plume conduits at the expense of a complete treatment of mantle rheology, phase transitions, and plume buoyancy (e.g., Steinberger & O'Connell, 1998).For the first class of studies, the comparison between the shape of the observed and the modeled plume conduits is only qualitative because idealized models do not attempt to reproduce the detailed dynamics of specific plumes.The second class of models does make testable predictions of plume shape that can be qualitatively and quantitatively compared with plumes resolved in tomographic models but only in terms of the wavespeed variations (Boschi et al., 2007).
Plume shapes depend on both the inherent properties of a plume and the surrounding mantle conditions, so they provide information about the composition and dynamics of plume and mantle.For example, the amount and direction of shear of plume conduits reflect the large-scale mantle flow pattern.Changes in the conduit radius could indicate the viscosity variations across the mantle.The stagnation of plumes helps to reveal the influence of the pressure-induced phase transitions on mantle convection.It is crucial to measure the shapes of plume conduits quantitatively to make more appropriate and meaningful connections between numerical models and tomographic observations.Measuring plume shapes from tomographic models requires effective visualization of what are three-dimensional (3D) datasets, but most approaches to their visualization have involved two-dimensional (2D) slicing or the rendering of isosurfaces (surfaces defined by a constant value) on a 2D medium such as a computer screen or a paper (French & Romanowicz, 2015;Tsekhmistrenko et al., 2021;Celli et al., 2021).The understanding and insight gained from 2D visualizations of 3D data may be different than that gained through immersive 3D visualization.For example, the 2D cross-section of a plume cluster associated with the Pacific LLSVP seems to imply that the conduits of plume Samoa, and Tahiti are not resolved above 660 km depth (Figure 1a).However, the conduits of these plumes extend out of the vertical cross-section plane, as shown in Figure 1b.Selecting an isosurface with a specific negative δV S to represent the boundary of a plume reveals plume shapes better than 2D cross sections and allows us to "see through" the non-negative δV S that obscures our view.However, these approaches may fail if the shape of a plume is best represented by different isosurface values at different depths or when many plumes are clustered.In the first case, visualizing plumes requires observing the structures of many different δV S isosurfaces simultaneously.In the second case, the isosurfaces representing boundaries of conduits usually obscure each other, making it tricky to identify an individual conduit if the observer is outside the cluster.This is the scenario for the plumes feeding Pitcairn, Macdonald, Marquesas, Tahiti, Samoa, and Easter, which are located close together within the Pacific LLSVP (Figure 2).
Visualizing seismic tomographic models in a virtual reality (VR) environment can help to overcome these barriers. Immersive visualization allows an observer to explore mantle structures from within and view them quickly from arbitrary vantage points. Immersive 3D visualization is not new in geoscience research but has not seen widespread adoption due to the lack of commodity VR hardware and related software. Previously, the usage of VR environments centered on large, immobile, and expensive "cave" environments (e.g., Billen et al., 2008). As VR headset devices have become more prevalent, immersive 3D visualization is becoming more accessible due to its lower cost and greater portability, presenting the potential to enable new discoveries.
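As a minimal sketch of the kind of isosurface extraction discussed above (illustrative only; the file name and point-data array name are placeholders, not the actual data products used in this study), a negative-δVS isosurface can be generated with ParaView's Python interface:

```python
# Run with ParaView's Python interpreter (pvpython). File and array names are
# hypothetical; a gridded shear-wavespeed-anomaly volume is assumed.
from paraview.simple import OpenDataFile, Contour, Show, Render

model = OpenDataFile("tomography_dVs.vts")    # assumed gridded dVs volume

# Extract an isosurface of negative dVs to outline candidate plume conduits.
plume_surface = Contour(Input=model)
plume_surface.ContourBy = ["POINTS", "dVs"]   # assumed point-data array name
plume_surface.Isosurfaces = [-0.5]            # e.g., a -0.5% shear-velocity anomaly

Show(plume_surface)
Render()
```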
The remainder of the paper is structured as follows. We establish a quantitative procedure to define mantle plume conduits and discuss the advantages and limitations of our conduit-choosing criteria. We present our traced conduits for well-resolved plumes in SEMUCB-WM1 (French & Romanowicz, 2014) and GLAD-M25 (VS) (Lei et al., 2020) and the quantitative measurement of these conduits. We demonstrate that our traced conduits are more consistent with the distributions of slow seismic velocities than geodynamic model predictions. We discuss the implications and potential applications of this study.
Methods
The two tomographic models analyzed in this study, SEMUCB-WM1 and GLAD-M25, are state-of-the-art global tomographic models based on full waveform inversion (FWI). SEMUCB-WM1 inverts for 3-D variations in Voigt-average isotropic VS and the radial anisotropy parameter ξ and parameterizes them radially using (continuous) cubic b-splines and laterally using spherical splines. Its starting model is SEMum2 (French et al., 2013) above 800 km and SAW24B16 (Mégnin & Romanowicz, 2000) below. The crust is approximated by a smooth anisotropic layer to account for the crustal effects on wave propagation and dispersion. GLAD-M25 inverts for the bulk sound speed and vertically and horizontally polarized VS in the mantle above 660 km. Its starting model is S362ANI (Kustowski et al., 2008) for the mantle and Crust2.0 (Bassin et al., 2000) for the crust. As in the starting model S362ANI, GLAD-M25 uses a parameterization that includes first- and second-order discontinuities in the radial direction, permitting abrupt changes in the pattern of heterogeneity across the mantle transition zone (MTZ). Both global tomographic models resolve broad plumes rising from the CMB to the upper mantle beneath many hotspots (French & Romanowicz, 2015; Lei et al., 2020). These enforced vertical discontinuities in GLAD-M25 could introduce artifacts to the resolved plume shapes around the MTZ, but plume structures resolved in the lower mantle should remain robust, as discussed later.
We define plume conduits based on three considerations.First, we require plume conduits to be continuous pathways from the lithosphere to the CMB.Second, we require that plume conduits be slower than average across all mantle depths (i.e., having a negative δV S ).Third, we seek plume conduits for which the temperature anomaly implied by wave speed variations is consistent with petrological constraints on plume excess temperature.The third criterion may not always be satisfiable due to limitations in tomographic modeling, discussed later.
Following our criteria, we manually traced the conduits of 20 plumes (listed in Table S1) whose buoyancy flux is larger than 1000 kg/s (Jackson et al., 2021) and which are well resolved in both SEMUCB-WM1 and GLAD-M25. We exclude the Yellowstone plume as it is only well resolved in GLAD-M25. We include the Canary and St. Helena plumes, whose buoyancy fluxes are only 800 and 500 kg/s, respectively, because similar plume shapes are clearly resolved in both tomographic models. Moreover, the OIBs associated with both hotspots display isotopic signatures supporting a deep mantle origin.
The plume conduits are traced in a headset-based immersive 3D visualization environment.We use the Valve Index VR headset and controllers and the Paraview 5.10.0 (Ahrens et al., 2005) visualization software.The identification of plume conduits was carried out using the following steps: 1.The traced conduit (TC) of each plume can be divided into an upper-mantle, a mid-mantle, and a lower-mantle part.We first identify candidate conduits (CCs) -conduit-like vertical negative δV S structures -that extend vertically across the mid mantle near each surface hotspot.There may be multiple candidate conduits for each hotspot, and we seek conduits that are closer to the hotspot's surface expression.
2. We use pipelines (control points connected by line segments) to represent the pathway of the traced conduit, where the control points are assigned every 200 km from 250 to 2450 km depth.We seek an upper-mantle TC, which connects the surface hotspot with the upper-end of the mid-mantle TC, and a lower-mantle TC, which starts from the lower-end of the mid-mantle TC.Where there is ambiguity, we prefer more vertical plume conduits.
3. After tracing the plume conduits, we validate our TCs according to two criteria.
First, the δVS along a TC should not be positive. Second, we use the plume and ambient mantle potential temperatures calculated from olivine-liquid equilibria (Putirka, 2008) to estimate the excess temperature of plumes. We then calculate the profile of d(ln VS)/dT (Figure S1) assuming that the plume has a pyrolitic composition and use the profile of d(ln VS)/dT to calculate the δVS corresponding to the petrologically estimated excess temperature at all depths for each plume that has an estimate.
The δVS along the TC should be comparable to the δVS converted from the petrologically estimated excess temperature at some depths above 1250 km. The second criterion is not strictly enforced because the variable resolution, parameterization, and regularization of global tomographic models can all contribute to the modeled VS variations.
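As a rough sketch of the second validation criterion (with made-up numbers, not the values derived from SEMUCB-WM1, GLAD-M25, or Putirka (2008)), the Python fragment below converts a petrologically estimated excess temperature into an expected shear-wavespeed anomaly using an assumed depth-dependent d(ln VS)/dT profile and compares it with the anomaly sampled along a traced conduit.

```python
import numpy as np

# Hypothetical depth profile of the temperature sensitivity of shear wavespeed,
# d(ln Vs)/dT, in 1/K (placeholder values, not those of Figure S1).
depth_km = np.array([250, 650, 1050, 1450, 1850, 2250])
dlnvs_dT = np.array([-1.2e-4, -0.9e-4, -0.7e-4, -0.6e-4, -0.5e-4, -0.4e-4])

def expected_dvs_percent(excess_T_kelvin: float) -> np.ndarray:
    """Expected dVs (percent) at each depth for a purely thermal, pyrolitic plume."""
    return 100.0 * dlnvs_dT * excess_T_kelvin

# dVs sampled along a traced conduit (placeholder values, in percent).
traced_dvs_percent = np.array([-1.6, -1.1, -0.8, -0.7, -0.6, -0.5])

predicted = expected_dvs_percent(200.0)   # e.g., a 200 K excess temperature
shallow = depth_km <= 1250
# The criterion: the traced conduit is at least as slow as the prediction
# at some depth above 1250 km.
consistent = bool(np.any(traced_dvs_percent[shallow] <= predicted[shallow]))
print(predicted, consistent)
```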
Results
We describe the general properties of the traced plume conduits (Figure 2), starting with the slowness of the traced conduits. We then describe overall trends in the amount of offset from the surface location, the tilt (measured in degrees away from the vertical) of plume conduits, and the depths at which large offsets or tilts occur. We describe the shapes of individual plume conduits in greater detail later.
Slowness along plume conduits
The δVS along conduits traced from SEMUCB-WM1 and GLAD-M25 is generally between 0% and -2% and comparable between the two models (Figures 3 and 4). We find that plumes originating from the African LLSVP are faster than plumes stemming from the Pacific LLSVP above ∼1250 km depth in SEMUCB-WM1 and at all depths in GLAD-M25 (Figure 5b, c, g, h). We also evaluate the average δVS of conduits traced from SEMUCB-WM1 in GLAD-M25 as well as conduits traced from GLAD-M25 in SEMUCB-WM1 (Figure 5d, e, i, j). When plumes traced in one tomographic model are evaluated in the other tomographic model, the average δVS along TCs around the Pacific LLSVP remains negative at all depths, while it is negative only in the lower mantle for TCs around the African LLSVP.
Observed morphology
Tilt angles along the traced conduits generally remain smaller in the lower mantle (usually < 60°) than in the upper mantle, with a few exceptions (Figure 6). For example, the Louisville and Azores plumes have a tilt angle of 60-70° below 2000 km in SEMUCB-WM1. A comparison of the tilt angles of plumes (Figure 6) and the offsets of plume conduits (azimuth and distance, shown in Figure 7) shows that large tilt angles are associated with abrupt changes in the offset distances and/or azimuths of TCs. Changes in offset azimuths and distances are small where the tilt is closer to vertical. The azimuth of a conduit is measured by taking its hotspot as the origin, with 0 degrees at north, counting clockwise. Due to the manual process of conduit tracing, the uncertainty in the tilt of TCs is at least 5°. Hence, TCs with tilt less than this should be interpreted as nearly vertical. We do not report the average tilt angle of each conduit because these values do not accurately describe the shape of conduits. For example, in SEMUCB-WM1, the TC of Samoa has a similar average tilt angle (16.9°) to the TC of Pitcairn (16.1°).
However, the TC of Samoa appears to be ponded and deflected at 660 and 410 km depth, while the TC of Pitcairn tilts gently across the whole mantle.
Plume conduits traced in SEMUCB-WM1 and GLAD-M25 usually root at locations offset from their surface hotspots by 5-10°, and most of the offset occurs in the upper mantle. A few plume conduits show larger offsets. The TCs of Galapagos, San Felix, and Tahiti root at locations offset from their surface hotspots by more than 10° in both tomographic models (Figures 2 and 7). The offsets of conduits traced from SEMUCB-WM1 in the upper mantle can easily exceed 5 degrees (Figure 7), which converts to > 500 km offsets, while those of conduits traced from GLAD-M25 appear to be much smaller.
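To make the tilt and offset measurements concrete, the sketch below (illustrative only, with placeholder coordinates rather than our traced control points) computes the great-circle offset distance and azimuth of a conduit point relative to its surface hotspot, and the tilt of a conduit segment away from the vertical.

```python
import numpy as np

R_EARTH_KM = 6371.0

def great_circle_offset(lat1, lon1, lat2, lon2):
    """Angular offset (degrees) and azimuth (degrees, clockwise from north)
    from point 1 (e.g., the surface hotspot) to point 2 (e.g., a conduit control point)."""
    p1, p2 = np.radians([lat1, lat2])
    dlon = np.radians(lon2 - lon1)
    delta = np.arccos(np.clip(np.sin(p1) * np.sin(p2) +
                              np.cos(p1) * np.cos(p2) * np.cos(dlon), -1.0, 1.0))
    az = np.degrees(np.arctan2(np.sin(dlon) * np.cos(p2),
                               np.cos(p1) * np.sin(p2) -
                               np.sin(p1) * np.cos(p2) * np.cos(dlon))) % 360.0
    return np.degrees(delta), az

def segment_tilt(depth1_km, lat1, lon1, depth2_km, lat2, lon2):
    """Tilt of a conduit segment away from the vertical, in degrees."""
    offset_deg, _ = great_circle_offset(lat1, lon1, lat2, lon2)
    # Horizontal run approximated at the mean radius of the segment.
    r_mid = R_EARTH_KM - 0.5 * (depth1_km + depth2_km)
    horizontal_km = np.radians(offset_deg) * r_mid
    vertical_km = abs(depth2_km - depth1_km)
    return np.degrees(np.arctan2(horizontal_km, vertical_km))

# Placeholder control points (depth in km, latitude, longitude):
# print(great_circle_offset(19.4, -155.3, 19.9, -154.8))
# print(segment_tilt(250, 19.4, -155.3, 450, 19.9, -154.8))
```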
Paired plumes
In SEMUCB-WM1, the MacDonald and Pitcairn plumes seem to branch from the same conduit in the lower mantle and the Macdonald plume is significantly deflected at ∼ 1250 km depth (Figure 8a).The Canary and Cape Verde plumes also appear to share the same conduit from the CMB to at least ∼ 1250 km depth and branch into two conduits separated by ∼ 15 • in the upper mantle (Figure 8b).
In GLAD-M25, we identify CC with a similar shape as what is observed in SEMUCB-WM1 below the Canary and Cape Verde hotspots.We interpret Canary and Cape Verde as two adjacent plumes rising parallel to each other though this CC could be interpreted as either two separate conduits or one broad plume branching into two secondary plumes as it crosses the 660 km discontinuity.CCs of the Pitcairn and Macdonald plumes look less like those in SEMUCB-WM1.These two plumes seem to emerge from different locations at the CMB and merge into a broad plume conduit between 660 and 2000 km depth and branch again above 660 km depth.
The San Felix and Juan Fernandez plumes are another potential plume pair.
These two plumes generally share the same CC in the mid-mantle in both tomographic models (Figure S2).We interpret it as two adjacent plumes rising parallel to each other and trace their conduits based on this interpretation.The conduit of San Felix is not resolved between 1250 and 660 km in SEMUCB-WM1 and above 660 km in GLAD-M25.
The conduit of Juan Fernandez is generally well resolved at all depths in both tomographic models.
Iceland
The Iceland plume is generally vertical in both tomographic models, but the detailed shape of the plume is different.Starting from the surface hotspot, the traced conduit from SEMUCB-WM1 is offset towards the northeast above ∼ 350 km and then offset back towards the hotspot at ∼ 660 km.The conduit remains generally vertical below 660 km and slightly tilts towards the east below ∼ 2000 km (Figure 6,7, and 8c).
Its TC from GLAD-M25 is vertical above 660 km, tilts first towards the east between 660 and 1000 km depth then towards the west between ∼ 1250 and 1500 km depth, and remains vertical below 1500 km.
Hawaii
The Hawaii plume appears to be mostly vertical in SEMUCB-WM1, while it appears to tilt largely towards the southeast in GLAD-M25. Its conduit is well resolved in SEMUCB-WM1 but not well resolved between 410 and 660 km depth in GLAD-M25 (Figure 8d). Although the TCs from SEMUCB-WM1 and GLAD-M25 are not consistent, both tomographic models resolve a similar CC between 660 and 1250 km depth below the surface hotspot location and a similar CC location at the CMB (Figure 8d).
Samoa, St Helena, Reunion, and Caroline
Similar CCs are identified in both tomographic models for the Samoa, St Helena, Reunion, and Caroline plumes. These plumes remain nearly vertical or tilt slightly in the lower mantle and tilt more strongly in the upper mantle (Figure 9a-c). We notice that the amplitudes of negative δVS along these TCs from SEMUCB-WM1 vary smoothly and reach a maximum near 660 km. The amplitudes of negative δVS along these TCs from GLAD-M25, however, decrease abruptly above the 660 km discontinuity. These negative δVS amplitudes are larger (slower) than those of conduits traced from SEMUCB-WM1 by 0.5-1.0% δVS below ∼2000 km (Figures 3 and 4).
Azores, Easter, Galapagos, Kerguelen, Marquesas, and Tahiti
We notice that for the Azores, Easter, Galapagos, Kerguelen, Marquesas, and Tahiti plumes, similar CCs are resolved in the two tomographic models but different TCs are identified (Figure 9d and S3-5). One of the main causes is the poor inter-model agreement above 660 km and below 2000 km. The other main cause is that the δVS of CCs with similar shapes can amplify at different depths in different tomographic models, which can result in very different interpretations of the most reasonable conduit path.
Discussion
We first demonstrate the reliability of our traced conduits to justify that our TCs represent seismically slow paths through the mantle. We then compare our TCs with modeled conduits and discuss the reasons for their differences. Next, we discuss the implications for mantle and plume dynamics from our observed plume shapes and slowness along conduits. We conclude our discussion by proposing some applications of our TCs in future studies of plume dynamics.
Reliability of traced conduits
Seismic tomography is a mixed-determined inverse problem, and there exist many possible Earth structures that are equally compatible with seismic observables. The shapes of plumes could vary between different regional and global tomographic models due to different parameterization/regularization choices and different earthquake events used to constrain the tomographic models (French & Romanowicz, 2015; Wamba et al., 2021, 2023). Hence, one might question the veracity of mantle plume shapes determined on the basis of seismic tomography. Several lines of evidence suggest that the imaged and traced plume conduits are likely representative of real mantle structures. First, the slow VS structures near many hotspots are similar between the two models, suggesting that the imaged features are robust. Second, the average slowness along TCs is much greater than the average slowness along modeled or vertical conduits (Figure 5a-c, f-h). To further assess the robustness of the traced plume conduits, we evaluate the slowness along Pacific TCs obtained from SEMUCB-WM1 and GLAD-M25 in other P- and S-velocity tomographic models. We find that our Pacific TCs traced from GLAD-M25 are slower than the MCs and vertical conduits (VCs) in the lower mantle (below ∼660-1000 km depth) when they are evaluated in most of the other models (Figure 5i and S2 g-k). Our Pacific TCs traced from SEMUCB-WM1 are slower than the MCs and VCs but in a more restricted depth range, between ∼1250 and 2100 km depth. (See Text S2 in the Supporting Information for more details.) This suggests that both sets of traced conduits, especially TCs from GLAD-M25, are more compatible with many other tomographic models than the modeled and vertical conduits in the mid to lower mantle.
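The slowness comparison just described amounts to sampling a tomographic model along each candidate conduit and averaging by depth. A minimal sketch of that bookkeeping is given below; it assumes the model has already been resampled onto a regular (depth, latitude, longitude) grid, which is an assumption of convenience rather than the native format of SEMUCB-WM1 or GLAD-M25.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def dvs_along_conduit(depths, lats, lons, dvs, conduit):
    """Sample a gridded dVs model (%) along a conduit given as (depth, lat, lon) rows.
    The three axes must be strictly ascending 1-D arrays; dvs has the matching 3-D shape."""
    model = RegularGridInterpolator((depths, lats, lons), dvs,
                                    bounds_error=False, fill_value=np.nan)
    return model(np.asarray(conduit))

def vertical_conduit(hotspot_lat, hotspot_lon, depths):
    # The vertical-conduit (VC) reference: the hotspot coordinates repeated at every depth.
    depths = np.asarray(depths, dtype=float)
    return np.column_stack([depths,
                            np.full_like(depths, hotspot_lat),
                            np.full_like(depths, hotspot_lon)])

# Average slowness of a traced conduit versus its vertical reference:
#   np.nanmean(dvs_along_conduit(d, la, lo, dvs, traced_points))
#   np.nanmean(dvs_along_conduit(d, la, lo, dvs, vertical_conduit(hs_lat, hs_lon, d)))
```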
Comparison between traced and modeled conduits
Simplified numerical models of mantle plume shapes have been used widely in geodynamics to understand the mobility of deep mantle hotspots and to establish the moving hotspot reference frames necessary for absolute plate reconstructions (e.g., Matthews et al., 2016). We compare modeled conduits (MCs) from Steinberger and Antretter (2006) with our traced conduits. These numerical models of plume dynamics start with a mantle buoyancy structure based on a tomographic model filtered to long wavelength. The buoyancy structure is reconstructed backwards in time through the reversal of buoyancy forces and the application of time-reversed plate reconstructions at the surface, while ignoring the effects of thermal diffusion, which cannot be time-reversed due to non-uniqueness. This yields a model of long-wavelength (much longer than the widths of plumes) mantle flow in space and time. Then, initially vertical plume conduits are advected by the flow field forward in time. Previous studies demonstrated that the shapes of MCs are not very sensitive to the tomographic model used to compute the mantle flow field, the details of the plate reconstructions used, or the detailed mantle viscosity structure (Steinberger & O'Connell, 1998; Steinberger, 2000; Steinberger & Antretter, 2006; Williams et al., 2019).
The tilt angles and offsets of MCs show that most MCs tilt only slightly (tilt angle < 30°) below 660 km. This is likely because the deformation rate is slow due to the high viscosity of the lower mantle. Larger tilt angles (up to > 90°) of MCs observed above 660 km (Figure 6) are mainly due to oscillations of the tightly spaced conduit elements in the lower-viscosity upper mantle. The offsets of modeled conduits (shown in Figure 7) show that MCs in fact tilt gently at these depths. Our TCs suggest that plumes generally tilt only slightly in the lower mantle, but large tilt angles in the mid-mantle below 660 km are observed for many TCs from both tomographic models (e.g., Macdonald, Samoa, St Helena, and Tristan) (Figure 6). TCs generally have more complex shapes than MCs, especially in the mid-mantle.
Although the paths of TCs and MCs are generally not in very good agreement (Figure 2, Table S1), there are a couple of exceptions. TCs of plumes located at the edge of LLSVPs (Canary, Juan Fernandez, San Felix, St Helena, and Reunion) (Figure 7) seem to agree with their MCs better than TCs of plumes located near the center of LLSVPs.
TCs of these plumes share similar offset directions with their MCs, while the MCs have 5-10° more total offset distance than the TCs. These plumes have relatively simple plume shapes, that is, the offset direction of a TC does not change with depth. TCs of plumes located around the center of LLSVPs are usually vertical in the lower mantle but meander in the middle and upper mantle. Because of the physics included in the models, all MCs only have simple plume shapes (without stagnation or meandering). They are always smooth curves extending from the LLSVPs to the surface hotspots. We discuss this difference more in the next section.
The average seismic velocities of the TCs, MCs, and VCs are slower than the ambient mantle at all depths. However, TCs from SEMUCB-WM1 are up to 6 times slower than MCs and 3.7 times slower than VCs in the upper mantle, while they are 1.2-3 times slower than MCs and VCs in the lower mantle. TCs from GLAD-M25 are 1.1-3 times slower than MCs and VCs across the mantle. The average velocities of MCs are slower than those along VCs only in the lower mantle (Figure 5a, f), which is consistent with the analysis of MCs and VCs done using older tomographic models (Boschi et al., 2007).
The δVS along MCs is often close to 0% or even positive in the upper mantle (Figures 3 and 4), while the δVS along TCs is negative in most cases. There are a few exceptions in SEMUCB-WM1 (Cape Verde and San Felix) and GLAD-M25 (Azores, Canary, Hawaii, San Felix, Tahiti, and Tristan). In these cases, no CC can be identified at some depths in the upper mantle. This may indicate that the global tomographic model does not resolve the plume conduit at these depths. It is expected that the plume radius can significantly decrease as a plume rises from the more viscous lower mantle into the less viscous upper mantle (Leng & Gurnis, 2012).
Implications of the slowness along plume conduits
The excess temperature of a purely thermal plume conduit is not expected to change significantly with depth, since plumes rise rapidly relative to the thermal diffusion timescale and mantle heat production is negligible on the timescale of material ascent through a plume conduit. The exothermic phase transition (olivine to wadsleyite) at 410 km depth and shear heating may be able to increase the temperature of a plume, but they are secondary effects compared with the plume's inherent excess temperature. This implies that if a mantle plume is purely thermal, the amplitude of its δVS should generally vary with depth following the thermodynamically determined d(ln VS)/dT profile.
Our observations from both tomographic models, however, show that the variation of δVS along plume conduits almost never strictly follows the d(ln VS)/dT profile, which suggests that non-thermal variations are present in plume conduits.
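For a purely thermal anomaly, the mapping between the wavespeed anomaly and the excess temperature is just a division by the sensitivity d(ln VS)/dT at the relevant depth. The snippet below illustrates this scaling; the sensitivity values used are order-of-magnitude placeholders chosen by us, not numbers taken from the paper.

```python
import numpy as np

def excess_temperature(dvs_percent, dlnvs_dT):
    # Purely thermal interpretation: delta(ln Vs) ~ (d ln Vs/dT) * delta T,
    # so delta T = (dVs[%] / 100) / (d ln Vs/dT).  dlnvs_dT is negative (per K).
    return (np.asarray(dvs_percent, dtype=float) / 100.0) / dlnvs_dT

# Illustrative sensitivities only (assumed, for scale):
print(excess_temperature(-1.5, -1.0e-4))  # ~ +150 K for a -1.5% anomaly (upper-mantle-like)
print(excess_temperature(-0.5, -4.0e-5))  # ~ +125 K for a -0.5% anomaly (lower-mantle-like)
```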
Non-thermal variations in mantle plumes include differences in intrinsic composition, water content, grain size, and melt fraction. At the 410 km discontinuity, the phase transition from wadsleyite to olivine may result in water release when plume materials rise and cross this boundary, because wadsleyite has a higher water-bearing capacity than olivine (W. Wang et al., 2019). Increasing water content can reduce VS (C. Liu et al., 2023) and may cause partial melting in this region, further reducing VS (Chantel et al., 2016). Isotopic measurements of OIBs and numerical models suggest that LLSVPs may be composed of a variety of different materials, ranging from primordial materials preserved at the CMB since differentiation early in Earth's evolution (Labrosse et al., 2007; Deschamps et al., 2012) to piles of recycled oceanic crust (Olson & Kincaid, 1991; Brandenburg & van Keken, 2007). For many of the traced conduits, we find that δVS in the lowermost mantle is slower than expected on the basis of d(ln VS)/dT. The incorporation of compositionally distinct material within the lowermost mantle is one possible explanation for the slower than expected velocities (Figures 3 and 4).
Plumes originating from the African LLSVP are systematically faster (in the upper and mid-mantle) than those originating from the Pacific LLSVP (Figure 5b, c, g, h), which is consistent with previous estimates of plume excess temperature based on upper-mantle wavespeed variations (Bao et al., 2022). Y. Wang and Wen (2007) and He and Wen (2009) also show that the two LLSVPs have different shapes and topology. This may indicate that the two LLSVPs have different origins, but we cannot rule out the possibility that the faster plumes from the African LLSVP are caused by different seismic data coverage between the Pacific and the Atlantic regions.
Implications of diverse plume shapes
The shape of a plume conduit depends on both the plume's properties and its interaction with the surrounding mantle. Buoyancy, which is determined by ∆ρ, the difference between the effective density of a plume and the density of its surrounding mantle (∆ρ = ρ_plume − ρ_mantle), controls the behaviour of a plume as it rises. The buoyant ascent of plume material and its interaction with the large-scale mantle flow will result in different plume conduit shapes. The composition of the plume, the pressure-induced phase transitions, and the excess temperature (the temperature difference between the potential temperature of a plume and the ambient mantle) together determine ∆ρ.
When a plume has a positive buoyancy (∆ρ < 0), it will rise, and it will start sinking when it has a negative buoyancy. When ∆ρ is close to or slightly smaller than 0, a plume could be ponded or develop a variety of complex shapes (Kumagai et al., 2008; Xiang et al., 2021).
The mantle viscosity structure and flow patterns of the ambient mantle also affect plume shapes. The mobility of a plume, that is, how easily it gets deformed, is expected to be smaller in a more viscous than in a less viscous region (H. Liu & Leng, 2020). Large-scale mantle flows driven by thermal convection, surface plate motion, and subduction could shear plume conduits or largely deflect the secondary plume stemming from a ponding primary plume (Steinberger, 2000; Farnetani & Samuel, 2005).
The more complex shapes of our TCs relative to the MCs suggest that the mantle convection models used to determine MCs may not consider all major factors affecting plume shapes, especially in the mid-mantle across and below the MTZ, where plume ponding and large tilt angles are only observed in TCs.
First, the mid-mantle below the MTZ could have significant viscosity variations (Marquardt & Miyagi, 2015; Rudolph et al., 2015; Shim et al., 2017), which indicates a more complex rheology than the numerical models assume, namely that little deformation occurs and that diffusion creep is predominant at these depths (Ferreira et al., 2019). Furthermore, the transition from ringwoodite to bridgmanite at 660 km, which can lead to plume ponding at this depth, is not considered either. As a result, the numerical models lack the ability to produce plumes that are ponded and deflected at different depths: their simplified physics does not include compositional variations, phase transitions, or a temperature- or strain-rate-dependent viscosity.
Second, the mantle flow field converted from the global tomographic model (Steinberger & O'Connell, 1998) may not be accurate at smaller scales due to our currently incomplete understanding of mantle dynamics. MCs are determined based on the assumption that a plume rose to the surface vertically within a short time and left a vertical 100-kilometer-radius conduit that gets passively advected by the large-scale mantle flow later.
However, this assumption is only valid if mantle plumes are purely thermal. Recent seismic tomographic models have imaged plume conduits with a radius of ∼500 km (French & Romanowicz, 2015) and much more complex morphology (Tsekhmistrenko et al., 2021; Celli et al., 2021; Wamba et al., 2023). Such broad plumes may not only be passively advected, but also influence the mantle flow field. Plumes with such a large radius would have buoyancy fluxes that are much higher than previous estimates (Sleep, 1990; King & Adam, 2014). Together with the complex plume shapes, this suggests that many, if not all, mantle plumes are thermochemical rather than purely thermal. For example, a plume that incorporates an eclogitic component has a lower buoyancy flux and a larger radius than a purely thermal plume, which is more consistent with observations (Dannberg & Sobolev, 2015).
At ∼410 km depth, previous numerical models suggest that plumes with some eclogitic component will have a buoyancy barrier due to the different phase transitions that occur in pyrolitic and eclogitic materials. This buoyancy barrier can result in plume ponding and the emergence of a secondary plume (Farnetani & Samuel, 2005; Dannberg & Sobolev, 2015). It can potentially explain the ponding of Samoa, a large tilt angle, and a large change in offset distance observed in SEMUCB-WM1 at this depth (Figures 6 and 9a).
Large tilt angles at 660 km depth mostly reflect plume ponding, which could be caused by the combined effect of the ∼30-fold viscosity increase from above to below 660 km suggested by many geophysical studies (Hager, 1984; Mitrovica & Forte, 1997) and the endothermic phase transition from ringwoodite to bridgmanite (Faccenda & Dal Zilio, 2017). The phase transition can cause plume ponding as the hotter plume materials undergo this phase transition at a shallower depth, hindering ascent. Several scenarios may happen after a primary plume is ponded at this depth. First, the primary plume could penetrate the 660-discontinuity broadly while some plume materials are ponded.
The ponded materials become so hot that there is a significant viscosity reduction, allowing the conduit to be laterally deflected by hundreds of kilometers (Tosi & Yuen, 2011). This scenario is observed for St. Helena and Tristan in both tomographic models (Figures 6, 7, 9b and S6).
When the primary plume cannot penetrate the 660-discontinuity in the first place, a significant amount of plume material will accumulate at this depth. The ponded material spreads like a pancake, and secondary plumes can develop from anywhere above the ponding zone. As a result, the offset distance between an upper-mantle secondary plume and a lower-mantle primary plume is not large, while the offset azimuth can be unrelated to the flow patterns (Caroline in GLAD-M25; Azores, Iceland, and Reunion in both tomographic models) (Figures 6, 7, 8c, 9c, S3c and d). These cases may resemble the "plume-tree" model proposed in Liu and Leng (2020), which requires a thin low-viscosity layer beneath the 660 km ponding depth and a low-viscosity upper mantle to allow secondary plume(s) to develop from any part of the ponded material.
At a greater depth of ∼1250 km, the large tilt angles observed for Tahiti in both tomographic models, Hawaii in GLAD-M25, and Kerguelen in SEMUCB-WM1 (Figure 6) could arise if the viscosity is higher around this depth than in the mantle above and below it.
Owing to the higher viscosity, conduits tilt less around this depth, so the conduit above this depth could be preferentially deflected by mantle flow. Some inversions of geophysical data suggest that there exists a viscosity hump, a one-to-two-order-of-magnitude viscosity increase, between 800 and 1200 km depth (King & Masters, 1992; Mitrovica & Forte, 1997; Rudolph et al., 2015). Mineral physics studies also suggest that the increasing strength of ferropericlase (Marquardt & Miyagi, 2015; Deng & Lee, 2017) and decreasing iron enrichment in bridgmanite (Shim et al., 2017) at mid-mantle depths can both result in this mid-mantle viscosity hump.
Another mechanism that may produce large tilt angles at ∼1000-1250 km (Canary and Macdonald in SEMUCB-WM1) is plume ponding with secondary plumes emerging. This mechanism was proposed by Wamba et al. (2023) to explain the alternating vertical conduits and horizontal ponding zones observed for the Reunion and Comores plumes from ∼1000 km depth to the top of the asthenosphere in the latest tomographic models. There is no known endothermic phase transition at these depths that could cause plume ponding. However, a denser mantle below ∼1000 km depth due to its higher basalt content (Ballmer et al., 2015) could cause plume ponding at this depth if the thermal expansion effect is not strong enough to reduce the plume's effective density below the mantle density above ∼1000 km (Xiang et al., 2021). Seismic observations imply a non-global discontinuity at 1000 km depth (Zhang et al., 2023), which may indicate a compositionally layered mantle.
Beyond these various behaviours of a single plume conduit, plume merging may further complicate the observed plume shapes. For example, we identify two CCs for Galapagos in the mid-mantle that merge into one CC with < 1% δVS above 660 km in SEMUCB-WM1. This may indicate that two adjacent conduits are ponded at 660 km and their ponding zones merge into one conduit, or the two resolved CCs may be caused by a lack of resolution in SEMUCB-WM1, as they are only observed in that model. The TCs of Macdonald and Pitcairn from GLAD-M25 suggest these two plumes merge in the mid-mantle and branch above 660 km. Merging of two adjacent plumes has been demonstrated by both laboratory experiments (Moses et al., 1991) and numerical models (e.g., Lewis-Merrill et al., 2022; Brunet & Yuen, 2000), and the branching of the merged conduit could be explained by secondary plumes emerging from a ponded plume.
Given all these uncertainties in our interpretations of plume dynamics from observed plume shapes, our TCs are useful for future numerical modeling. For example, idealized plume models can explore under which geodynamic settings the observed plume shapes can be reproduced. Our TCs can also provide a better schematic model for future studies to interpret the geochemical heterogeneity of OIBs from different hotspots. For example, previous studies have tried to interpret the heterogeneous isotopic signals of OIBs from neighbouring hotspots by correlating them with the vertical projection of the hotspots onto the CMB (Huang et al., 2011; Harpp & Weis, 2020) or by interpreting these isotopic signals under simplified schematic plume models (Williams et al., 2019; Cordier et al., 2021). Our TCs can provide information about potential inter-plume interactions and the ascent history of plumes, which can be critical to the interpretation of geochemical observations.
Conclusion
Broad plumes clustering around LLSVPs have been recognized in the latest global tomographic models. Our study presents a systematic analysis of the pathways of these plume conduits, carried out in an immersive headset-based virtual reality (VR) environment. The traced conduits from SEMUCB-WM1 and GLAD-M25 generally appear to be slower than the conduits predicted by geodynamic models and than vertical conduits at mid- to lower-mantle depths, regardless of which tomographic model they are evaluated in. The traced conduits are 1.1-3 times slower than either modeled or vertical conduits. This suggests that our manually traced conduits are more consistent with the locus of slow seismic velocities within the mantle than either the vertical conduits that some authors have assumed when relating surface observables to deep mantle structures or the shapes of plume conduits predicted using physically simplified geodynamic models. Moreover, our traced conduits are more consistent with the petrologically determined excess temperature than either of the other types of conduits.
In our manually traced conduits, the total amount of offset from the surface to the deep mantle is comparable between many traced and modeled conduits (usually smaller than 10°), while the offset directions of traced and modeled conduits usually differ. Some traced conduits of plumes stemming from the edge of the LLSVPs (Canary, Juan Fernandez, Reunion, San Felix, and St Helena) tend to be 5-10° less offset than their modeled conduits, but the traced and modeled conduits share similar offset directions. Our traced conduits reveal a tendency for plumes to stagnate or to be offset at mid-mantle depths (660-1250 km), a behavior that is not captured in modeled conduits. Previous geophysical studies, mineral physics studies, and geodynamic modeling provide multiple mechanisms that could contribute to plume ponding or deflection, including the buoyancy barrier induced by phase transitions and the viscous decoupling of conduits. The large variations of the VS anomaly along plume conduits and the complex observed plume shapes together suggest that many plumes are thermochemical. Our analysis of plume conduit shapes provides a dataset that can be of value across multiple disciplines, including geodynamic modeling, geochemistry, and mineral physics.
Figure 1. (a) Cross section of Pacific plumes in SEMUCB-WM1 and the location of the cross-section on the map, and (b) the 3D image of -2%, -1.2%, and -0.75% δVS isosurfaces taken from the same region.
Figure 2. Traced and modeled (Steinberger & Antretter, 2006) plume conduits in SEMUCB-WM1 (top) and in GLAD-M25 (bottom). The colorful dots represent modeled conduits, while black-white dots represent traced conduits. The green circles represent the location of hotspots. The background shows δVS at 2850 km depth. Plate motions in the spreading-aligned mantle reference frame of Becker et al. (2015) are shown with gray arrows.
Figure 5.
Figure 6. The depth profile of tilt angle along 20 plume conduits. Blue represents the conduits modeled in Steinberger and Antretter (2006). Red represents the traced conduits in SEMUCB-WM1. Yellow represents the traced conduits in GLAD-M25. The gray line marks the 60° angle.
Figure 7. Azimuth and offset distance of model-predicted conduits and conduits traced in SEMUCB-WM1 and GLAD-M25 with respect to hotspots. Blue represents the azimuth of a conduit at different depths. Green represents the angular offset between a conduit and its hotspot.
Figure 8. Cross section and map view of the traced conduits of a) Macdonald and Pitcairn, b) Cape Verde and Canary, c) Iceland, d) Hawaii in SEMUCB-WM1 and GLAD-M25. From top to bottom, the dashed lines represent 410, 660, and 1250 km depth.
Figure 9. Cross section and map view of the traced conduits of a) Samoa, b) St Helena, c) Reunion, d) Easter, similar to Figure 8.
Figure S1.
Figure S2.
Figure S3. Cross section and map view of the traced conduits of San Felix and Juan Fernandez.
Figure S4. Cross section and map view of the traced conduits of a) Caroline, b) Louisville, | 2024-06-09T20:41:46.175Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "eeb07a0b7fe33fe7b54c678c973582330934ac15",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1029/2024gc011517",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eeb07a0b7fe33fe7b54c678c973582330934ac15",
"s2fieldsofstudy": [
"Geology",
"Computer Science"
],
"extfieldsofstudy": []
} |
245731551 | pes2o/s2orc | v3-fos-license | Perception of Fairness of Interaction in Handling Complaints of Users of Travel Agency Services: Evidence from Serbia and Croatia
Service-dominant logic views the user as a co-creator of value in the process of providing services. In the case of a complaint arising from dissatisfaction, the user and the travel agency become co-creators of the value of service recovery. The perception of interactionist fairness is one of the determinants of a user's complaint behaviour. Interactionist fairness refers to the extent to which users consider themselves to be treated fairly in their interaction with a travel agency employee during the service recovery process. The research presented in this paper is based on determining the differences in perception of fairness of interaction in handling complaints of users of travel agency services in terms of their socio-demographic characteristics. The survey was conducted on a sample of 297 respondents from the territory of Serbia and Croatia. The proposed hypotheses were tested using the Mann-Whitney U test and the Kruskal-Wallis test. The purpose of this paper is to point out to travel agencies the importance of users' reactions to unsatisfactory service as a kind of reflection of cooperation in service recovery.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Introduction
The user gains experience by using the service and develops a perception of its performance that affects satisfaction and overall assessment of the spending experience (Jaakkola et al., 2015;Lin et al., 2016). In case of an unsatisfactory experience, a complaint may arise as a response from the user. Submitting a complaint develops interaction between customers and employees on the first line of service. The perception of interactionist justice in this paper is viewed as one of the determinants of appellate behaviour. Carlson et al. (2016) and Söderlund et al. (2017) define the perceived value as an overall assessment, an individual's compromise between benefit and sacrifice. Factors that influence forming of user perception are numerous and range from general to special. Choi et al. (2014) argue that individuals' justice perception of service recovery attempts affects their emotional response such as disappointment, happiness, and pleasure, affecting attitudinal loyalty and behavioural loyalty. Following the set goal, the subject of the research is the influence that socio-demographic characteristics of the service users of travel agencies have on their perception of fairness of interaction when handling complaints.
In the case of a complaint, the interaction between employees and users is contextualized. The context shapes the interaction itself and how the employee accesses user resources (Plé, 2016; Tao et al., 2016). The quality of interaction is determined by the interpersonal relationships that take place during the service recovery process. After submitting a complaint and receiving a response to it, the user forms an impression. The theory of justice is useful in explaining the formation of impressions after the service recovery process and its results (Maxham III et al., 2003). Accordingly, a three-dimensional approach to justice (distributive, procedural, and interactionist justice) is distinguished (McCollough et al., 2000; Mattila, 2001; Maxham III, 2001; Maxham III et al., 2002, 2003; Karatepe, 2006; Gyung et al., 2010).
Interactionist justice includes how information is exchanged and how the results are communicated. According to Mattila (2001), interactionist justice is the perceived fairness of the user's treatment. According to Maxham III et al. (2002), interactionist justice is the extent to which users feel treated fairly in their interaction with front-line service staff during the recovery process (it includes elements of kindness, honesty, interest in justice, and how the complainants view the effort made). Interactionist justice deals with interpersonal relationships in carrying out procedures and delivering the outcome. Based on a review of papers in this field (Homburg et al., 2010; Phau et al., 2008; Baron et al., 2005), there is no consensus regarding the interpretation of the influence of socio-demographic characteristics on the perception of interactionist justice. This paper aims to identify differences in the perception of interactionist justice depending on the socio-demographic characteristics of users.
Methodology
The paper presents a part of the research that examines differences in the perception of fairness of interaction when responding to complaints depending on users' sociodemographic characteristics (gender, age, level of education, monthly income, place of residence, number of family members). The research was conducted from January to April 2018 on a suitable sample that included 297 respondents of different gender, ages, and levels of education in the Republic of Serbia and the Republic of Croatia.
For this paper, a section of the questionnaire related to the socio-demographic characteristics of the respondents and a section related to the perception of fairness of interaction during the service recovery process was used.
The structure of the sample was dominated by female respondents (70%); the largest share of the sample consisted of respondents under 27 years of age (35%), while the number of respondents from 58 to 67 years was the smallest (8.75%). Half of the respondents had a college or university degree (50.50%). The largest number of respondents (29.63%) had a monthly income of over 600 EUR. 87.54% of respondents lived in the city, while 62.62% of the total sample lived with one or two family members.
A review of the literature and the research objectives leads to the following hypotheses:
• H1: There is a statistically significant difference in the perception of fairness of interaction in handling complaints between tourist service users of different socio-demographic characteristics.
In addition to the main one, six auxiliary hypotheses are set:
• H1.1: There is a statistically significant difference in the perception of fairness of interaction in handling complaints between tourist service users of different gender.
• H1.2: There is a statistically significant difference between tourist service users of different ages in the perception of fairness of interaction in handling complaints.
• H1.3: There is a statistically significant difference between tourist service users of different levels of education in the perception of fairness of interaction in handling complaints.
• H1.4: There is a statistically significant difference in the perception of fairness of interaction in handling complaints between tourist service users with different amounts of monthly income.
• H1.5: There is a statistically significant difference between tourist service users of different places of residence in the perception of the fairness of the interaction in handling complaints.
• H1.6: There is a statistically significant difference in the perception of fairness of interaction in handling complaints between tourist service users with different numbers of family members.
The data were processed statistically in SPSS version 21. The Mann-Whitney U test and the Kruskal-Wallis test were used for testing the proposed hypotheses (an equivalent open-source analysis is sketched below). Table 1 shows the testing of the H1.1 hypothesis, determining statistically significant differences in the perception of the fairness of interaction in handling complaints concerning the gender of tourist service users.
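The paper runs these tests in SPSS; the snippet below shows how the same Mann-Whitney U and Kruskal-Wallis tests, together with the effect size r = |Z|/√N reported in the following subsections, could be reproduced with SciPy. The data generated here are random placeholders, not the survey responses, and the z-value uses the standard normal approximation without a tie correction.

```python
import numpy as np
from scipy import stats

def mann_whitney_with_r(x, y):
    # U statistic, approximate z, two-sided p, and effect size r = |z| / sqrt(N)
    u, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # no tie correction in this sketch
    z = (u - mu) / sigma
    return u, z, p, abs(z) / np.sqrt(n1 + n2)

rng = np.random.default_rng(0)
men = rng.integers(1, 6, size=59).astype(float)       # placeholder Likert-type scores
women = rng.integers(1, 6, size=99).astype(float)
print(mann_whitney_with_r(men, women))

# Kruskal-Wallis across several groups (e.g., age bands), again with placeholder data
groups = [rng.integers(1, 6, size=m).astype(float) for m in (16, 16, 11, 24, 24, 48)]
print(stats.kruskal(*groups))
```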
Relationship of perceptions of fairness and gender
The Mann-Whitney U test did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints between men (Md = 3.33, n = 59) and women (Md = 3.33, n = 99) in the Republic of Serbia, U = 2751.5, Z = -0.614, p = 0.539, r = 0.05 (small effect). Men and women in the Republic of Serbia perceived the fairness of interaction in handling complaints equally.
The Mann-Whitney U test did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints between men (Md = 4.00, n = 31) and women (Md = 3.83, n = 108) in the Republic of Croatia, U = 1580.0, Z = -0.498, p = 0.618, r = 0.04 (small effect). Men and women in the Republic of Croatia perceived the fairness of interaction in handling complaints equally. (Source: the authors' calculation.) Since the Mann-Whitney U test in the entire sample (the Republic of Serbia and the Republic of Croatia) did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints according to the respondents' gender, the H1.1 hypothesis is rejected. Thus, there is no difference in the perception of fairness of interaction in handling complaints between service users of different gender. Table 2 shows the testing of the H1.2 hypothesis, determining statistically significant differences in the perception of fairness of interaction in handling complaints concerning the age of tourist service users.
Relationship of perceptions of fairness and age
The Kruskal-Wallis test did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints according to the age of respondents in the Republic of Serbia (Gp1, n = 88: 18-27 years; Gp2, n = 38: 28-37 years; Gp3, n = 18: 38-47 years; Gp4, n = 12: 48-57 years; Gp5, n = 2: 58-67 years), χ2 (4, n = 158) = 1.451, p = 0.835. Respondents of different ages in the Republic of Serbia perceived the fairness of interaction in handling complaints equally. The Kruskal-Wallis test revealed a statistically significant difference in the perception of fairness of interaction in handling complaints according to the age of respondents in the Republic of Croatia (Gp1, n = 16: 18-27 years; Gp2, n = 16: 28-37 years; Gp3, n = 11: 38-47 years; Gp4, n = 24: 48-57 years; Gp5, n = 24: 58-67 years; and Gp6, n = 48: 68 and older), χ2 (5, n = 139) = 11.221, p = 0.047. Respondents of different ages in the Republic of Croatia perceived the fairness of interaction in handling complaints differently: respondents between 58 and 67 years of age had the highest perception of fairness of interaction in handling complaints, while respondents between 48 and 57 years of age had the lowest.
Bearing in mind that the Kruskal-Wallis test did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints in the Republic of Serbia, while in the Republic of Croatia it did, the H1.2 hypothesis is partially accepted. Table 3 shows the testing of the H1.3 hypothesis, determining a statistically significant difference in the perception of the fairness of the interaction in handling complaints concerning the level of education of tourist service users. The Kruskal-Wallis test did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints according to the level of education of respondents in the Republic of Serbia (Gp1, n = 1: primary school or less; Gp2, n = 51: high school; Gp3, n = 65: college or university; Gp4, n = 41: master's or doctorate), χ2 (3, n = 158) = 2.915, p = 0.405. Respondents with different levels of education in the Republic of Serbia perceived the fairness of interaction in handling complaints equally.
Relationship of perceptions of fairness and education
The Kruskal-Wallis test did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints according to the level of education of respondents in the Republic of Croatia (Gp1, n = 25: high school; Gp2, n = 85: college or university; Gp3, n = 29: master's or doctorate), χ2 (2, n = 139) = 0.422, p = 0.810. Respondents with different levels of education in the Republic of Croatia perceived the fairness of interaction in handling complaints equally.
Bearing in mind that the Kruskal-Wallis test in the entire sample (the Republic of Serbia and the Republic of Croatia) did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints according to the level of education of the respondents, the H1.3 hypothesis is rejected. Table 4 shows the testing of the H1.4 hypothesis, determining a statistically significant difference in the perception of the fairness of interaction in handling complaints concerning tourist service users' monthly income. Bearing in mind that the Kruskal-Wallis test in the Republic of Serbia did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints according to the amount of respondents' monthly income, while in the Republic of Croatia it did reveal a statistically significant difference, the H1.4 hypothesis can be partially accepted. Table 5 shows the testing of the H1.5 hypothesis, determining a statistically significant difference in the perception of the fairness of interaction in handling complaints concerning the place of residence of tourist service users.
Relationship of perceptions of fairness and rural vs. urban areas
The Mann-Whitney U test revealed a statistically significant difference in the perception of fairness of interaction in handling complaints between respondents from urban (Md = 3.33, n = 136) and rural areas (Md = 4.17, n = 22) in the Republic of Serbia, U = 1089.0, Z = -2.066, p = 0.039, r = 0.16 (small effect). In the Republic of Serbia, respondents from rural areas perceived more fairness in handling complaints than respondents from urban areas. (Table 5: Mean Rank and Median of perceptions of fairness of interaction in handling complaints in Serbia and Croatia across the place of residence of tourist service users.) Bearing in mind that the Mann-Whitney U test revealed a statistically significant difference in the perception of fairness of interaction in handling complaints according to respondents' place of residence in the Republic of Serbia, while in the Republic of Croatia it did not, it can be concluded that the H1.5 hypothesis is partially accepted. Table 6 shows the testing of the H1.6 hypothesis, determining a statistically significant difference in the perception of fairness of the interaction in handling complaints concerning the number of family members of tourist service users. The Kruskal-Wallis test revealed a statistically significant difference in the perception of fairness of interaction in handling complaints according to the number of respondents' family members in the Republic of Serbia (Gp1, n = 42: 1-2 members; Gp2, n = 86: 3-4 members; Gp3, n = 30: 5 and more members), χ2 (2, n = 158) = 5.940, p = 0.051. Respondents with different numbers of household members in the Republic of Serbia had different perceptions of the fairness of interaction in handling complaints: respondents with five or more members had the highest perception of fairness of interaction in handling complaints, while respondents with one or two family members had the lowest.
Relationship of perceptions of fairness and family size
The Kruskal-Wallis test did not reveal a statistically significant difference in the perception of fairness of interaction in handling complaints according to the number of respondents' family members in the Republic of Croatia (Gp1, n = 31: 1-2 members; Gp2, n = 100: 3-4 members; Gp3, n = 8: 5 and more members), χ2 (2, n = 139) = 1.963, p = 0.375. Respondents with different numbers of household members in the Republic of Croatia perceived the fairness of interaction in handling complaints equally.
Bearing in mind that the Kruskal-Wallis test revealed a statistically significant difference in the perception of fairness of interaction in handling complaints according to the number of respondents' family members in the Republic of Serbia, while it did not in the Republic of Croatia, it can be concluded that the H1.6 hypothesis is partially accepted.
Conclusion
By testing the auxiliary hypotheses, it was found that there were no differences in the perception of fairness of the interaction in handling complaints depending on gender and level of education as socio-demographic characteristics of the respondents. Statistically significant differences were found for age and monthly income in the Republic of Croatia, while in the Republic of Serbia the differences were identified for place of residence and the number of family members. After examining the existence of differences in the perception of fairness of interaction in the handling of complaints concerning individual socio-demographic characteristics of respondents (users of tourist services) through six auxiliary hypotheses, it can be concluded that the main hypothesis H1 (there is a statistically significant difference in the perception of fairness of interaction in handling complaints between tourist service users of different socio-demographic characteristics) is partially accepted.
In the Republic of Croatia, respondents over the age of 58 had the highest perception of fairness of interaction in handling complaints, which agrees with the results of Phau et al. (2008) and Ngai et al. (2007), but deviates from the results they reported for gender and monthly income. Regarding monthly income in the Republic of Croatia, the users with the lowest income had the highest perception of fairness of interaction in handling complaints. In the Republic of Serbia, the users who lived in the countryside and had five or more family members had the highest perception of interaction fairness. Therefore, differences in culture, age, monthly income, place of residence, and number of family members can shape different perceptions of the fairness of interaction.
As the research results showed a difference in the perception of fairness in four out of the six auxiliary hypotheses, it can be concluded that the users' reaction to unsatisfactory service reflects their socio-demographic characteristics. The limitations of this study may serve as a basis for future research that may focus on determining the extent to which socio-demographic characteristics participate in the overall perception of interactionist justice. In addition to socio-demographic characteristics, the psychographic characteristics of users should be included. To obtain information | 2022-01-06T16:18:03.160Z | 2021-12-07T00:00:00.000 | {
"year": 2021,
"sha1": "92ba623661d9c520db1095878c2bc196c1edf5be",
"oa_license": "CCBYNC",
"oa_url": "https://hrcak.srce.hr/ojs/index.php/entrenova/article/download/20248/11126",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1e81f51563bdf3e4e7e7838009fd69c774f10965",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
48544696 | pes2o/s2orc | v3-fos-license | Optimal Contours for High-Order Derivatives
As a model of more general contour integration problems we consider the numerical calculation of high-order derivatives of holomorphic functions using Cauchy's integral formula. Bornemann (2011) showed that the condition number of the Cauchy integral strongly depends on the chosen contour and solved the problem of minimizing the condition number for circular contours. In this paper we minimize the condition number within the class of grid paths of step size h using Provan's algorithm for finding a shortest enclosing walk in weighted graphs embedded in the plane. Numerical examples show that optimal rectangular paths yield small condition numbers even in those cases where circular contours are known to be of limited use, such as for functions with branch-cut singularities.
Introduction
To escape from the ill-conditioning of difference schemes for the numerical calculation of high-order derivatives, numerical quadrature applied to Cauchy's integral formula has on various occasions been suggested as a remedy (for a survey of the literature, see Bornemann 2011). To be specific, we consider a function f that is holomorphic on a complex domain D ∋ 0; Cauchy's formula gives
f^(n)(0) = n!/(2πi) ∫_Γ z^(−n−1) f(z) dz,    (1)
for each cycle Γ ⊂ D that has winding number ind(Γ; 0) = 1. If Γ is not carefully chosen, however, the integrand tends to oscillate at a frequency of order O(n^−1) with very large amplitude (Bornemann 2011, Fig. 4). Hence, in general, there is much cancelation in the evaluation of the integral and ill-conditioning returns through the backdoor. The condition number of the integral is (Deuflhard and Hohmann 2003, Lemma 9.1)
κ(Γ, n) = ∫_Γ |z|^(−n−1) |f(z)| d|z| / |∫_Γ z^(−n−1) f(z) dz|,
and Γ should be chosen so as to make this number as small as possible. Equivalently, since the denominator is, by Cauchy's theorem, independent of Γ, we have to minimize
d(Γ) = ∫_Γ |z|^(−n−1) |f(z)| d|z|.    (2)
Bornemann (2011) considered circular contours of radius r; he found that there is a unique r* = r(n) solving the minimization problem and that there are different scenarios for the corresponding condition number κ*(n) as n → ∞:
• κ*(n) → ∞, if f is in the Hardy space H^1;
• lim sup_{n→∞} κ*(n) ≤ M, if f is an entire function of completely regular growth which satisfies a non-resonance condition of the zeros and whose Phragmén-Lindelöf indicator possesses M maxima (a small integer).
Hence, though those (and similar) results basically solve the problem of choosing proper contours for entire functions, much better contours have to be found for the class H^1. Moreover, the restriction to circles lacks any algorithmic flavor that would point to more general problems depending on the choice of contours, such as the numerical solution of highly-oscillatory Riemann-Hilbert problems (Olver 2011). In this paper, we solve the contour optimization problem within the more general class of grid paths of step size h (see Fig. 1; we allow diagonals to be included) as they are known from Artin's proof of the general, homological version of Cauchy's integral theorem (Lang 1999, IV.3). Such paths are composed from horizontal, vertical and diagonal edges taken from a (bounded) grid Ω_h ⊂ D of step size h. Now, the weight function (2), being additive on the abelian group of path chains, turns the grid Ω_h into an edge-weighted graph such that each optimal grid path W* becomes a shortest enclosing walk (SEW); "enclosing" because we have to match the winding number condition ind(W*; 0) = 1. An efficient solution of the SEW problem for embedded graphs was found by Provan (1989) and serves as a starting point for our work.
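For circular contours, the two integrals above reduce to averages over the circle, so the optimal radius can be found by a simple one-dimensional search. The sketch below is our own illustration of that reduction (trapezoidal sums in the angle and a brute-force scan over r); it is not Bornemann's original procedure for locating r*.

```python
import math
import numpy as np

def circle_condition_number(f, r, n, m=4096):
    # On |z| = r the factors r^(-n-1) cancel, so
    #   kappa(C_r, n) = mean(|f(z)|) / |mean(f(z) e^{-i n t})|  with z = r e^{i t}.
    t = 2 * np.pi * np.arange(m) / m
    fz = f(r * np.exp(1j * t))
    return np.mean(np.abs(fz)) / abs(np.mean(fz * np.exp(-1j * n * t)))

def derivative_on_circle(f, r, n, m=4096):
    # Trapezoidal sum for f^(n)(0) = n!/(2 pi r^n) * int f(r e^{i t}) e^{-i n t} dt.
    t = 2 * np.pi * np.arange(m) / m
    return math.factorial(n) * np.mean(f(r * np.exp(1j * t)) * np.exp(-1j * n * t)) / r**n

f, n = np.exp, 20
radii = np.geomspace(1.0, 100.0, 400)
kappas = [circle_condition_number(f, r, n) for r in radii]
r_best = radii[int(np.argmin(kappas))]
print(r_best, min(kappas), derivative_on_circle(f, r_best, n))  # for exp: f^(n)(0) = 1
```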
Outline of the Paper. In Section 2 we discuss general embedded graphs in which an optimal contour is to be searched for; we discuss the problem of finding a shortest enclosing walk and recall Provan's algorithm. In Section 3 we discuss some implementation details and tweaks for the problem at hand. Finally, in Section 4 we give some numerical examples; these can easily be constructed in a way that the new algorithm outperforms, by orders of magnitude, the optimal circles of Bornemann (2011) with respect to accuracy and the direct symbolic differentiation with respect to efficiency.
Contour Graphs and Shortest Enclosing Walks
By generalizing the grid Ω_h, we consider a finite graph G = (V, E) embedded in D, that is, built from vertices V ⊂ D and edges E that are smooth curves connecting the vertices within the domain D. We write uv for the edge connecting the vertices u and v; by (2), its weight is defined as
d(uv) = ∫_uv |z|^(−n−1) |f(z)| d|z|.    (3)
A walk W in the graph G is a closed path built from a sequence of adjacent edges, written as W = w1w2 + w2w3 + ··· + wmw1 (where + denotes joining of paths); it is called enclosing the obstacle 0 if the winding number is ind(W; 0) = 1. The set of all possible enclosing walks is denoted by Π. As discussed in §1, the condition number is optimized by the shortest enclosing walk (not necessarily unique). The problem of finding such a SEW was solved by Provan (1989): the idea is that, with P_u,v denoting a shortest path between u and v, any shortest enclosing walk W* = w1w2 + w2w3 + ··· + wmw1 can be cast in the form (Provan 1989, Thm. 1) W* = P_w1,wj + wjwj+1 + P_wj+1,w1 for at least one j. Hence, any shortest enclosing walk W* is already specified by one of its vertices and one of its edges; therefore Provan's algorithm finds W* by, first, building the finite set of such candidate walks; second, removing all walks from it that do not enclose z = 0; and third, selecting a walk from the remaining candidates that has the lowest total weight. Using Fredman and Tarjan's (1987) implementation of Dijkstra's algorithm to compute the shortest paths P_u,v, the run time of the algorithm is polynomial in |V| and |E| (Provan 1989, Corollary 2).
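The enumeration just described translates directly into code: one shortest-path tree per vertex, one candidate walk per (vertex, edge) pair, a winding-number filter, and a minimum. The sketch below is a straightforward, not performance-tuned, rendering of that idea using NetworkX; it assumes a connected graph whose vertices are complex numbers, none of which coincides with the obstacle 0.

```python
import cmath
import networkx as nx

def winding_number(points):
    # Winding number around 0 of the closed polygon through `points` (complex numbers).
    # Each straight segment subtends an angle of magnitude < pi at the origin, so the
    # principal-value turns sum to exactly 2*pi times the winding number.
    total = 0.0
    for a, b in zip(points, points[1:] + points[:1]):
        total += cmath.phase(b / a)
    return round(total / (2 * cmath.pi))

def shortest_enclosing_walk(G):
    # Provan-style enumeration: every SEW has the form  P(a -> u) + uv + P(v -> a)
    # for some vertex a and edge uv, so it suffices to scan all such candidates.
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    path = dict(nx.all_pairs_dijkstra_path(G, weight="weight"))
    best, best_walk = float("inf"), None
    for a in G.nodes:
        for u, v, data in G.edges(data=True):
            for s, t in ((u, v), (v, u)):              # try both orientations of the edge
                w = dist[a][s] + data["weight"] + dist[t][a]
                if w >= best:
                    continue
                walk = path[a][s] + path[t][a]         # closed walk: a ... s, t ... a
                if winding_number(walk[:-1]) == 1:     # drop the repeated endpoint
                    best, best_walk = w, walk
    return best, best_walk
```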
Implementation Details
We restrict ourselves to graphs Ω_h given by finite square grids of step size h, centered at z = 0, with all vertices and edges removed that do not belong to the domain D. Since Provan's algorithm just requires an embedded graph but not a planar graph, we may add the diagonals of the grid cells as further edges to the graph (see Fig. 1). These diagonals increase the number of possible slopes, which results, e.g., in improved approximations of the direction of steepest descent at a saddle point of d(z) (Bornemann 2011, §9) or in a faster U-turn around the end of a branch cut, see Fig. 5; the latter case leads to some significant reductions of the condition number, see Fig. 4. For such a graph Ω_h, with or without diagonals, the number of edges grows only linearly with the number of vertices.
Figure 2. The color coding shows the size of log d(z), with red for large values and green for small values. The smallest level shown is the threshold below which the edges of W* do not contribute to the first couple of significant digits of the total weight. The plots illustrate that W* and W_v* differ typically just in a small region well below this threshold; consequently, both walks yield about the same condition number. On the right, note the five-leaved clover that represents the combination of algebraic and essential singularity at z = −1.
3.1. Edge Weight Calculation. Using the edge weights d(uv) on Ω_h requires approximating the integral in (3). Since not much accuracy is needed here (recall that optimizing the condition number is just a question of order of magnitude, not of precise numbers; once the contour Γ has been fixed, a much more accurate quadrature rule will be employed to calculate the integral (1) itself, see §3.5), a simple trapezoidal rule with two nodes is generally sufficient: d̃(uv) = |u − v|/2 · (|u|^(−n−1)|f(u)| + |v|^(−n−1)|f(v)|). Although d̃(uv) will typically have an accuracy of not more than just a few bits for the rather coarse grids Ω_h we work with, we have not encountered a single case in which a more accurate computation of the weights would have resulted in a different SEW W*.
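Putting the last two ingredients together, the following sketch builds a weighted grid graph Ω_h with the two-node trapezoidal edge weights and hands it to the `shortest_enclosing_walk` helper from the previous sketch. The constructor, its argument names, and the default domain test are our own choices for illustration.

```python
import numpy as np
import networkx as nx

def grid_graph(f, n, h, half_width, in_domain=lambda z: True, diagonals=True):
    # Square grid of step size h on [-half_width, half_width]^2, centered at 0.
    # Edge weight: two-node trapezoidal rule for the integral of |z|^(-n-1)|f(z)| d|z|.
    g = lambda z: abs(z) ** (-n - 1) * abs(f(z))
    m = int(round(half_width / h))
    z = lambda j, k: h * complex(j, k)                  # one canonical expression per vertex
    keep = {(j, k) for j in range(-m, m + 1) for k in range(-m, m + 1)
            if (j, k) != (0, 0) and in_domain(z(j, k))} # drop the obstacle z = 0
    steps = [(1, 0), (0, 1)] + ([(1, 1), (1, -1)] if diagonals else [])
    G = nx.Graph()
    for (j, k) in keep:
        for (dj, dk) in steps:
            if (j + dj, k + dk) in keep:
                u, v = z(j, k), z(j + dj, k + dk)
                G.add_edge(u, v, weight=abs(u - v) / 2 * (g(u) + g(v)))
    return G

# Example: a small grid for f(z) = exp(z), n = 10.  The candidate scan in
# shortest_enclosing_walk is O(|V||E|), so keep illustrative grids small.
G = grid_graph(np.exp, n=10, h=2.0, half_width=20.0)
# weight, walk = shortest_enclosing_walk(G)
```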
3.2. Reducing the size of Π. As described in Section 2, Provan's algorithm starts by building a walk for every pair (v, e) ∈ V × E and then proceeds by selecting the best enclosing one. A simple heuristic, which worked well for all our test cases, helps to considerably reduce the number of walks to be processed: let v* be the vertex of lowest weight and define W_v* as a SEW subject to the constraint that it traverses v*. Obviously W* and W_v* do not need to agree in general, as v* does not have to be traversed by W*. However, since v* is the vertex with lowest weight, both walks differ mainly in a region that has no, or very minor, influence on the total weight and, consequently, also no significant influence on the condition number. Actually, W* and W_v* yielded precisely the same total weight for all functions that we have studied (Fig. 2). 3.3. Size of the Grid Domain. The side length l of the square domain supporting Ω_h has to be chosen large enough to contain a SEW that would approximate an optimal general integration contour. E.g., if f is entire, we choose l large enough for this square domain to cover the optimal circular contour: l > 2r*, where r* is the optimal radius given in Bornemann (2011); a particularly simple choice is l = 3r*. In other cases we employ a simple search for a suitable value of l by calculating W* for increasing values of l until d(W*) does not decrease substantially anymore. During this search the grid is just rescaled, that is, each grid uses a fixed number of vertices; this way only the number of search steps enters as an additional factor in the complexity bound.
Multilevel Refinement of the SEW.
Choosing a proper value of h is not straightforward since we would like to balance a good approximation of a generally optimal integration contour with a reasonable amount of computing time. In principle, we would construct a sequence of SEWs for smaller and smaller values of h until the total weight of W* does not substantially decrease anymore. To avoid an undue amount of computational work, we do not refine the grid everywhere but use an adaptive refinement by confining it to a tubular neighborhood of the currently given SEW W* (see Fig. 3):
1: calculate W* within an initial grid;
2: subdivide each rectangle adjacent to W* into 4 rectangles;
3: remove all other rectangles;
4: calculate W* in the newly created graph.
As long as the total weight of W* decreases substantially, steps 2 to 4 are repeated. It is even possible to tweak that process further by not subdividing rectangles that just contain vertices or edges of W* having weights below a certain threshold. By geometric summation, the complexity of the resulting algorithm is dominated by the cost on the finest grid, where H denotes the step size of the coarsest grid and h = H/2^k the step size after k loops of adaptive refinement. An analogous approach for the constrained W_v*-variant of the SEW algorithm given in §3.2 reduces the complexity further.
The reduced complexity is close to the best possible bound O(h^−1) given by the work that would be needed to just list the SEW.
3.5. Quadrature Rule for the Cauchy Integral. Finally, after calculation of the SEW Γ = W*, the Cauchy integral (1) has to be evaluated by some accurate numerical quadrature. We decompose Γ into maximally straight line segments, each of which can be a collection of many edges. On each of those line segments we employ Clenshaw-Curtis quadrature in Chebyshev-Lobatto points. Additionally, we neglect segments with a weight smaller than 10^−24 times the maximum weight of an edge of Γ, since such segments will not contribute to the result within machine precision. This way we not only get spectral accuracy but also, in many cases, fewer nodes than would be needed by the vanilla version of trapezoidal sums on a circular contour: Fig. 4 shows an example with order of differentiation n = 300 and accurate solutions using just about 200 nodes, which is well below what the sampling condition would require for circular contours (Bornemann 2011, §2.1). Of course, trapezoidal sums would also benefit from some recursive device that helps to neglect those nodes which do not contribute to the numerical result.
Figure 4. Illustration of the spectral accuracy of piecewise Clenshaw-Curtis quadrature on SEW contours for a function with a branch-cut singularity. For larger n, we observe a significant improvement by adding diagonals to the grid. We get to machine precision for n = 10 and lose about two digits for n = 300. (Note that for optimized circular contours the loss would have been about 6 digits for n = 10 and about 15 digits for n = 300; cf. Bornemann 2011, Thm. 4.7.)
Table 1. Condition numbers for some f(z): r* are the optimal radii given in Bornemann (2011); W* was calculated in all cases on a 51 × 51 grid with l = 3r* (in the last two cases l was found as in §3.3). For 1/Γ(z), the peculiar order of differentiation n = 2006 is one of the very rare resonant cases (specific to this entire function) for which circles give exceptionally large condition numbers (cf. Bornemann 2011, Table 5). In the last example, differentiation is for z = 1/√2.
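The segment-wise evaluation of (1) can be sketched as follows. For brevity this illustration uses Gauss-Legendre nodes on each straight segment instead of the Clenshaw-Curtis rule in Chebyshev-Lobatto points used in the paper, and it does not implement the 10^−24 segment-dropping threshold; both simplifications are ours.

```python
import math
import numpy as np

def cauchy_derivative_on_polygon(f, vertices, n, nodes_per_segment=32):
    # f^(n)(0) = n!/(2*pi*i) * sum over segments of  int_u^v z^(-n-1) f(z) dz,
    # with each straight segment handled by a fixed high-order quadrature rule.
    x, w = np.polynomial.legendre.leggauss(nodes_per_segment)   # rule on [-1, 1]
    total = 0.0 + 0.0j
    m = len(vertices)
    for k in range(m):
        u, v = vertices[k], vertices[(k + 1) % m]                # closed contour
        z = u + (v - u) * (x + 1) / 2                            # map [-1, 1] -> [u, v]
        total += np.sum(w * f(z) * z ** (-n - 1)) * (v - u) / 2
    return math.factorial(n) * total / (2j * np.pi)

# Example: the unit square around 0 (winding number 1) reproduces f^(n)(0) for moderate n
square = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
print(cauchy_derivative_on_polygon(np.exp, square, n=5))         # exact value: 1
```

For very large n, the factor z^(−n−1) in this naive form can overflow or underflow in hardware arithmetic; the example is meant only to illustrate the decomposition of the contour into straight segments.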
Numerical Results
Table 1 displays condition numbers of SEWs W * as compared to the optimal circles C r * for five functions; Table 2 gives the corresponding CPU times and Fig. 5 shows some of the contours. (All experiments were done using hardware arithmetic.) The purpose of these examples is twofold, namely to demonstrate that: (1) the SEW algorithm matches the quality of circular contours in cases where the latter are known to be optimal such as for entire functions; (2) the SEW algorithm is significantly better than the circular contours in cases where the latter are known to have severe difficulties.
Thus, the SEW algorithm is a flexible automatic tool that covers various classes of holomorphic functions in a completely algorithmic fashion; in particular there is no deep theory needed to just let the computation run.
In the examples of entire f we observe that W * and W v *, like the optimal circle C r * would do, traverse the saddle points of d(z). It was shown in Bornemann (2011, Thm. 10.1) that, for such f, the major contribution to the condition number comes from these saddle points and that circles are (asymptotically, as n → ∞) paths of steepest descent. Since W * can cross a saddle point only in a horizontal, vertical, or (if enabled) diagonal direction, somewhat larger condition numbers have to be expected. However, the order of magnitude of the condition number of C r * is precisely matched. This match holds in cases where circles give a condition number of approximately 1, as well as in cases with exceptionally large condition numbers, such as for f(z) = 1/Γ(z) in the peculiar case of the order of differentiation n = 2006 (cf. Bornemann 2011, §10.4).
For instance, f(z) = (1 − z)^(11/2) gives κ(C r *, n) ∼ 0.16059 · n^(13/2). On the other hand, W * gives condition numbers that are orders of magnitude better than those of C r * by automatically following the branch cut at (1, ∞). The latter example can easily be cooked up to outperform symbolic differentiation as well: using Mathematica 8, the calculation of the n-th derivative of f(z) = exp(1/(1 + 8z)^(1/5))(1 − z)^(11/2) J_0(z) at z = 1/√2 already takes about a minute for n = 23 but had to be stopped after more than a week for n = 100. Despite the additional difficulty stemming from the combination of an algebraic and an essential singularity at z = −1/8, the W v * version of the SEW calculates this n = 100 derivative to an accuracy of 13 digits in less than 4 s; whereas optimized circular contours would give only about 3 correct digits here (see Fig. 6).
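For orientation, the circular-contour baseline that the SEW results are compared against throughout this section is itself only a few lines of code: the Cauchy integral for f^(n)(z0) discretized by an m-point trapezoidal sum on the circle |z − z0| = r. This is not the SEW algorithm, and the choices of r and m below are deliberately naive; the 2011 paper derives the optimal radii r * and the sampling condition on m.

```python
import math
import numpy as np

def derivative_on_circle(f, z0, n, r, m=None):
    """n-th derivative of f at z0 via the trapezoidal sum for the Cauchy integral on the
    circle |z - z0| = r: n!/(m r^n) * sum_k f(z0 + r e^{i th_k}) e^{-i n th_k}."""
    if m is None:
        m = max(2 * n, 32)                     # naive choice of the number of nodes
    theta = 2.0 * np.pi * np.arange(m) / m
    vals = f(z0 + r * np.exp(1j * theta))
    return math.factorial(n) / (m * r ** n) * np.sum(vals * np.exp(-1j * n * theta))

# illustration: the 5th derivative of exp at 0 is 1
approx = derivative_on_circle(np.exp, 0.0, 5, r=1.0)
```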
While many more such numerical experiments would demonstrate that reasonably small condition numbers are obtainable in general, the study of rigorous condition number bounds for the SEW has to be postponed to future work. | 2012-07-31T14:21:16.000Z | 2011-07-03T00:00:00.000 | {
"year": 2011,
"sha1": "4326849f5c73fc31b6800d0b18954de69fd5c24b",
"oa_license": null,
"oa_url": "http://www-m3.ma.tum.de/foswiki/pub/M3/Allgemeines/FolkmarBornemannPublications/IMA_J_Numer_Anal-2013-Bornemann-403-12.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "27ee75813d53c236cbc5b64cea1ff2c96d237536",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
232131135 | pes2o/s2orc | v3-fos-license | Changes in the Neurochemical Coding of the Anterior Pelvic Ganglion Neurons Supplying the Male Pig Urinary Bladder Trigone after One-Sided Axotomy of Their Nerve Fibers
The present study investigated the effect of unilateral axotomy of urinary bladder trigone (UBT)-projecting nerve fibers from the right anterior pelvic ganglion (APG) on changes in the chemical coding of their neuronal bodies. The study was performed using male pigs with immunohistochemistry and quantitative real-time PCR (qPCR). The animals were divided into a control (C), a morphological (MG) or a molecular biology group (MBG). APG neurons supplying UBT were revealed using the retrograde tracing technique with Fast Blue (FB). Unilateral axotomy resulted in an over 50% decrease in the number of FB+ neurons in both APG ganglia. Immunohistochemistry revealed significant changes in the chemical coding of FB+ cells only in the right ganglion: decreased expression of dopamine-B-hydroxylase (DBH)/tyrosine hydroxylase (TH) and up-regulation of the vesicular acetylcholine transporter (VAChT)/choline acetyltransferase (ChAT), galanin (GAL), vasoactive intestinal polypeptide (VIP) and brain nitric oxide synthase (bNOS). The qPCR results partly corresponded with immunofluorescence findings. In the APGs, genes for VAChT and ChAT, TH and DBH, VIP, and NOS were distinctly down-regulated, while the expression of GAL was up-regulated. Such data may be the basis for further studies concerning the plasticity of these ganglia under experimental or pathological conditions.
Introduction
The main source of autonomic innervation of the lower urinary tract of mammals is the pelvic plexus [1][2][3][4]. Its morphology is characterized by high interspecies diversity [2,3] ranging from a network containing numerous small ganglia in humans, pigs, rabbits, dogs and cats, to two clearly distinguished parts in the guinea pig: the cranial nerve network, with a large, paired ganglion called the anterior major pelvic ganglion (AMPG), and the caudal nerve network with numerous diffuse nerve ganglia [2]. In the male rat, most neurons form one large ganglion, called the major pelvic ganglion (MPG) and several additional (accessory) ganglia [2,3].
In the male domestic pig, very numerous pelvic neurons are organized in an orderly manner, and the largest and most cranial group of nerve cells are the anterior pelvic ganglia (APG) located on both sides of the bladder neck [1]. Pelvic ganglia are distinguished from other ganglia of the autonomic nervous system because they have a "mixed", sympathetic-parasympathetic character [1-3,5]. Adrenergic neurons dominate in the cranial ganglia, while in the caudal ganglia they are scarce.
Previous studies have shown the involvement of APG neurons in the autonomic innervation of boar urogenital organs. Fibers originating from the right and left APG supply the trigone and the neck of the bladder and are responsible for the innervation of the vas deferens, seminal glands and the prostate body [4].
Axotomy is a factor that leads to the degeneration and even death of nerve cells [7]. The disruption of the neurite halts the retrograde transport of neurotrophic factors from peripheral tissues to the perikaryon, which is responsible for maintaining the basic functions of the cell body [7]. Nerve cells demonstrate numerous adaptation processes, enabling them to adapt to new conditions and to defend themselves against the effects of axotomy. The available data on the influence of axotomy on neurons of the autonomic system refer to cells located in the paravertebral ganglia (sympathetic trunk ganglia, including SChG, the superior cervical ganglion-SCG, and the stellate ganglion) and prevertebral ganglia (MPG, CaMG, paracervical ganglion-PCG) [8][9][10][11][12][13][14][15][16][17][18][19]. Knowledge of how pelvic neurons react to damage to their axons is fragmentary. The only available data concern changes in the chemical coding of CaMG and APG neurons supplying the testes in the boar after unilateral and bilateral orchidectomy [20,21].
The goal of this study was to create an experimental model to study the influence of damage to the autonomic nerve fibers supplying the urogenital organs on the biology of their neurons using the domestic pig. The domestic pig is the animal species most closely related to humans in anatomical, histological and physiological aspects [4,22,23]. Therefore, the experiment aimed to determine the effect of unilateral axotomy of the nerve fibers projecting from the right APG to the UBT on the number and distribution of UBT-projecting nerve cells in the right and left APG, and on the expression, at the protein and mRNA level, of selected biologically active substances in the right and left APG neurons: adrenergic neuronal markers (DBH, TH), cholinergic cell markers (VAChT, ChAT), selected neuropeptides (VIP, NPY, GAL) and NOS.
Distribution and Number of FB+ Neurons in the APG Ganglia Supplying the UBT
In the control group, FB+ neurons were located in bilateral ganglia, and their total number was 16,539 (3308 ± 786.3 SEM). There was a difference in their number between the right and left APG. The right APG was characterized by a higher number of retrogradely labeled nerve cells compared to the left APG, amounting to 11,372 (2274 ± 575 SEM) and 5167 (1033 ± 312.2 SEM), respectively (Figure 1). Although the distribution of FB+ neurons was uniform, in the right ganglion they were concentrated mainly in the area of the exit of the nerve fibers to the seminal vesicle and the urinary bladder trigone. In the experimental group, as a result of the right-sided axotomy, there was a significant change in the number of FB+ neurons in both ganglia (Figure 1). Both in the right and left APG there was a decrease in their number. The total number of FB+ cells was 7679 (1536 ± 766 SEM) including 5415 in the right ganglion (1083 ± 607.2 SEM) and 2246 in the left ganglion (452.8 ± 162.2 SEM). In the right APG, FB+ neurons were present mainly at the exit of nerve fibers to the seminal vesicle and UBT, while in the left APG they were concentrated mainly on one side, within the area supplied by the hypogastric nerve (Figures 1 and 2a,b).
Control Group
Immunohistochemical studies showed that FB+ neurons of the anterior pelvic ganglia formed three major neuronal populations. The vast majority of such cells in the right and left APG (60.39 ± 1.09% and 70.92 ± 0.84%, respectively) belonged to the group of adrenergic neurons (TH/DBH-IR). The second-largest population was cells belonging to the group of cholinergic neurons (ChAT/VAChT-IR), which accounted for 39.14 ± 0.94% of FB+ neurons on the right and 30.46 ± 1.16% on the left. The smallest population was made up of cells belonging to the group of NANC neurons (non-adrenergic/non-cholinergic), which made up 1.64 ± 0.37% in the right ganglion, and 1.62 ± 0.29% of the total FB+ neuronal population in the left ganglion (Figure 3). Immunohistochemical staining also revealed that the FB+ neurons of both APG ganglia were immunoreactive for VIP (Figure 4a). Moreover, it was observed that the percentage of FB+ neurons containing the studied neuropeptides did not differ significantly between the right and left ganglia. NPY-IR cells accounted for 21.17 ± 0.49% of right and 22.28 ± 1.08% of the left APG retrogradely labeled neurons, while VIP-IR perikaryons accounted for 4.40 ± 0.19% and 4.69 ± 0.22%, and GAL 0.92 ± 0.03% and 0.98 ± 0.02%, respectively (Figure 3).
Experimental Group
In the experimental group, it was observed that the unilateral axotomy significantly changed the chemical coding of FB+ neurons in the right APG (Figure 8). Immunohistochemical studies showed that after cutting the nerve fibers there was a drastic decrease in the number of FB+/TH/DBH-IR cells (from 60.39 ± 1.09% to 34.40 ± 2.43%) and a slight increase in the number of FB+/VAChT/ChAT-IR neurons (from 39.14 ± 0.94% to 48.00 ± 2.81%) as well as of NANC neurons (from 1.64 ± 0.37% to 19.49 ± 1.72%) in the right ganglion. Additionally, there was a significant increase in the number of VIP-immunoreactive FB+ nerve cells (from 4.40 ± 0.19% to 37.39 ± 1.13%) (Figure 4e).
(Figure caption, partial: confocal laser scanning microscope images of right APG sections from the control and experimental groups; arrows indicate FB+, GAL-positive but TH-negative neurons, arrowheads mark varicose GAL-IR fibers; blue, green and red channels digitally superimposed; scale bar = 50 µm.)
On the basis of the performed immunohistochemical staining, it was also found that the right-sided axotomy did not cause significant changes in the chemical coding of FB+ neurons in the left APG (Figure 10). FB+/TH/DBH-IR cells accounted for 69.49 ± 1.14% of the total left APG retrogradely labeled neurons. FB+/VAChT/ChAT-IR perikaryons made up 31.54 ± 1.21% and NANC neurons made up 1.72 ± 0.29% of the FB+ neuronal population. In total, 21.48 ± 0.97% of FB+ neurons were immunoreactive for NPY, 3.49 ± 0.09% for VIP and 1.42 ± 0.23% for GAL. As in the control group, no cells expressing bNOS were found (Figure 9).
Distribution and Immunohistochemical Characteristics of Intraganglionic Nerve Fibers
Control Group
Immunohistochemical staining showed numerous (++++) VAChT-positive intraganglionic nerve fibers of the right and left APG (Figure 7b). They formed bundles which, penetrating deep into the ganglia, separated the clusters of adrenergic and cholinergic neurons. Smooth VAChT-IR fibers, characterized by a longitudinal course, were located outside the clusters of nerve cells, in the peripheral regions of APG. It was observed that only single VAChT-immunoreactive fibers also contained VIP and NPY immunoreactivity. ChAT-IR fibers appeared as single (+), and their distribution was uniform over the area of the ganglia. They were visible as bands of smooth fibers running between nerve cells, and only a few showed the presence of GAL. In the middle part of the ganglia, scarce (++) varicose VIP-IR fibers (Figures 4c and 11c) were observed, which were located between the FB+/TH/DBH-IR and FB+/VAChT/ChAT-IR perikaryons. The examined ganglia also showed a small (++) number of evenly spaced varicose NPY-IR fibers (Figure 8b). Most of them surrounded both adrenergic and cholinergic cells, and individual fibers divided the ganglia into smaller regions. No bNOS-and TH-immunoreactive fibers were found in either the right or left APG, although single (+) and unevenly distributed varicose GAL-IR fibers ( Figure 12c) and DBH-IR were observed.
Experimental Group
After axotomy, it was found that there was a significant increase in the density of DBH-IR and TH-IR fibers in both the right and left APG, while the number of VAChT-IR fibers decreased dramatically. The DBH-IR fibers were found in large numbers (+++) (Figures 4f and 6f) and were arranged evenly throughout the cross-sections of the examined ganglia. They mainly supplied DBH-positive neurons and were part of the bundles of fibers that divided the ganglia into smaller regions. The vast majority of DBH-IR fibers showed immunoreactivity to NPY, and only a small number of them were VIP-IR (Figure 4h). The intraganglionic TH-IR fibers were numerous (+++) and unevenly distributed throughout the right and left APG. Most often they were concentrated around TH-positive neurons, forming numerous "basket-like structures". Only some of the fibers surrounding TH-IR cells showed immunoreactivity to GAL. It was also found that a small number of TH-IR fibers were clustered in the caudal pole of the ganglia and they were largely simultaneously GAL-positive. In the experimental group, it was observed that in the right and left APG there are evenly distributed single VAChT-IR fibers belonging to the varicose fiber population. Only a few of them showed immunoreactivity to VIP and NPY. After unilateral axotomy, a significant increase in the density of VIP-IR (Figures 4g and 10g) and GAL-IR (Figures 5g and 11g) fibers was noted in the right and left APG. VIP-IR and GAL-IR fibers were evenly distributed, however, in the right ganglia they were slightly more numerous (++++) than in the left ganglia (+++). The vast majority of VIP-IR fibers were varicose fibers that formed "basket-like structures" around VIP-containing neurons. The varicose GAL-IR fibers most frequently entwined the GAL-containing cells and formed bundles separating cholinergic cells from adrenergic cells. In the experimental group, an increase in NPY-IR fiber density was noted only in the right APG (+++). In the left ganglion, their number was not changed (++). Most often they were visible as varicose fibers, mostly surrounding DßH-IR cells and forming basket-like structures around NPY-IR neurons. Single, smooth NOS-IR fibers were observed only in the terminal part of the right APG, while they were absent in the left ganglion.
Figure 11. (a-d) Confocal laser scanning microscope images showing a section of the right (R) APG from the control (C) group. The arrow indicates the FB+ neuron (a) containing VIP (c) but VAChT negative (b). The arrowhead indicates the low number of VIP-IR fibers (c,d). The images were taken from blue, green and red fluorescent channels. Blue, green and red channels were digitally superimposed. Scale bar = 50 µm. (e-h) Confocal laser scanning microscope images showing a section of the right (R) APG from the experimental group. Arrows indicate FB+ neurons (e) simultaneously immunoreactive (h) against VAChT (f) and VIP (g). A large number of VIP-IR fibers (g,h) were marked with the arrowhead. The images were taken from blue, green and red fluorescent channels. Blue, green and red channels were digitally superimposed. Scale bar = 50 µm.
Expression of Genes Coding for the Synthesis of Selected Biologically Active Substances in the Right and Left APG
The qPCR study showed that, as a result of the right-sided axotomy, the expression of genes responsible for the synthesis of biologically active substances in both APG ganglia changed. Compared to the control group, gene expression for VAChT and ChAT decreased, respectively, 3.95-fold and 21.46-fold in the right ganglion, and 2.88-fold and 4.82-fold in the left ganglion (Figure 13a,b). It was also observed that the expression of genes encoding DBH and TH decreased and was, respectively, 10.25 times and 4.72 times lower in the right APG and 5.53 times and 5.90 times lower in the left APG than in the control group (Figure 13a,b). Moreover, it was found that the expression of the NOS gene showed a 14.80-fold decrease in the right and a 7.57-fold decrease in the left APG compared to the control group (Figure 13a,b). It was also observed that, after axotomy, VIP gene expression was 12.06 times lower in the right APG and 3.54 times lower in the left APG than in the control (Figure 13a,b). Moreover, in the right APG there was a 34.18-fold increase in the expression of genes for GAL, while in the left ganglion no statistically significant changes were observed in relation to the control group (Figure 13a,b).
In the right APG, the expression of the gene encoding NPY was 1.61 times higher than in the control, but there were no statistically significant changes in the expression of this gene in the left ganglion (Figure 13a,b).
Discussion
It was observed that the neurons projecting to UBT were located in bilateral ganglia, and their number was significantly higher in the right APG than in the left APG and it amounted to approx. 68.76% and approx. 31.24% of all the retrogradely-labeled nerve cells, respectively. Based on immunohistochemical stainings, it was found that FB+ neurons of bilateral APG ganglia belonged to three neuronal populations: adrenergic, cholinergic and NANC. They also showed immunoreactivity to VIP, NPY and GAL, but not to bNOS. The above results are largely in line with the observations made previously in the same animal species by other researchers [4,24]. It is well known that autonomic neurons may respond to damage to their nerve projections through numerous morphological and functional changes [3]. The cutting of nerve fibers deprives the perikaryons of the neurotrophic factors responsible for maintaining their basic life functions, which results in the activation of processes that may lead to the degeneration and even apoptosis of the neuron [7,11,25,26]. The experiment described in this paper showed that the right-sided axotomy procedure changed the number of FB+ cells both in the ipsilateral and contralateral APG. It was found that more than 50% of the FB+ neurons in the left and right APG died as a result of unilateral axotomy. The obtained results indicate that the pathological factor (the cutting of the autonomic nerve fibers) can induce extensive death of nerve cells, both of the directly axotomized and of adjacent neurons. Different results were obtained in sexually immature gilts subjected to bilateral axotomy of the posterior colic nerves (nervi colici caudales) projecting from CaMG to the colon. In this experiment, the number of FB+ neurons in both CaMG ganglia did not change [11]. On the other hand, in rats, it was shown that cutting the hypoglossal nerve (nervus hypoglossus) resulted in the death of approx. 30% of the motor neurons, while approx. 70% of them survived [2,27]. Therefore, it seems likely that the survival of nerve cells after damage to their axons depends on the intrinsic properties of perikaryons, as well as the properties of the surrounding cellular environment and the type of tissues they supply.
The authors' immunohistochemical studies showed that the right-sided axotomy significantly changed the chemical coding of right APG FB+ neurons, but did not cause significant changes in the retrogradely-labeled cells of the left ganglion.
In the experimental group, as a result of a unilateral axotomy, it was observed that the number of FB+/TH/DBH-IR neurons in the right APG decreased drastically from approx. 60% to approx. 34%. Similar results were obtained in studies of unilateral and bilateral orchidectomy in sexually mature boars. It was shown that both in animals undergoing right-sided and bilateral castration, the number of TH-IR neurons located in APG and CaMG decreased from approx. 60% to approx. 25-30%, and DBH-IR neurons from approx. 60 to approx. 50% [20]. A strong reduction of TH-immunoreactive neurons was also noted in CaMG in gilts undergoing bilateral axotomy of posterior colonic nerves (from approx. 87% to 61%), as well as following the partial or complete removal of the uterus [11,19]. These observations correspond with the changes noted in rat SCG neurons after cutting their axons [18]. It is believed that most or all pelvic adrenergic neurons are under the influence of nerve growth factor (NGF) which is synthesized in the tissues of the urogenital system [2]. Moreover, it is a widely accepted theory that the reduction of TH synthesis in axotomy-affected neurons derives from their deprivation of the neurotrophic factors, particularly NGF [8,21,[28][29][30][31].
It is interesting that after unilateral axotomy, a large number of TH-IR fibers appeared (from none to +++) and the number of DBH-IR fibers increased (from + to +++) in the right APG. This phenomenon is difficult to explain at the moment. It may be presumed that the appearance of intraganglionic adrenergic fibers is a consequence of compensatory mechanisms developed in response to the massive loss of FB+/TH/DBH-IR neurons.
In the course of this study, it was also observed that after unilateral axotomy there was a drastic decrease in the number of VAChT-IR fibers (from ++++ to +) and a significant decrease in the expression of the genes encoding VAChT and ChAT in the right APG, while the number of VAChT/ChAT-IR neurons increased slightly (by approx. 9%). In the available literature, there are only reports of a transient decrease in both the number of VAChT-IR nerve endings and the level of ChAT mRNA as a result of unilateral hypoglossal nerve axotomy in the rat [32][33][34].
It is generally accepted that one of the most important and distinctive properties of axotomy-affected neurons is their ability to alter their chemical coding pattern [9,12,28,35]. The current study showed that the right APG from experimental animals showed a significant increase in the number of FB+/VIP-IR and FB+/GAL-IR cells, while the number of FB+/NPY-IR cells did not change significantly. Moreover, it was observed that approximately 18% of the retrogradely-labeled neurons started to show immunoreactivity to bNOS. An increase in VIP and GAL expression was also found in APG and CaMG neurons of boars after unilateral and bilateral orchidectomy [20]. In previous studies, it was also observed that axotomy causes a decrease in NPY expression, but does not affect the expression of VIP and NOS in CaMG neurons in gilts, while it induces the synthesis of these substances in rat SCG [11][12][13]17]. The above-cited data support the hypothesis that the changes in neuropeptide expression in axotomy-affected neurons appear to be organand species-specific [11].
Currently, it is believed that biologically active substances such as GAL, VIP and NOS are crucial for the survival of neurons and the regeneration process of nerve endings [36]. In this experiment, it was observed that the increase in the number of FB+ neurons immunoreactive to GAL (from approx. 1% to approx. 36%) was accompanied by a significant increase in the number of GAL-IR fibers (from + to ++++), which was also correlated with a significant (34-fold) increase in gene expression of this neuropeptide. It is believed that overexpression of galanin is most likely the result of the action of leukemia inhibitory factor (LIF), which is released by damaged neurolemmocytes [17,28,37]. Moreover, it is believed that the increase in VIP expression following the axotomy may also be the result of perikaryon exposure to LIF, as well as the effect of depriving them of the NGF supply [17]. In this study, it was found that in response to the unilateral axotomy there was a significant increase in the number of FB+/VIP-IR cells (from approx. 4 to approx. 37%) and VIP-IR fibers (from ++ to ++++) in the right APG, while the expression of the VIP gene was significantly decreased (12-fold). It should be noted that the increase in the number of nerve cells immunoreactive to a given neuropeptide does not always correlate with an increase in the expression of their genes. It can be presumed that the reason for the decrease in VIP gene expression was the termination of the increased protein synthesis, which, in turn, was manifested by a significant increase in the number of VIP-IR cells and fibers compared to the control group. To clarify this hypothesis it is necessary to do further studies determining the change in the expression of VIP at the gene level versus the number of neurons and VIP-IR fibers versus the time elapsed from the autonomic fiber axotomy.
It is noteworthy that after cutting the pelvic plexus fibers, the presence of FB+/bNOS-IR neurons was found in the right APG, which, in turn, was not present in the control group. The qPCR study showed that the expression of the bNOS gene was higher in the control group than in the experimental group. A similar observation was made in the case of the expression of the NPY gene. It was found that following the axotomy the number of FB+ neurons, showing NPY immunoreactivity did not change significantly, while the expression of the gene encoding this neuropeptide decreased significantly compared to the control group.
The axotomy-induced increase in the expression of GAL, VIP and bNOS in the right APG neurons implies that these substances may participate in the processes related to regeneration and increase the viability of pelvic neurons.
The current study showed that right-sided axotomy did not significantly change the chemical coding of FB+ perikaryons in the left APG, but induced changes in the level of expression of genes coding for the synthesis of most of the studied substances. It also affected the number and neurochemical coding of intraganglionic nerve fibers which were comparable to the changes noted in the right ganglion.
Experimental Animals
The study was conducted on 20 sexually immature boars of the Large White Polish breed (8 weeks old and weighing approx. 10-15 kg) from a commercial pig fattening farm. The pigs were kept in groups (n = 5) in cages located in ventilated rooms with appropriate temperature and humidity, under natural daylight. The animals were isolated from noise and had free access to water and feed. All boars were subjected to experimental treatments approved by the Local Ethical Committee (license no. 36/2016) affiliated with the National Ethics Commission for Animal Experimentation, Polish Ministry of Science and Higher Education. The animals were divided into two experimental groups, each consisting of ten animals: the MG-group for morphological research and the MBG-group for molecular biology research. Each of these groups was divided into two subgroups: a control group (MG-C, MBG-C) and an experimental group (MG-E, MBG-E) consisting of five pigs.
Surgical Procedures and Injection of a Neural Retrograde Tracer
The animals fasted for 18 h before the planned surgery. All surgical procedures were performed under general anesthesia according to the following scheme: first, the boars were premedicated with atropine (Polfa, Poland; 0.04 mg/kg BW, SC), and azaperone (Stresnil, Janssen, Belgium; 2.5 mg/kg BW, IM). After approx. 15 min an intramuscular injection of ketamine was performed (Ketamine 10%, Biowet, Poland; 10 mg/kg BW) for the induction of general anesthesia and propofol then was administered after 20 min into the ear marginal vein (Scanofol, Scanvet, Poland; 4 mg/kg BW). During the operation, heart rate and respiration were constantly monitored.
Animals Intended for Morphological Research
All animals from the morphological group (MG, n = 10) underwent midline laparotomy. The abdominal incision was made in the white line from the umbilicus to the pubic symphysis and the urinary bladder was exposed. Subsequently, a total of 16 µL of 5% Fast Blue neuronal tracer (FB; Dr. Illing, Gross-Umstadt, Germany) suspension was injected under the serosa of the bladder trigone using a Hamilton syringe fitted with a 26-G needle. Eight injections of 2 µL of FB suspension were performed within the right and left side of the UBT, keeping a similar distance between the injections. To avoid leakage of tracer solution from the injection injury, the needle was left at the injection site for several seconds. The wall of the injected organ was then rinsed with physiological saline and gently wiped with gauze. This procedure aimed to avoid contamination of the adjacent tissue structures with the injected dye, which could adversely affect the reliability of the obtained results (other neurons that do not supply UBT could be labeled). After three weeks, all animals in the morphological group (MG, n = 10) were re-operated and midline laparotomy according to the procedure described above was performed. In the boars from the experimental subgroup (MG-E, n = 5), the aim of the reoperation was the right axotomy of nerve fibers from the APG ganglion to the urinary bladder trigone. This procedure was performed in the same way for each animal. Nerves from the right anterior pelvic ganglion were cut halfway along their route towards the UBT. In animals from the control subgroup (MG-C, n = 5), only manual manipulation was performed in the right part of the pelvic plexus, without cutting the nerve fibers. This consisted of removing it from the peritoneal cavity and exposing it for a few seconds.
Animals Intended for Molecular Research
Boars from the molecular biology group (MBG, n = 10) were used for studies aimed at determining changes in the expression of genes coding selected biologically active substances using the RT-qPCR technique (quantitative PCR reaction preceded by reverse transcription). After induction of general anesthesia, the animals underwent midline laparotomy (as previously described) but, unlike the morphological group animals, they were not injected with the Fast Blue neural tracer. In pigs from the experimental subgroup (MBG-E, n = 5), the right axotomy of nerve fibers projecting from the right APG ganglion to UBT was performed (Figure 3). In contrast, in the boars from the control subgroup (MBG-C, n = 5) only manipulations of the right pelvic plexus were performed without cutting the nerves (as described in the MG-C subgroup).
Collection, Fixation and Preparation of Tissues for Analysis
One week after the surgery, all boars from both experimental groups (MG, n = 10 and MBG, n = 10) were euthanized. They were anesthetized in accordance with the previously described general anesthesia, and the overdose of propofol led to breathing arrest. When the respiratory action ceased, in the boars from the morphological group (MG, n = 10) the thoracic cavity was opened by cutting the sternum in the midline. A metal cannula was then inserted into the left ventricle of the heart through which the fixative fluid (4% paraformaldehyde solution in 0.1 M phosphate buffer pH 7.4) was infused. At the same time, the auricle of the right atrium was cut off to allow the outflow of blood from the blood vessels.
Abdominal cavities were opened immediately after transcardial perfusion was completed and the right and left APGs were collected. The obtained tissues were additionally fixed by immersing them in a 4% paraformaldehyde solution for about 20 min and then washed several times in a phosphate buffer (pH 7.4). Subsequently, the collected material was placed in an 18% buffered sucrose solution (pH 7.4) with the addition of sodium azide and stored under refrigerated conditions (temperature 4 °C) until the ganglia dropped to the bottom of the container.
After the tissues were impregnated with sucrose, the collected ganglia were frozen and sliced with a cryostat (Leica CM1860) into 16 µm-thick sections. The sections were applied to pre-labelled chromalum-coated glass slides. They were then dried for about 20 min at room temperature, placed in sealed boxes and stored at −20 °C. All the sections containing FB-labelled nerve cells were processed for double-labelling immunofluorescence with antibodies listed in Table 1 and the immunolabelling techniques were applied as described previously [38]. Standard controls, i.e., preabsorption for the neuropeptide antisera (20 µg of appropriate antigen per 1 mL of the corresponding antibody at working dilution; all antigens purchased from Peninsula, Sigma or Dianova) and the omission and replacement of all primary antisera by non-immune sera were applied to test antibody and method specificity.
The immunostained sections were studied and photographed with a Zeiss Axiophot fluorescence microscope equipped with epi-illumination and an appropriate filter set for FITC, Alexa Fluor 488, 555 and 568, and FB, and with a confocal microscope (Zeiss LSM 710). The relationships between immunohistochemical staining and FB distribution were examined directly by interchanging filters. The sections originated from different representative regions of the ganglion (one of three different ganglion levels: the upper, middle, or lower one-third). To determine the percentage of particular neuronal populations, at least 300 FB+ (FB-labelled) neuronal profiles were investigated for each combination of antisera. All FB+ cells found in particular sections were counted. To avoid double-counting the same neurons, the neuronal cells were counted in every fourth section. The number of immunolabelled profiles was calculated as the percentage of neurons immunoreactive to a particular antigen relative to all FB+ perikarya counted. Finally, data were pooled from all animals in particular groups, expressed as means ± SEM and analyzed with GraphPad Prism 8 software with a paired Student's t-test. Differences with p < 0.05 were considered statistically significant.
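The counting and statistics described above (percentages of immunoreactive FB+ profiles pooled per animal and compared with a paired Student's t-test at p < 0.05) were performed in GraphPad Prism; purely as an illustration of the same computation, the sketch below uses SciPy with placeholder counts rather than the actual data.

```python
import numpy as np
from scipy import stats

# Placeholder per-animal counts (n = 5 animals): FB+ profiles positive for one
# antigen combination and total FB+ profiles counted (>= 300, every 4th section).
positive = np.array([182, 175, 190, 168, 201])
total_fb = np.array([305, 310, 312, 301, 330])

percent_right = 100.0 * positive / total_fb                    # percentage per animal
mean, sem = percent_right.mean(), stats.sem(percent_right)     # pooled as mean ± SEM

# Paired t-test against e.g. the corresponding percentages in the left ganglion
percent_left = np.array([58.1, 61.4, 60.2, 55.9, 63.0])        # placeholder values
t_stat, p_value = stats.ttest_rel(percent_right, percent_left)
significant = p_value < 0.05
```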
Molecular Research
After respiratory and cardiac arrest, the boars from the molecular group (MBG, n = 10) had their abdominal cavities opened and the right and left APGs were removed. The collected tissues were immediately placed in sterile tubes filled with RNAlater® protection buffer (Qiagen, Düsseldorf, Germany) and stored at −20 °C. Total RNA was extracted using the Total RNA Mini isolation kit (AA Biotechnology, Gdynia, Poland) and the cDNA samples were synthesized from respective high-quality matrix samples using the Maxima First Strand cDNA Synthesis Kit for RT-qPCR (Thermo Scientific, Waltham, MA, USA). Quantitative real-time PCR was performed according to the method described by Kasica-Jarosz et al. (2018) [39] using SYBR Green (SYBR Select Master Mix, Applied Biosystems, Foster City, CA, USA) on a 7500 Fast Real-Time PCR instrument (Applied Biosystems, Foster City, CA, USA). Oligonucleotide primers were designed using the Primer-BLAST tool to detect gene expression of the chosen markers [VACHT (slc18a1), ChAT (chat), DBH (dbh), TH (th), bNOS (N-nos), GAL (gal), VIP (vip), NPY (npy) and GAPDH (gapdh)]. Initial validation of reference genes revealed that, for the purpose of the study, GAPDH showed the most efficient and equal expression among the samples. The expression values of the studied genes were calculated in each group normalized to GAPDH expression. Each sample was analyzed in triplicate. The details are listed in Table 2. The obtained results were analyzed with GraphPad Prism 8.0 (GraphPad Software Inc., San Diego, CA, USA) using the Student's t-test for normally distributed data or the Mann-Whitney test for data not meeting the assumptions of the normal distribution. A significance level of 0.05 was adopted (p < 0.05 was considered statistically significant).
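The qPCR analysis normalizes target-gene expression to GAPDH and reports fold changes between the axotomy and control groups. Assuming the standard 2^(−ΔΔCt) (Livak) calculation, which is the usual reading of such a GAPDH-normalized workflow (the exact procedure follows the cited reference), a minimal sketch with placeholder Ct values looks as follows.

```python
def fold_change_ddct(ct_target_exp, ct_ref_exp, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method, normalized to a reference gene
    (GAPDH here); inputs are mean Ct values of the triplicate reactions."""
    d_ct_exp = ct_target_exp - ct_ref_exp        # dCt in the axotomized group
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl     # dCt in the control group
    dd_ct = d_ct_exp - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Placeholder Ct values: a fold change of 0.25 would be reported as a 4-fold decrease
fc = fold_change_ddct(ct_target_exp=26.0, ct_ref_exp=18.0,
                      ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
```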
Conclusions
Data Availability Statement:
The data presented in this study are available in Supplementary Materials here as S1. | 2021-03-07T06:16:21.391Z | 2021-02-24T00:00:00.000 | {
"year": 2021,
"sha1": "41c413964f5087638c9d228febe80e0145225ef1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/5/2231/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "517aa9e287a6afec82d1833db8678b3698793674",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219761516 | pes2o/s2orc | v3-fos-license | Combating COVID-19 Pandemic in Bangladesh: A Memorandum from Developing Country
This paper aims to delineate the current situation of the coronavirus disease 2019 (COVID-19) pandemic in Bangladesh as well as how the people of this country are encountering this threatening crisis. COVID-19 is a global pandemic that has rapidly spread through different countries of the world to date. After its emergence in China, different continents became aware of COVID-19, yet, owing to its high transmissibility, several countries have struggled to contain it. All countries are working to the best of their abilities to find a solution so that the whole world can overcome this situation as soon as possible. The authors here report an overview of how the outbreak of COVID-19 has impacted Bangladesh and how the people of this country have been trying to tackle it. The authors also discuss the changes brought about by the crisis, which have offered some valuable lessons. To conclude, the observations and current knowledge of the pandemic in Bangladesh so far suggest that special considerations should be highlighted pertaining to the COVID-19 outbreak in Bangladesh.
INTRODUCTION
COVID-19 had infected more than 5.3 million people at the time of writing and caused hundreds of thousands of deaths around the world, threatening to harm the world population further. Among those infected, 1.99 million people have recovered while 336 thousand have already passed away. Hence, to date, the recovery and death rates are reported as 38.49% and 6.60%, respectively (Source: Johns Hopkins Coronavirus Resource Center; https://coronavirus.jhu.edu/). COVID-19 is a communicable disease caused by the Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2), a strain of betacoronavirus (a genus of the coronavirus family). It is a newly emerged strain that binds to the ACE-2 receptor of ciliated bronchial epithelial cells as well as type-2 pneumocytes of the human lung [1]. COVID-19 was first identified in Wuhan, Hubei province, People's Republic of China. After the analysis of the phylogenetic tree and multiple sequence alignment, it was estimated that the pangolin acted as an intermediate host in transmitting the novel coronavirus from bat to human.
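The recovery and death rates quoted above are simple ratios of the cumulative counts. The back-of-envelope check below uses the rounded figures from the text, so it reproduces the reported 38.49% and 6.60% only approximately; those percentages were presumably computed from exact counts.

```python
# Cumulative figures as quoted in the text (rounded, late May 2020).
confirmed = 5.3e6        # confirmed cases worldwide
recovered = 1.99e6       # recoveries
deaths = 336e3           # deaths

recovery_rate = 100 * recovered / confirmed   # ~37.5% (text reports 38.49%)
death_rate = 100 * deaths / confirmed         # ~6.3%  (text reports 6.60%)
print(f"recovery = {recovery_rate:.2f}%, death = {death_rate:.2f}%")
```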
The disease is transmitted to healthy individuals through respiratory droplets generated when infected persons sneeze or cough. Direct respiratory tract contact or other respiratory secretions may also cause this disease, with an incubation period of 2-14 days [2]. The virus may stay for 2-3 hours in the air and 48-72 hours on surfaces like plastic and stainless steel. In addition, infected persons may transfer the virus to any surface or object such as a phone, table, doorknob or laptop when they touch it, where the virus survives for about 8 hours [3,4]. The novel coronavirus infects the upper and lower respiratory tract and can cause a variety of severities. Mild symptoms of this disease initially include cold, cough, sore throat, mild muscle pain, appetite loss, fever, and diarrhoea. The critical symptoms are pneumonia, shortness of breath, and organ failure that may lead to death. People with comorbidities (such as diabetes, high blood pressure, heart disease, and kidney disease) or acute respiratory diseases like asthma are more vulnerable to its infection [5,6]. In order to confirm whether a symptomatic or asymptomatic person is infected, diagnostic tests including RT-qPCR, serological tests and CRISPR-Cas12-based assays are performed. Among them, RT-qPCR is the most frequently used test worldwide [7].
Bangladesh, a lower-middle income country (LMIC) of South Asia with more than 160 million people, confirmed its first infected cases on March 08, 2020 and experienced its first death on March 20, 2020. As of May 05, 2020, the disease was reported to have reached all 64 districts, and the country had the third highest number of deaths among the South Asian countries to date [6]. With notable limitations, the country's Institute of Epidemiology, Disease Control and Research (IEDCR) commenced testing for COVID-19 using RT-qPCR, and testing gradually became widespread across the country. A mandatory lockdown was declared by the Bangladesh government to prevent severe community transmission in this densely populated country but, unfortunately, being an LMIC with many people under the poverty line, the lockdown is hardly maintained by day labourers, beggars, garment workers and so forth. Hence, social awareness about COVID-19 is barely satisfactory in Bangladesh, which should immediately be brought under the sincere concern of the government of Bangladesh.
COVID-19 OUTBREAK IN BANGLADESH
Almost three and a half months after the first COVID-19 patient was detected in China, three coronavirus cases (two men and one woman) were detected positive for the first time on March 8, 2020 in Dhaka, Bangladesh [8]; two of them had returned from Italy and the other was a relative of one of the returnees. Their ages were between 20 and 35 years, as confirmed by the Institute of Epidemiology, Disease Control and Research (IEDCR). They were hospitalized in Dhaka but the hospital name was kept secret. Three of their family members were kept in quarantine after coming in contact with the COVID-19 patients. A total of 105,586 cases had been confirmed worldwide by that time [9]. Two of the COVID-19 patients recovered four days after first being identified in Dhaka. On March 18, 2020, a total of 14 cases (including 4 newly diagnosed) were confirmed and the first coronavirus patient died in Bangladesh (Figure 1), on the 11th day after the country's first coronavirus case. The patient was a 70-year-old man who had several pre-existing complications like Chronic Obstructive Pulmonary Disease (COPD), hypertension, heart disease, and diabetes. In the meantime, the world recorded a total of 191,127 confirmed cases along with 7807 deaths [10].
To decrease this viral transmission, WHO suggested some fundamental guidelines i.e. avoiding close contact with people who have acute respiratory infection symptoms, frequently washing hands (particularly if physical contact with patients or their environment), practicing etiquette rules like maintaining social distance, covering coughs and sneezes with disposable tissues, etc., as well as enhancing standard infection prevention and control practices in emergency departments of the hospitals [11].
The spread of coronavirus in Bangladesh commenced from Dhaka city and moved to the rest of the country. Dhaka is the capital and the most densely populated city of Bangladesh. Millions of people from all over the country reside in this city for their livelihood. The government declared a general holiday from March 26 to April 4 to control the spread of the virus and ensure social distancing. However, millions of people left Dhaka to spend the holidays with family and hardly maintained social distancing; many people were found gathering at railway stations, bus stands, and launch terminals. The virus presumably spread throughout the country by community transmission in this way. At the time of writing, the Dhaka division holds over 76 percent of the COVID-19 patients in the country, followed by the Chattogram division holding 11.9 percent (Figure 2). In fact, the Dhaka district, which belongs to the Dhaka division, currently contains the highest number of COVID-19 patients (79.03%), followed by Narayanganj (Figure 3). Narayanganj (9.67%), the neighbouring district of Dhaka city, became an epicentre of coronavirus infection in Bangladesh, since a major portion of the patients started moving around from here. Coronavirus patients who recently returned from Narayanganj have been detected in many districts.
The government took measures to prevent local transmission by locking down the transportation system (including Rail, Water, Air, and Road transport). Bangladesh Police, Army, Navy, and Rapid Action Battalion (RAB) have been working on the frontline to ensure social distancing among the people and to keep an eye on the locked-down houses around the country. All shops, super shops, and kitchen markets throughout the country were declared by the government to be closed by 7 pm, and the grocery stores located in residential areas by 2 pm but drug stores and emergency services remained out of the purview of the orders.
At the end of April, a total of 7667 confirmed cases were reported in Bangladesh with a total of 168 deaths, whereas a total of 3,090,445 cases and 217,769 deaths were found worldwide. At the beginning of May 2020, a significant number of cases were being confirmed (on average 1000 per day) in Bangladesh, and the virus has recently been spreading further [13]. The pattern of COVID-19 death rates in Bangladesh, however, appears to be distinct from that of the confirmed case rates. The death rate in Bangladesh increases successively with age, which indicates that older people are more susceptible to the viral infection than younger ones. Thus, while aged people remain at high risk of dying, people of all ages, especially the younger ones, are getting infected the most, which is probably due to their frequent outings or local transmission of the virus. When it comes to gender, a larger proportion of males than females were infected with and died of coronavirus. An overall estimate is that 68 percent of total confirmed cases belonged to men and, correspondingly, 73 percent of deaths. Hence, the virus appears to be affecting men more than women in Bangladesh, as it does worldwide [14]. The actual reason has not yet been revealed by scientists, but they suspect that biological differences between men and women, unhealthy habits like smoking, and underlying health issues among men could be influencing factors. Till May 22, 2020, a total of 223,841 COVID-19 tests were carried out in Bangladesh, whereas the total number of confirmed cases was 30,205 so far (Figure 6). At the beginning of the outbreak in Bangladesh, few tests were performed due to several limitations, and so the number of positive cases was very low by then. But since early May, the number of tests as well as of laboratories and involved volunteers rose significantly, and thus the number of active cases also increased greatly. Till now, Bangladesh has set up 42 medical laboratories dealing with COVID-19 detection across the country [15].
Unfortunately, Bangladesh recorded the lowest recovery rate among South Asian countries within the first 53 days after the COVID-19 outbreak. But the situation started improving gradually from the beginning of May, and the recovery rate proceeded at a considerably faster pace. However, a sharp and successive increase is also shown in the rates of confirmed cases and deaths, which is a matter of concern, as people are not recovering at the same rate as active cases are accruing. As of May 22, 2020, the total number of recoveries had reached around 6100, with a 20 percent recovery rate [16] (Figure 7). Furthermore, the death toll crossed 400 on May 21; thus, on average, 11 patients were found dying per day in Bangladesh within the first 21 days of May, which is alarming.
Many people infected with the novel coronavirus may not show symptoms, and many of them have died asymptomatically [17]. Interestingly, till May 22, 2020, around 550 people died in Bangladesh with COVID-19 symptoms, including fever, cough, or pneumonia, but without a positive COVID-19 test (Figure 8). However, through a pilot survey conducted by collecting news from national newspapers and electronic media, some cases were found where patients tested negative before death but positive results were obtained after re-collecting samples from the deceased patients. This may be an indication that many of the patients who died with COVID-19 symptoms might have died of the disease without being tested. Henceforth, this fact could be further analysed and highlighted as well.
THE MYSTERY OF SARS-CoV-2 GENOME ISOLATED IN BANGLADESH
The process of genome sequencing is simply the determination of the whole genomic sequence (DNA, or RNA in the case of RNA viruses such as the novel coronavirus) of an organism, which conveys information about how its genes are arranged. It is quite important for understanding the complete physiology of an organism or virus, and for tracking the pathway of the virus by associating sequences with other metadata, thus helping to think about future prevention or treatment methodologies against the virus [18].
As of now, eighty countries have deposited more than 24,000 genome sequences of SARS-CoV-2, including countries such as Nepal and Vietnam where the coronavirus pandemic is comparatively less severe. Although the first case in Bangladesh was reported in early March 2020 by the country's epidemiology institute, IEDCR, several limitations meant that it took time to sequence the entire genome of the novel coronavirus from a sample of any Bangladeshi patient. For the first time in Bangladesh, however, Dr. Senjuti Saha, Dr. Samir Kumar Saha, and their team at the Child Health Research Foundation (CHRF) broke the ground by revealing the whole genome sequence of the coronavirus using the Illumina iSeq 100 NGS platform, which has had a strong influence on the research landscape of the country. The genome sequence data became available on May 12, 2020 [19], and researchers started to explore it after extracting the sequence and associated information from the public repositories GISAID and CNCB.
According to the genome data available so far, this isolate has acquired nine mutations; seven of them are very common among the viruses sequenced to date, but surprisingly, the genome carries two new mutations that have not been seen among the viruses reported so far [20]. This suggests that the virus acquired these new changes while circulating in Bangladesh, which is quite alarming.
Again, when observed closely, one of those nine mutations within the viral genome was found to be a non-silent mutation in the Spike protein. The mutation leads to an amino acid change, from aspartate to glycine, at the 614th position of the Spike protein (D614G). This mutation is predicted to give the virus the potential to spread quickly, as observed among European and American populations, and it creates an extra serine protease (elastase) cleavage site close to the S1/S2 junction of the Spike protein open reading frame (ORF) (Table 1).
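For readers who want to see how a substitution such as D614G is identified in practice, the minimal sketch below compares an aligned sample Spike protein sequence against a reference. The eleven-residue windows are hypothetical placeholders, not the actual GISAID sequences, and the function is only the simplest possible comparison, not the pipeline used by the CHRF team.

```python
# Minimal sketch (illustrative only): detecting an amino-acid substitution such as
# D614G by comparing an aligned sample Spike protein sequence against a reference.
# The short sequences below are hypothetical placeholders, not real GISAID data.

def substitutions(reference: str, sample: str, offset: int = 1):
    """Return substitutions between two aligned, equal-length protein sequences.

    Positions are reported relative to `offset`, the position of the first residue.
    """
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to the same length")
    return [
        f"{ref_aa}{offset + i}{sample_aa}"
        for i, (ref_aa, sample_aa) in enumerate(zip(reference, sample))
        if ref_aa != sample_aa and "-" not in (ref_aa, sample_aa)
    ]

# Hypothetical 11-residue window covering Spike positions 609-619 (placeholders).
reference_window = "AVLYQDVNCTE"   # 'D' sits at position 614 in this toy window
sample_window    = "AVLYQGVNCTE"   # the same window from the sequenced isolate

print(substitutions(reference_window, sample_window, offset=609))
# -> ['D614G']
```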
Furthermore, phylogenetic tree analysis with 350 sequences from different countries and reference sequences indicated that the Bangladeshi SARS-CoV-2 genome isolate belongs to the European cluster, which suggests that the patient may have been infected by someone who had returned from Europe [21].
Afterwards, researchers across the country continued to deposit more sequences. On May 20, 2020, Bangladesh deposited another sequence from the National Institute of Biotechnology (NIB) under the lead of Dr. Md Salimullah; interestingly, this sequence proved most similar to those from the United States, Spain, and Italy [22]. Subsequently, the Bangladesh Jute Research Institute and Chittagong Veterinary and Animal Sciences University (CVASU) deposited another sequence on May 21, 2020, under the lead of Dr. Abdul Razzak, which showed similarity to sequences from Saudi Arabia, Russia, Australia, and the United States. The next day, Dhaka University added yet another sequence under the supervision of Sharif Akhtaruzzaman.
Research is thus proceeding concertedly across the country to explore the genome and reach firm conclusions, and the government should give this issue its sincere and immediate attention.
Impact on the Overall Economy
The economy of Bangladesh depends heavily on the garment and agricultural sectors. As a lower-middle-income country (LMIC), it may face an economic crisis due to the COVID-19 pandemic. Last year, the garment industry contributed 84% of the country's exports, worth $40 billion. This year, however, because of lockdowns, the industry has had numerous orders cancelled or put on hold, worth nearly $3 billion. At present, 450 spinning mills, 850 weaving mills, and 250 dyeing factories that are directly linked to the garment industry are at risk of loss, and around 1 million (10 lakh) people are employed in these mills. The garment sector is also tied to the packaging industry, which is likewise affected by the COVID-19 pandemic.
Moreover, 59 banks and 46 general insurance companies count on the garment sector for running their business. As a poverty alleviation scheme, the government has declared over 1.0 trillion BDT (over 11 billion USD) which is nearly 3.6 percent of Bangladesh's gross domestic product (GDP) for various sectors including agriculture and garments. This will provide soft bank loans and aid people that are victims of this viral outbreak.
The transport sector is also having a difficult time during this continuing crisis. The number of trucks and lorries had surged remarkably in step with the country's nearly $100 billion in exports and imports. As a result of the extended lockdown, many workers in this sector may lose their jobs.
According to an estimate by the Dhaka University Health and Economy Institute, the total loss to the economy from 26 March to 26 April was around 100,000 crore Bangladeshi taka (BDT). There is a loss of about 3,300 crore BDT in the agricultural (200 crore), industrial (1,131 crore), and service (2,000 crore) sectors, and the loss will increase if the lockdown is extended.
Experts have found that the export and import economy has come to a standstill, and Bangladesh will grow poorer if the lockdown continues for a long time. In the towns, hawkers, rickshaw pullers, auto-rickshaw drivers, bus drivers, waiters, and day labourers, and in the villages, farmers and fishermen, are out of work, while trucks and lorries stand idle because of the lockdown measures. There is a chance that a million people, including shopkeepers and migrant workers, will face hardship in their daily lives. Forty percent of the poultry industry is run by women, and the industry may face a loss of 3,350 crore BDT according to the Bangladesh Poultry Industries Central Council.
During the COVID-19 outbreak, unemployment among Bangladeshi migrant workers abroad is having a negative impact on remittances. Remittances have fallen by 36 crore (360 million) dollars within the last two months, which is also hampering the overall economy of Bangladesh. The World Bank (WB) has recently reported that, due to COVID-19 and the shutdown, remittances to Bangladesh may decline by almost 22 percent in the 2019-2020 financial year and could come down to $1,400 crore ($14 billion) in 2020, from $1,830 crore ($18.3 billion) in 2019 [23].
Besides all this, price hikes for daily commodities have added to the economic strain. Daily essentials are subject to price increases amid COVID-19: owing to supply disruption and scarcity in the kitchen markets, the prices of oil, rice, and flour are surging. The price of edible oil has risen by 8-10%, and rice is selling at 38-70 BDT per kg in the markets.
The government, however, is about to launch a special open-market sale of coarse rice at 10 BDT per kg across the city corporations for the poor.
Impact on Agricultural Sector
Agriculture is the backbone of the economy of Bangladesh.
According to the World Bank, it is the income source of 87% of rural people, and 11% of city dwellers are also directly related to the agricultural sector. Around 45.7% of the labour force is employed in farming, and agriculture contributes 16.6% of Bangladesh's GDP. These figures indicate the importance of agriculture for Bangladesh. The national lockdown strategy is having a negative impact on the overall economy: many farmers are unable to work their land and cannot access some markets to sell their agro-products. Moreover, severely hampered transportation, a shortage of labour, and price hikes can act as impediments in the fresh food supply chain.
In the poultry industry, 83,000 poultry farmers with an investment of 42,000 crore BDT are facing heavy losses due to the ongoing country-wide lockdown. Their daily egg production is 4.25 crore (42.5 million), and because of the fall in demand they have to sell a third of their eggs at a 45% lower price. Across all poultry farms, 3,500 tons of broiler chicken are produced per day. According to the Bangladesh Poultry Industries Central Council, the price of broiler chicken has fallen to 55 BDT (0.65 USD) per kilogram. Demand for broiler chickens has dipped dramatically during the lockdown, and 1.65 crore (16.5 million) day-old chicks are discarded every week to mitigate losses. More importantly, the jobs of 4.7 million workers in the poultry industry are endangered right now.
The country's dairy and fish producers are also facing huge economic losses. There are about 250,000 dairy farms nationwide, producing and supplying about 1.5 crore (15 million) litres of milk per day for sale. Because of the lockdown, transportation has become limited and customer demand has plummeted: milk prices have dropped by around 35 percent and 2.5 million litres remain unsold. The employment of 1.2 crore (12 million) people in the dairy sector is at risk [24].
The demand for fish has also plunged substantially, and consequently many hatcheries have closed down. Crab, shrimp, and fish producers face frequent trade embargoes, resulting in economic losses. For example, Bangladesh exports 70% of its crabs to the Chinese market; the lockdown has stopped these exports, so the crab industry of Bangladesh is going to face a significant loss. Production of dried fish has also fallen by 40 percent due to the corona epidemic.
In order to sustain food security and socio-economic development, the vulnerability of farming communities should be lessened by the Government and nongovernment organisations.
Impact of COVID-19 on Mental Health of the Patients
The mental health of a person includes emotional, psychological, and social well-being. For a COVID-19 patient, the negligence of surrounding people and social discrimination can be worse than the COVID-19 disease itself [25]. In Bangladesh, in several cases, when a person is confirmed COVID-19 positive, the patient and their family members are subjected to harassment from neighbours and the community, and they are sometimes ostracized or forced to leave the locality.
Besides all this, a COVID-19 patient remains stressed and anxious. After being admitted to hospital, the patient may witness deaths in the ward every day, which can badly affect their mental strength. To encourage COVID-19 patients, doctors and nurses should be compassionate towards them; sometimes such encouragement can help a patient toward early recovery [26].
Influence of COVID-19 on Education System
The total number of students in Bangladesh is approximately 23,907,151, of whom about 16,230,000 are at primary, 7,400,000 at secondary, and 277,151 at post-secondary level. To tackle the COVID-19 outbreak, the government declared all educational institutions closed from March 18, 2020. In addition, the Higher Secondary Certificate (HSC) examination was postponed by the Ministry of Education.
Prime Minister Sheikh Hasina also declared that if the COVID-19 situation does not improve, all institutions will remain closed until September 2020. Such a long closure can cause problems, including a large learning gap.
Public and private universities may face session backlogs ('session jam') if classes and exams are not held on time. Many universities run a semester system, i.e., six months per session, so there is a strong possibility of session jam in almost all universities.
Many schools, private universities, and a few public universities are holding online classes in order to prevent session jam, but according to many students and teachers this is not an equitable way of conducting classes. Bangladesh does not have a strong platform for online teaching; teachers use meeting applications such as ZOOM to hold classes. Moreover, internet access requires purchasing mobile data or having a Wi-Fi connection. Students who live in remote areas do not have a stable network, and purchasing data is not possible for students facing financial crisis because of the COVID-19 pandemic. As a result, some students of a given class are deprived of online classes, which may result in educational inequalities. Some students are also indifferent to joining online classes because of the lack of social interaction between students and teachers. The HSC examination has been postponed, which may delay the admission of college students into universities and other institutions [27].
The Bangladesh government has planned steps such as keeping universities, schools, and colleges open on holidays, that is, holding classes all seven days of the week after the lockdown, in order to compensate for the lost learning. UNICEF is also working with the Bangladesh government to offer remote learning programmes through TV, mobile phone, and internet platforms for 42 million school students, and has helped produce guides to support teachers in delivering online classes. Nevertheless, the Bangladesh government and educational institutions should come forward to provide proper facilities for online classes, including financial support for students.
Impact of COVID-19 on Culture and Religious Beliefs
The lockdown strategy taken by the government due to the COVID-19 outbreak has been extended several times. Religious gatherings have been proved to be a medium for spreading the COVID-19.
Performing janazah (funeral) prayers in large gatherings has been prohibited by the government. In violation of this, around 100,000 people gathered at the janazah of Maulana Jubayer Ahmed Ansari, a popular Islamic scholar of the Rahmania madrassa. In addition, millions of people expected to congregate in mosques for tarabi prayers during Ramadan. Similar to the order of the Saudi Arabian government, the Bangladesh government allowed only 12 people in a mosque, including mosque officials, and advised the majority of people to pray at home.
Moreover, many people go to the markets for shopping during Eid-ul-Fitr, one of the largest festivals of the country, which may increase the chance of community spread. To prevent community transmission, most of the markets and super shops, such as Bashundhara, Jamuna Future Park, and New Market, together with shops all over the country, decided to remain closed during Ramadan. In spite of this, a small group of people still goes to the markets, neglecting the COVID-19 situation. Furthermore, any arrangement of iftar (the evening meal that breaks the fast) parties or any gathering on Eid day has been banned by the government. Meanwhile, some people are still returning to their village homes from the towns and crowding at the ferry docks, which has recently drawn the government's concern; the docks have therefore been closed and transport availability has been further restricted by the law-enforcement forces.
Impact of COVID-19 on Lifestyle
From a recreational point of view, various concerts and the shooting of movies, telefilms, and dramas planned for Eid have been cancelled to reduce public gatherings. People are instead spending time on social networking sites and interacting with their near and dear ones.
Inadequacy of test
Both Bangladesh and India are LMICs, yet India is far ahead of Bangladesh in COVID-19 testing capability. It is a matter of concern that a huge number of patients may remain undetected every day, which may worsen the situation day by day. The number of tests is increasing daily and, with it, the number of positive cases, but the testing of all suspected cases each day cannot be performed, and people have to wait helplessly for a call for days or weeks. Meanwhile, some undetected people die with COVID-19 symptoms, and only later are their samples tested to establish the cause of death [28]. The suspected and the asymptomatic, as well as those who came into contact with patients, should therefore be tested without delay by increasing the number of tests. This insufficiency of testing is partly due to a lack of experienced personnel for detecting COVID-19. Different universities and research organizations holding PCR machines are involved in COVID-19 detection, but this capacity is not adequate to test all patients. Some machines were also previously damaged and may not produce accurate results. The government should therefore ensure the collaboration of more institutions (research and educational) and laboratories (public and private) for rapid COVID-19 testing as early as possible.
The paucity of protective equipment
In order to prevent infection caused by SARS-
Treatment facilities all over the country
Bangladesh is overpopulated relative to the number of its health care providers, and modern treatment is not available in every corner of the country, as its health infrastructure is underdeveloped. There is a lack of the necessary equipment and machines required to deal with the COVID-19 pandemic, so there is a possibility that many COVID-19 patients may die without treatment. There is also a scarcity of doctors and nurses, partly because of their unwillingness to work with so little protection. Moreover, many of them are being infected daily through exposure to confirmed or suspected COVID-19 patients. As a result, non-infected patients are also dying without treatment.
Death of Health Service Providers
The government has, however, declared incentives for doctors and nurses who treat COVID-19 patients. In addition, training programmes have been arranged for health workers to aid and treat COVID-19 patients in remote areas. Government and non-government organisations should also come forward to ensure the importation of more ventilators and the installation of ICUs in order to increase the survival rate of critical COVID-19 patients.
Newly emerged treatments
Plasma Therapy: Plasma therapy provides a ray of hope for COVID-19 patients. The procedure involves collecting blood from a fully recovered COVID-19 patient and transfusing the plasma into a critically ill patient. Plasma contains no blood cells, only antibodies and other proteins. The immune system of an infected person produces natural antibodies to fight the virus (antigen); these antibodies increase in number over time in the plasma of the infected person, and the patient recovers. With plasma therapy, the antibodies generated in a previously infected patient are used to treat currently infected patients [31]. Currently, Dhaka Medical College Hospital (DMCH) has started collecting plasma from recovered COVID-19 patients for clinical trials of Convalescent Plasma Therapy (CPT). Dr. Md Dildar Hossain, a medical officer in the kidney diseases department of Shaheed Suhrawardy Medical College and Hospital, was the first plasma donor; he was diagnosed with COVID-19 on April 25, 2020 and fully recovered on May 9, 2020. He voluntarily donated his plasma, and another physician was also reported to have donated plasma at DMCH, to help save the lives of COVID-19 patients.
The breakthrough of Possible Discovery of Treatment:
The World Health Organisation has, after a number of trials, selected the antiviral drug Remdesivir, which had earlier been used in the treatment of Ebola [32]. The drug showed positive results against SARS-CoV-2 in experiments conducted on model animals [33]. In Bangladesh, however, hospitals have been using some nonspecific medicines, such as Hydroxychloroquine, Chloroquine, and Azithromycin, as treatment options for COVID-19. Beximco Pharmaceuticals launched the first generic version of Remdesivir, named Bemsivir, on May 21, 2020. Remdesivir inhibits viral RNA synthesis and thereby stops the replication of SARS-CoV-2 [34]. The U.S. FDA has granted it an Emergency Use Authorization for the treatment of COVID-19 patients.
Another approach came from Professor Dr. Tarek Alam, head of medicine at Bangladesh Medical College and Hospital, who, along with his team, used a single dose of the antiprotozoal medicine Ivermectin together with the antibiotic Doxycycline, which showed an astounding result in curing some COVID-19 patients. Around sixty COVID-19-positive patients with respiratory issues were prescribed the drugs by Dr. Tarek and his medical team, and after a second test at IEDCR it was confirmed that the patients under the study had turned COVID-19 negative. According to Dr. Tarek and his team, the patients recovered within 4 days of receiving the drug combination, the combination caused no side effects, and the overall cost of the medicine is approximately 30 BDT. For approval, they have contacted the DGHS (Directorate General of Health Services) and other government organizations, and trials and studies of the success rate and side effects have already been commenced by the DGHS. If it proves successful, it will be a global breakthrough by Bangladeshi doctors in the history of corona treatment. In many positive cases, neither the patients nor any of their family members or relatives had returned from abroad, which is a vivid indication that community transmission has begun. Many people are not concerned about the dreadfulness of this disease; they go out of the house unnecessarily to the markets for shopping and do not properly follow the WHO protocols [35]. Community transmission has been greatest in Dhaka, Narayanganj, and Chittagong, because people in these districts violate the lockdown and many returnees did not obey the quarantine order. Social distancing is also a very difficult task in a densely populated country like Bangladesh. As a result, the number of cases is increasing exponentially day by day. Again, some people have died who were suspected of being infected with COVID-19.
Later, many of their test results turned out to be positive for COVID-19, as mentioned earlier. Some deaths were also recorded in the district isolation centres. In the beginning, tests of suspected cases were conducted only at IEDCR; later, the government provided testing facilities in other divisions. To curb community transmission, there is no alternative to increasing COVID-19 testing, so testing facilities should be ensured for people all over the country as soon as possible.
Reasons behind the Vulnerability of Bangladeshi People throughout the Country:
Bangladesh is one of the most densely populated countries in the world, which may leave it more vulnerable to community transmission. Besides this, according to the Bangladesh Bureau of Statistics, 34 million people in Bangladesh are poor and 17.5 million of them live below the poverty line. Poverty is a root cause of lockdown violations, as people must move about in search of food. Those at high risk from COVID-19 infection are older people (over 50) and people with comorbidities. Reports suggest that more than 13 million people in the country are aged over 60, which increases the risk of severe complications; they may therefore require intensive care and ventilators once infected. A large share of the poor live in slums, which are not clean places and may increase the possibility of infection because of reduced hygiene. Moreover, lack of knowledge about the severity of the disease, poor personal hygiene, and malnutrition among a vast share of the people pose additional challenges.
Aged people should therefore not go out to work in offices, factories, or shops, because every day they might come into contact with their colleagues and others. Support for fundamental daily needs, such as food, relief, and rehabilitation for deprived people, may ease their hardship and decrease the possibility of their becoming infected or infecting others. Consequently, the government has announced a stimulus package of over $11 billion for various sectors, including agriculture and garments, to provide soft bank loans and to aid people who are victims of COVID-19.
Bangladesh Has Rohingya Refugees:
Bangladesh hosts more than 1.1 million Rohingya refugees in Cox's Bazar alone. They live in constrained camps, and there is a risk that if even a few refugees become infected, it will be devastating for the whole camp. In addition, 3.3 million local people live in Cox's Bazar [36]. Alarmingly, one confirmed case of COVID-19 had been reported in the camp as of May 14, 2020. If community transmission takes place, the number of patients will soar and many people may succumb. Moreover, only 10 Intensive Care Unit (ICU) beds, in a single local hospital in Cox's Bazar, are available for the 1.1 million people. Social awareness should therefore be raised properly in the Rohingya camps, and the number of ventilators and ICU beds in nearby hospitals should be increased to tackle any worsening situation.
SPECIAL MEASURES FROM THE GOVERNMENT OF BANGLADESH
Due to this pandemic, not only governmental organizations but also non-governmental and private organizations are working together to fight this novel pandemic. The government has been concerned since the outbreak began in Bangladesh and is still trying to fight the pandemic in different ways.
Economic Stimulus Measures Taken by the Country
In response to the COVID-19 pandemic, Bangladesh Bank (BB) reduced the Cash Reserve Requirement (CRR) of banks from 5.5% to 4% on a bi-weekly average basis and from 5% to 3.5% on a daily basis, effective from April 15, 2020. The repo interest rate was also reduced from 6% to 5.25%, effective from April 12, 2020. Banks were instructed not to deduct any charges for late payment of credit card bills during the pandemic, BB postponed the charging of interest on loans to customers, and BB permitted foreign-owned companies to take short-term loans from their parent companies.
Package-1
A loan facility of approximately USD 3,529 million, at an interest rate of 9%, has been created to provide short-term working capital to affected industries and service-sector organizations through the banking system. Half of the interest on the loan is to be paid by the client, and the rest will be paid by the government.
Package-2
A loan facility of approximately USD 2,353 million, at an interest rate of 9%, has been created to provide working capital to small (cottage) and medium enterprises. Borrowers pay 4% interest, and the remaining 5% is paid by the government to the lending bank (this split is illustrated in the short calculation after Package-5).
Package-3
The Export Development Fund (EDF), used under back-to-back LCs, is to be increased from USD 3.5 billion to USD 5 billion in order to boost imports of raw materials; this adds an additional approximately USD 1,500 million to the EDF. The current interest rate for the EDF will also be reduced from 2.73% to 2%.
Package-4
Bangladesh Bank has created a new loan facility of approximately USD 589 million, named the Pre-shipment Credit Refinance Scheme, at an interest rate of 7%.
Package-5
i) For the agricultural sector, the government has announced a package titled 'Special Incentive Re-financing Scheme for Agriculture Sector' of approximately USD 589 million to provide financial support to farmers in rural areas.
ii) Another package of approximately USD 1060 million has been allocated to boost up crop production amidst the pandemic.
iii) Approximately USD 15 million has been allocated to rehabilitate farmers during or after the pandemic, approximately USD 6 million for farmers already affected by natural calamities and for cooperative agriculture, and approximately USD 9 million to expand new crops and support technological advancement in agriculture [8].
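To make the interest-sharing arrangement of the loan packages concrete, the short calculation below illustrates how one year's interest at 9% would be split between borrowers and the government under Package-1 (half and half) and Package-2 (4% and 5%). The figures are the package totals quoted above and are used purely for illustration, not as an official costing.

```python
# Minimal sketch (illustrative arithmetic only): splitting one year's interest on the
# subsidised stimulus loans between borrower and government, using the package totals
# quoted above. These are package-level totals, not individual loan amounts.

def interest_split(principal_musd: float, borrower_rate: float, government_rate: float):
    """Return (borrower_share, government_share) of one year's interest, in million USD."""
    return principal_musd * borrower_rate, principal_musd * government_rate

# Package-1: 9% total interest, split half/half (4.5% each).
print(interest_split(3529, 0.045, 0.045))   # -> (158.805, 158.805)

# Package-2: 9% total interest, borrowers pay 4% and the government pays 5%.
print(interest_split(2353, 0.04, 0.05))     # -> (94.12, 117.65)
```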
Besides these, many local companies as well as international financial institutions like World Bank, International Monetary Fund (IMF) are providing emergency funds to strengthen efforts to battle the corona pandemic in Bangladesh.
Recently, the World Bank has approved $100 million in financing to prevent and respond to the outbreak in Bangladesh.
LESSONS TO BE LEARNED FROM COVID-19
The recent horrifying situation created by the COVID-19 pandemic has shaken the whole world with great intensity and upended the way of life people had before the outbreak. Industries have almost come to a standstill, and businesses and trade are jeopardized because of the lockdown. All of this is hampering national economies and, in a word, the world economy. Yet COVID-19 has also taught us some remarkable and momentous lessons of which we were previously unaware.
• The Practice of Frugality Is Highly Important
In this COVID-19 situation, the thing we are going to crave most is money. We need to learn the art of frugality, whether in our clothing, food, or daily commodities. Buying unnecessary things should be avoided at all costs. In this situation, spending money on luxuries is nothing but irresponsibility, as we do not know when this lockdown will end. Rather, wealthy people can donate money for the poor, the sick, and the health workers who are fighting for our lives.
• Biodiversity Can Be Recreated by Changing People's Lifestyle
Bangladesh was losing its natural beauty day by day because of increased pollution, mismanagement of tourist spots, and so forth. Surprisingly, that disappearing beauty is reappearing amid the COVID-19 pandemic, and the credit for this goes solely to the lockdown. Spellbinding scenes are unfolding on the Cox's Bazar sea beaches: dolphins roam fearlessly, thousands of tiny red crabs wash ashore, deer wander here and there, and turtles roam the Saint Martin's beach. Nor is it only Cox's Bazar; the same applies to the Sundarbans, Sylhet, the hill areas, wildlife sanctuaries, and so on.
In fact, deforestation has declined at a great rate and nature is refilling with greenery. Plastics are no longer polluting the water and the diversity of fish is re-establishing itself. Many industries are closed and are therefore no longer producing smoke to pollute the country's air. Traffic jams in the towns have been reduced to almost nothing, and with them carbon emissions. Nature is enjoying itself to the fullest while we are stuck at home. To recapitulate, nature has regained its beauty, giving us a strong and true message about how badly we treated it earlier.
• Doctors, Nurses, and Biomedical Research Deserve More Concern
Another lesson we have learned from the pandemic is how crucial doctors and nurses are for this world to survive. They are working day and night relentlessly to save people while endangering their own lives. So it is high time we honoured their work and respected them wholeheartedly. Looking at the state of the world, one should not think twice about choosing medicine as a subject of study. Funding for biological research should also be a matter of highlighted concern, to safeguard the decades ahead.
• The 'Work from Home' Method Could Be Introduced in Workplaces
The lockdown has made it compulsory to close all offices and companies in the country, so many companies have started working online so that employees can work from home. Most banks, for example, are supporting online banking and remote work. Similarly, at companies like Google, Facebook, and YouTube, staff do not need to come to the office and can continue their work from home. Heads of offices and companies are now planning to create virtual offices instead of traditional ones. Consequently, companies will no longer need to pay office rent every month and can still run their business smoothly, while people with a strong internet connection can work from home and will not need to come to the office.
• Online Businesses Will Get More Popularity
Applications for online businesses such as shopping and food delivery will continue to develop, because people in lockdown have become habituated to online shopping (e.g., Daraz, alibaba.com) and to ordering food online (e.g., Foodpanda). So even after the lockdown, there is a possibility that online shopping and food ordering will retain the demand and attention they are getting right now. Entrepreneurship through selling food, clothing, or other products online can also become a source of income for both women and men.
• COVID-19 Will Reduce the Rush of Big Cities and Towns
Internet-based jobs and businesses offered by different offices and companies will become available to people all around the world. People will not need a physical presence for the job, as they can easily work from home. In fact, many people may move from cities to small towns in order to reduce their daily costs. They will realize that living and working in a traffic-free, peaceful village area surrounded by natural beauty is better than staying in expensive apartments in big towns, and they may establish a strong Wi-Fi connection to work from their home town.
Furthermore, an international company from any corner of the world will be able to hire people remotely. The term 'globalization' will take on a broader sense through the blissful action of the internet. Companies can hire talented people from any place in the world, and those people will not need a visa or immigration clearance to work for the company. For example, a person in Bangladesh can work for Google in America if he or she has a strong internet connection, a laptop, IT skills, and a degree.
All these developments suggest that people with computer skills who are weary of the noise, pollution, and traffic of the city may soon migrate to small towns to live a happy life with their families. A huge change will therefore be visible in the cities after this COVID-19 situation.
So it is clear that we should develop IT skills, freelancing, video editing, and the like if we want to lead a decent life once life is free from COVID-19.
• The Educational System Should Professionally Include Online Classes
Some national and international universities, medical colleges, schools, colleges, and other institutions are arranging online classes to prevent session jams. Most Bangladeshi students are not accustomed to online classes, but the pandemic has made it mandatory for many of them to continue this way, and the practice may persist for a long time due to the lockdown. Even after the lockdown, institutions may continue to arrange online classes, courses, and sessions to ensure proper education for students. Hence, to lead an upgraded life and secure a better future after COVID-19, we could upgrade our regular way of life. Our compassion for relatives, neighbours, and friends, as well as for the whole nation, is also an obligation in fighting the after-effects of COVID-19.
The facts mentioned above are, in short, the lessons of COVID-19 for us. This pandemic is going to make a huge difference to the coming world economy, and a majority of people will be afflicted with severe poverty. After this pandemic, the world will face a revolutionary change, and this change will last for a long time. Many of our lives will be entirely different. To adapt to this change, we must modernize our perceptions of the traditional lifestyle and take some fruitful steps.
CONCLUDING REMARKS
COVID-19 has changed the global environment almost overnight and revealed hidden issues that need to be highlighted. According to the WHO, the coronavirus may never leave this planet, and similar situations will recur from time to time. To tackle such crises, not only in Bangladesh but also in the other countries we are moving forward with, we should be prepared with the necessary measures. Biomedical research should be expanded and better funded. Health issues must be given much higher priority, and it is high time countries around the world acted in concert, with compassionate international cooperation established. | 2020-05-28T09:15:46.465Z | 2020-05-27T00:00:00.000 | {
"year": 2020,
"sha1": "f2ea231839d5eb03fc0dbf54c4acd324d64d5dbb",
"oa_license": "CCBY",
"oa_url": "https://www.preprints.org/manuscript/202005.0435/v1/download",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f0f64268a21c404f08788315d4b6bc7fa6654bd8",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
53475571 | pes2o/s2orc | v3-fos-license | Why Governing Data is Difficult : Findings from Danish Local Government
Data governance has emerged as a promising approach for transforming organizations. While governing data as an organizational asset has clear benefits, no previous studies have reported on the particular challenges faced by practitioners in local government organizations. Against this backdrop, we investigate why it is difficult for local government organizations to explore and exploit their data assets with data governance. Following an engaged scholarship approach, we carried out six group interviews conducted with 34 representatives from 13 different Danish municipalities. From the analysis, we identified nine challenges relating to three overall themes that are critical to governing data in local government: (1) data value and overview, (2) data practices and collaboration and (3) data capabilities and politics. We explain how the three themes extend previous research in data governance and e-government literature. The implications for practice and directions for future research are discussed.
Introduction
Open data, big data and predictive analytics have long promised to transform entire industries and society. Especially public-sector organizations, who routinely store large volumes of data, are keen to pursue new opportunities and create new services, but are frequently restrained by problems with their data [1]. Issues of quality, availability or accuracy appear as distinct barriers, but resolving these only constitute short-term solutions [2]. Harvesting value from data requires mastering the basics of information management, but this is not a job for the IT function alone [3]. Instead, the entire organization needs an overarching direction and here data governance has emerged as a promising approach. Data governance refers to who holds the decision rights and is held accountable for an organization's decision-making regarding its data assets [4]. It sets the direction for an organization's data management practices. However, data governance literature is still scarce. Scholars in the field construct data governance as frameworks of decision-domains based on theoretical and at times empirical synthesis, but rarely address processes of implementation and adoption in practice [5]. While conceptual studies are important, they provide little actionable direction for organizations. Furthermore, no studies report explicitly on the particular challenges of governing data in public organizations [6].
This paper examines data governance challenges faced by local government organizations at the municipal level of the public sector in Denmark. This is a particularly interesting case as Denmark is a world leading country in digitisation according to Europe's Digital Economy and Society Index [7]. Denmark introduced mandatory digital self-service with an 87% adoption rate as of 2016 [8]. This means that storage of digital data about citizens has exploded in the past years, demanding better data management practices. Also, the newest national digital strategy contains three goals that all depend on a number of underlying specific initiatives related to data. This includes better use of data to enable quicker case processing, public sector data as a driver for growth, and increased attention to protect data [9].
For Danish municipalities, who will be responsible for a large part of the implementation, the national initiatives will compel them to undertake structured ways of managing their data with data governance. Doing so, may in some cases seem irrelevant or even at conflict with their primary obligations as a public agency and thus complicate the endeavour to implement data governance. In addition, the creation of value from data requires both exploration of potential opportunities and exploitation of existing assets [10,11], which in the implementation of strategy may come in many different forms [12]. Exploring data is about generating new insights, while exploiting data about applying these insights [13,14]. As each have different objectives, it is crucial to recognize the distinction at the outset of any data initiative [13], including data governance. In this context, our paper addresses the research question: Why is it difficult for local government organizations to explore and exploit their data assets with data governance?
To elaborate, we focus on the management of large amounts of heterogeneous data from a variety of systems in local government organizations. This is based on the assumption that managing this data requires data governance. We address the research question through engaged scholarship [15]. Following the collaborative variant, we have engaged in joint formulation of problems with municipal practitioners to get an understanding of the challenges that might make it difficult to apply the data governance literature's recommendations in practice [16]. The paper is structured as follows. First, relevant literature is presented. Next, we describe the research approach and detail the data collection and analysis. Subsequently, we present our findings in the form of nine challenges that are central to the municipalities in relation to exploring and exploiting their data assets. The challenges are summarized as three overall themes that provide a succinct answer to the research question. We discuss the findings' contribution to research, implications for practice, and directions for future research. A short conclusion ends the paper.
Theoretical background
Information has been an issue of strategic importance for decades, but recent technological developments have enabled the storage of more information than ever. Data may be considered the building blocks of information [17], and so managing information as a strategic resource means ensuring responsible treatment of data as organizational assets. Organizations should therefore be aware of their data to use them effectively and ensure their quality; as volume increases, the complexity of managing data will as well [6]. Here, data governance emerges as a structured approach. Scholars frequently discuss data governance in the context of ensuring data quality, presenting this as one of the primary goals of data governance [18][19][20]. While quality is important, it is only one element of effective data governance, which must be driven by and aligned with business goals [18,[21][22][23]. Data governance may then be defined as companywide processes that specify decision-making rights and responsibilities aligned with organisational goals to encourage desirable behaviour in the treatment of data as an organisational asset [24,25]. In other words, data governance sets the principles and direction for an organization's data management practices.
Only few studies within the data governance field focus on implementation and adoption of data governance in practice. Begg & Caira investigate the dilemmas faced by organizations when pursuing data governance, in the context of small to medium enterprises (SMEs) [26]. They identify a series of relevant "quandaries". First, organizations may not recognize the inherent value of their data, nor will they perceive it as existing separate from the IT systems, and some organizations may not even be able to access their data, because it is "trapped" with vendors. Secondly, organizations may understand the value data governance can deliver, but may ultimately decide that the effort to achieve this by far exceeds the perceived benefits [26]. These findings indicate that practitioners find it difficult to grasp the value-creating potential of data governance. In another study, Begg & Caira also found that managerial and executive understanding and awareness of data have major influence on the organization's ability to conceive a data governance strategy [27]. These studies are relevant to our research because they highlight the importance of understanding value, when implementing data governance, but it is not clear whether this applies to public organizations as well.
Another study has focused explicitly on the public sector, albeit with the perspective of establishing a master data management function [28]. These findings suggest establishing master data management is difficult due to a series of paradoxes. First, there is a need to identify data owners, but people remain committed to group specific functions, and not to organization-wide development. Second, although there is a recognized need for data governance, tasks and responsibilities are avoided. Third, there is a recognized need for an organization-wide vision of master data, yet individual views remain the order of the day [28]. These findings suggest implementing data-related programs across organizational units is challenging in a public-sector context, but it remains unclear whether this also applies to data governance.
The abovementioned findings suggest that practitioners find it difficult to discover, understand and harness the value-creating potential of data. As such, they provide a starting point for addressing the research question and understanding the challenges of governing data in municipalities. Concepts of exploration and exploitation [10,12,29] are used to characterize the identified challenges, because distinguishing between these is crucial in data initiatives [13].
Research approach
Our methodology can be described as engaged scholarship [15] with a particular focus on the formulation of problems with (not for) practitioners [16]. We followed the collaborative form of engaged scholarship to "co-produce basic knowledge about a complex problem" [15]. We collaborated with participants in a Danish network for municipal IT practitioners and researchers. The network was founded in 2009 as part of a joint IS research project with a number of municipalities, seeking to increase the degree of public digitalisation and municipalities' ability to innovate with IT. In its current form, the network consists of 13 municipalities and a dozen IS researchers, who collaborate on a set number of workshops and theme days each year. The participating members have previous experiences with engaging in academic IS scholarship, which helped the researchers gain access to the setting, create trust with informants, facilitate cultural understanding and establish rapport [30,31]. It should be noted that our point of departure for this study is the problem "owners" in practice, namely people working with data in local government. Therefore, we are only concerned with the citizens perspective to the degree, that practitioners bring it up.
Research setting
Denmark is a consensual and technologically advanced society. In the Digital Economy and Society Index (2017) that summarises indicators on Europe's digital performance and competitiveness, Denmark holds the first place and is described as a world leader in digitisation [7]. The Danish national digital strategy for 2016-2020 [9] aims to further enhance the use of IT in the public sector in order to deliver good, efficient and coherent services to citizens and businesses. Moreover, the strategy contains three goals that incorporate better use of data to enable quicker case processing, public sector data as a driver for growth, and increased attention to protect data. While interpreting the national digital strategy entails complexities of prioritization, it is highly influential on both central and local government practice [32]. Danish municipalities are somewhat de-centralised and they commission and manage their own data repositories in addition to the central registers. The municipalities are not merely the executive wing of central government. They have a great deal of autonomy in how they organize the delivery of public services and are responsible for a large part of the Danish welfare state, with primary education, day care for children, social welfare, and care of the elderly as important examples. Danish municipalities constitute an interesting case because they already collect and manage vast amounts of data on their citizens. Historically, the governance of IT acquisition and development has been decentralized, focusing mainly on individual and departmental needs, causing a current landscape that is fragmented and consists of hundreds of different systems, across departments within a single municipality. This has resulted in vast amounts of heterogenous, and at times redundant, data across the municipalities, that is in dire need of governance. Ensuring that these data are managed responsibly (exploitation), as well as used to generate new value (exploration) has implications for Danish society as a whole.
Data collection
The empirical data was collected by the first author using semi-structured group interviews. The group interview is a qualitative data gathering technique that has the advantages of being inexpensive, data rich, flexible, stimulating to respondents, recall aiding, cumulative and elaborative, over and above individual responses [30]. The participants were members of the abovementioned network and came from several different layers of the municipalities, ranging from managers, to consultants, project managers and technical experts.
As there is a lack of existing empirical studies of data governance in the public sector, group interviews were used in this study to gain empirical data from several hierarchical levels in order to cover a "variety of voices" [33]. Municipalities differ across many characteristics, including size and digital maturity. Group interviews across (and among) practitioners in municipalities therefore allowed for nuances of practice to be brought forth, as the participants could discuss and reflect amongst themselves [30].
The data was collected through six sessions spanning a period of three months (see Table 1). Two sessions were of a general character, including participants from different municipalities, and three sessions involved participants from the same municipality. One session had only one participant and therefore functioned as a classic semi-structured interview [33].

Table 1. Activities for data collection

#  Activity                  Participants                                 Hours
1  General group session     13 representatives from 9 municipalities     3
2  Individual session        1 representative from 1 municipality         1.5
3  Individual group session  4 representatives from 1 municipality        1.5
4  Individual group session  2 representatives from 1 municipality        1
5  Individual group session  2 representatives from 1 municipality        1
6  General group session     20 representatives from 12 municipalities    6
   Total                     34 representatives from 13 municipalities    14

The first session introduced data governance as a viable practice, and we received feedback from practitioners regarding the necessity for and utility of such an approach in municipal settings. Between sessions #1 and #6, the first author conducted four interviews with the purpose of unfolding specific barriers, challenges, or difficulties related to working with data. As these sessions had fewer participants, from the same organization, more time was available for each participant to express their views and it was possible to touch upon topics of a more sensitive character. The last session focused on presenting, discussing and validating the findings, and on encouraging further dialogue on how to work with data governance going forward.
Data analysis
The data was coded following the conventional approach to qualitative content analysis [34]. First, transcripts and field notes were read, and interesting or relevant parts were highlighted and collected in a separate document. The extracted quotes were then arranged into challenges and named. The material was coded again, using the newly constructed challenges to collapse duplicates and reduce internal contradictions. The process was repeated until challenges could no longer be created, collapsed, or split.
In order to reduce potential bias, the last general group session functioned as a site to test the validity of the identified challenges. The last group session had the highest turn out, and thus allowed for valuable refinement of the findings from a variety of perspectives. The analysis resulted in the identification of nine challenges that were further conceptualized at a higher level of abstraction as three main themes (see Table 2).
Findings
In this section, we present each theme and then detail the challenges it consists of.
Data value and overview
The first three challenges show that efforts to explore and exploit data are complicated by short-term perspectives on usage, a lacking overview of existing data sources, and a poor understanding of data value. The three challenges can be summarized under the theme Data value and overview, which emphasizes the municipalities' struggle to understand and express the value-creating potential of data.
Short-term perspective on data usage. A recurring challenge throughout all the sessions revolved around the lack of understanding of what data can be used for beyond the context of its immediate practice. Currently, data is primarily considered convenient for performing a specific workflow and a by-product of working in a digital environment: "Many of those who work with data are not used to thinking of data as an asset [...]". What needs to happen is a change of mindset, going from a reactionary to a proactive view on data. However, this will not happen by itself. The employees have to be introduced to the somewhat abstract idea of seeing data as an asset: "People need to be told this story that you can see data as either something you depend on in being reactive, or where you consider it an asset [and] become a little more proactive" (Development consultant). Central points highlighted under this challenge indicate that municipalities find it difficult to start exploring the value-creating opportunities that data might have, because Danish municipal employees are very focused on their primary obligation, i.e. the day-to-day operations of welfare services. Becoming more data-driven is therefore a major change to the organizational mindset.
Value from data initiatives is difficult to understand. Although the participants show enthusiasm and see potential in working more structured with data, they find it challenging to express the potential value to stakeholders in the rest of the organization. It is especially difficult to frame the value of data initiatives in order to secure economic resources for data-related projects: "Our BI (Business Intelligence) system has been three years on the way, and it has taken us long to convince our management to spend just minimal resources on this. It's hard to sell the idea of infrastructure and data as [infrastructure] upward in the organization" (IT architect). While the benefits seem clear to the project members, it is challenging to communicate the value of data initiatives to executive levels. At the same time, other participants question the value but hear from other municipalities that it is 'the best thing' to do: "We find it hard to spot the value, but we know … that someone says it's just the best thing you can do. It's also a good foundation [to invest in data governance] and our gut feeling tells us it's a good idea, but we just want this specific use case that illustrates 'this is what we're going to create the foundation for'" (Financial consultant). What follows is an amalgamation of issues, where municipal practitioners attempt to secure resources to build an appropriate infrastructure for the future use of data (exploitation). Yet to succeed with this, they need a persuasive, illustrative use case (exploration) to convince the top layers of the organization of the relevance of investing in data as an asset. As such, issues of exploration and exploitation are closely tied together here.
Lack of overview of existing data. Related to the challenge of building appropriate infrastructure, most of the municipalities are challenged by fragmented enterprise architecture and legacy systems. In many cases, the municipalities do not even have access to some of their own data, as it is stored on servers placed with the vendors who delivered the original systems, and the vendors demand high fees for providing access. This makes it nearly impossible to gain an overview of what data actually exists, where it is, who has access to it, and how it may generate value: "One thing is the complexity of many different solutions, but it is something else to have 40 years of legacy systems that have been implemented at random. There was no consideration of infrastructure at that time […] we are sitting on a gold mine of data and knowledge that we do not even know about" (Head of IT). At the forefront is a very concrete obstacle to exploiting data assets, as they are downright difficult to access in legacy systems. At the same time, this makes exploring potential value generation nearly impossible, as no overview exists.
Data practices and collaboration
The next three challenges show that a lack of cross-organizational collaboration and high degrees of autonomy within the departments make it difficult to start governing data and exploiting data assets, while distrust in certain professional domains further complicates data exploration efforts. The challenges can be summarized under the theme Data practices and collaboration, which emphasizes that diverse, local practices make it difficult for municipalities to design and implement shared data governance principles and practices.
Autonomy within the different departments.
To ensure data treatment in line with the principles set forth by the data governance programme, some degree of standardized processes is necessary. Enforcing this in highly specialized and autonomous departments will be a central challenge according to several participants. The high level of autonomy is pointed to as a distinct feature of the public as opposed to the private sector: "This is the way you implement decisions, and it is very different [from the private sector], and there is a lot of room for interpretation that makes things not so straightforward" (Head of IT). It is highly likely that the different departments will implement a local adaptation of a decision that fits their existing practice, rather than follow the standardized directions. To curb this problem, it is suggested to frame the principles as being of value to the departments, but this would vary too much between the different fields: "Ideally, it should be of
value, but there is a big difference between speaking to a technical department full of engineers, or [speaking to] nurses, pedagogues and teachers, because [then] you should really know your visiting hours and how to communicate" (Development consultant)
As such, exploiting data assets in departments that are used to, and comfortable with, working with data in a structured and systematic way will not require the same effort as it will in domains where exploiting data is not common practice.
Distrust toward data in social fields
In the same vein as the challenge of dealing with autonomous departments, some professionals remain highly skeptical towards data governance and the role of data in their particular domain. Especially departments within social fields remain distrustful, as their profession is about making individual, subjective judgements regarding sensitive cases: "Here …, it is more feeling for the individual case and [they are asking the question] what is it even data is. Here, the anxiety [regarding data] is more pronounced" (Financial consultant). In addition, some professionals fear that increased visibility of data about their cases to other parts of the organization will expose them. They worry it may result in someone higher up making decisions regarding their domain, based on this data, without consulting them. Especially a fear that others might misinterpret data is apparent: "People fear you interpret the data incorrectly, so just trusting that data is being treated and analysed correctly is a huge change-oriented project in itself" (Financial consultant). Overcoming a tradition of suspicion regarding data is perceived as a widespread challenge. Specialists do not trust that data will be exploited appropriately or adequately, and therefore remain skeptical about exploring avenues for new or better use of data.
Lack of cross-organizational collaboration
One of the opportunities many municipalities are very keen to pursue is combining data about a citizen from several systems across departments to gain a full overview of the individual. According to the participants, this would have a transformative impact on a wide variety of elements, from the way they monitor the effects of specific initiatives to the way they deliver services to the public. Yet, to do so, the different departments have to establish tight collaboration with each other, and this is difficult: "You can have a siloed organisation, and then work together across, with good processes. But we don't have that. We try to facilitate datasharing across with a BI-project, but those are just the terms. The departments simply don't collaborate." (Financial consultant). The wider the distance between the purpose a data governance process or principle serves and the person who has to adhere to it, the less meaningful it might appear: "The closer you move towards, where we meet the citizens and run everyday operations, the less meaningful [a data governance principle] can be experienced by the employees" (Head of Digitalization). As such, cross-organizational collaboration and deconstructing silos become a pivotal part of the process of exploiting data assets. Yet, the collaborative work required may appear the least meaningful to the employees who are closest to the data.
Data capabilities and politics
The last three challenges suggest that a (lack of) capabilities across departments and hierarchical levels makes it difficult to envision a strategic direction for the use of data across a municipality. The varying levels of maturity and a lack of understanding of the value-creating potential of data at both executive and political levels in the municipalities further complicate the process of exploring and exploiting data assets. These challenges can be grouped under the theme Data capabilities and politics, which emphasizes the need to take varying data capabilities across departments and management functions into account, in general, and in particular if the aim is to develop an organization-wide data governance programme.
Varying levels of data maturity across different departments. Data governance entails implementing processes and principles that are supposed to be enterprise-wide. However, it is currently not possible to design such a wide-reaching data governance programme for a municipality, because the different departments within the municipality have varying levels of data management maturity. Several municipalities point to the employment sector as very experienced in working with data: "The field of employment is extremely data-driven and guided by managing information, which it has been for many years and I think it's easy to notice how the employees have this experience and focus on data quality and data usage" (Head of Digitalization). On the other hand, the elderly sector is in some municipalities not experienced at all, and does not realize how welfare technology may change the foundation of its entire domain. In one case, it lacked a basic understanding of the role that IT can play in the profession: "We have just reached out to the elder area, because we have to create a digitization strategy. They do not have it in their consciousness and we would like to help them. The first meeting we had, they thought we were there to discuss which PCs they should have and what phones they should buy. And that was probably the last thing we came to discuss" (Head of IT). As such, this challenge is also at the intersection of exploring and exploiting data assets. In order to design and implement data governance for the municipality as a whole, it is necessary to consider the maturity of the individual departments. Depending on the department and its existing data and work practices, it may be more reasonable to focus on either exploration or exploitation of data assets, but the relationship between the two remains unclear, which is challenging in itself.
Lack of top-level support for data initiatives
Gaining support from the executive levels of the municipalities is framed as a common challenge. According to participants, this is because they lack a compelling use case that links working in a structured and systematic way with data to value in the municipal context. They agree that, right now, most data initiatives are powered by passionate individuals: "The passionate cannot drive this alone, because at one point there will be no more passion left. There must be top management support" (Project manager). While this challenge relates to the difficulty of understanding and expressing the value that data initiatives (#2) might be able to generate, achieving top-level support also has other objectives and consequences. For example, it may be easier to communicate the value of data to the rest of the organization if top management has understood it and helped frame the goals of data governance as related to the overall goals of the organization. As such, this challenge remains at the intersection of exploration and exploitation; executives cannot comprehend the value-creating potential of exploiting data assets until they have seen successful examples of exploration.
Lack of political focus on data usage in the municipal context. Some participants feel that digitalization and management of data should be on the political agenda for their municipality. While this may appear to be related to achieving top-level support, getting politicians to see the opportunities for strategic use of data goes beyond improving administrative processes. If data were involved in political discussion, it could shape the future development of the public sector. Engaging politicians will be a challenge, as few have the capabilities to understand the value of data: "No politicians can comment on this meaningfully. It is not a political issue … in the municipality and when I say that, I mean something like 'data is important because it can make us a better municipality' … But it's not there, it's only administrative" (IT architect). Similar to attaining top-level support, this challenge is also related to the interplay between exploiting and exploring data assets. As suggested by the challenge regarding distrust towards data in social fields (#5), both exploration and exploitation of data assets in a municipal context can become a politically infused endeavour, in that it may disturb some fundamental values. Bringing data usage onto the political agenda is thus both an issue of exploring data opportunities to raise awareness regarding their applicability, and an issue of exploitation, as powerful interests may influence its strategic direction.
Discussion
In this section, we discuss our findings in relation to the theoretical background section and our research question: Why is it difficult for local government organizations to explore and exploit their data assets with data governance? First, we discuss how each theme corroborates previous research on data governance, and how it relates to the broader context of e-government research (summarized in Table 3). Next, we discuss the findings' implications for practice and point to directions for future research.
Contribution to research
The theme Data value and overview extends Begg & Caira's findings from their SME study [26], where they found that the perception of the value-creating potential of data has a major effect on the pursuit of data governance initiatives. From our results, it becomes clear that a basic understanding of data value is also central to challenges with data governance in local government, and not only in SMEs. E-government initiatives are often complicated by certain value traditions that are embedded in managers' cultural environments, but rarely explicit and sometimes in conflict with one another. The managers in Danish local government may hold different value positions that can be both congruent and diverging. Thus, when it comes to assigning value to data in local government, many actors bring diverse interests that complicate opportunities for success. This is also highlighted by Guha & Chakrabarti [35] in their conceptualization of e-government networks. They argue that e-government projects are prone to failure if not understood as networks of actors who are forced to co-operate despite different goals, objectives, and cultures. Competing value positions, goals, objectives and actors are thus well-known issues in e-government research, and contribute to understanding the challenges within the first theme. The theme Data practices and collaboration extends the findings on establishing master data management in the public sector [28]. Here, they identified a series of paradoxes that point to the difficulty of establishing organization-wide support and responsibility for data initiatives in the public sector. Our findings show that diverse practices across different municipal departments also complicate establishing cross-organizational structures for data governance, and not only master data management. Implementing IT-enabled changes in the public sector requires that processes are incorporated in existing routines, which calls for consideration of situated practices and for institutionalizing the changes [36]. The friction between existing practices and the implementation of e-government initiatives is therefore neither new nor unexamined. Additionally, Juell-Skielse et al. [37] examined different modes of collaboration and expectations in inter-organizational e-government initiatives. They found that modes of collaboration do not exist in and of themselves; rather, they are inherently related to the benefits they are presumed to produce. Establishing cross-organizational collaboration on data initiatives may therefore require a heightened focus on the expected benefits.
Last, the theme Data capabilities and politics also extends Begg & Caira's other work on data governance in SMEs [27]. They found that an organization's ability to conceive a strategic direction for its data governance depends on the top level's capabilities for understanding data's value-creating potential. Our results suggest that perspectives on data in local government remain short-term, with a poor understanding of data value at the executive and political levels. In the e-government literature, capability maturity implies a focus on the relationship between input areas, such as human, structural, relational, and IT capital, and the resulting maturity stages [38]. Practitioners conducting maturity assessments of their local governments can use them to prioritize strategies and resources [38], and similarly, consideration of data capability maturity might enable municipal actors to focus their exploration and exploitation efforts. Lastly, Rowley [39] conceptualizes a typology of e-government stakeholder roles related to stakeholder benefits. Understanding e-government stakeholders and mapping the benefits they gain in relation to data governance initiatives may help mobilize support from the appropriate roles.
While the three themes corroborate and extend the existing data governance literature, they are not new issues in the e-government literature. This implies that challenges related to exploration and exploitation of data assets in public organizations require specific attention, and that examining the implementation of data governance in local government should be done with the broader e-government field in mind.
Implications for practice and future research
The central implication of this study is that municipal practitioners can understand their challenges with data governance in the context of the three themes.
While paradoxes are addressed in other strands of the literature [41], conceiving of challenges constitutes a useful way to become aware of potential pitfalls and to develop programmes that specifically overcome them. When initiating data governance programmes and attempting to implement more structured and systematic practices, it can be useful to consider how the challenges might affect initiatives. It may also help managers to identify the most urgent areas and thus prioritize scarce resources for data initiatives. Moreover, focusing on how the challenges relate to issues of exploring and exploiting data assets can assist practitioners in communicating value or getting started with designing and implementing processes. Our findings and the previous research discussed above suggest that data governance in local government is a large-scale change effort that requires much more than just the designation of roles and responsibilities. It requires attention to the three themes and to the broader issues examined in the e-government literature. We propose that future research delve into how the three themes of value, practices and capabilities relate to or affect each other, in order to conceptualize a relevant theoretical framing of these. While municipal practitioners are keen to pursue data-related opportunities, they struggle with issues of exploration and exploitation, according to the findings of this study. Studies that explore the three themes' interrelation, as well as how to take advantage of the interplay of exploration and exploitation activities, are encouraged. Finally, we must emphasize that our investigation of challenges in data governance is limited to the views within local government organizations. Involving citizens' perspectives and rights pertaining to governing often personal and sensitive data is a very important direction for future research, and a well-known problem in the e-government literature [42][43][44].
Conclusion
Our research shows that it is difficult for local government organizations to explore and exploit their data assets with data governance for three main reasons. First, they struggle to understand and communicate the value that data and data governance might be able to create. Second, diverse, local practices complicate the design and implementation of a shared, standardized approach to data. Third, varying data capabilities across departments and among managers and politicians make it difficult to envision a strategic direction for the use of data across the organization as a whole. These three themes may assist practitioners who wish to get started with data governance initiatives. Our findings corroborate and extend the existing data governance literature for local government organizations and, in addition, suggest that the identified themes relate to broader e-government issues. | 2018-10-18T00:36:11.986Z | 2018-06-25T00:00:00.000 | {
"year": 2018,
"sha1": "fb7e265ed02022707c3233592c468bf239e91649",
"oa_license": "CCBY",
"oa_url": "https://hal.inria.fr/hal-02068930/file/476633_1_En_2_Chapter.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "01e6f4b2abf341e8386e8ace3e182460c41df8bb",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": [
"Computer Science",
"Political Science"
]
} |
269945112 | pes2o/s2orc | v3-fos-license | Stress, Dysregulation of Rhythms, and Bipolar Disorder: A Challenging Field of Research
Clarifying the mechanisms by which circadian rhythms regulate biology is a central issue in directing life choices in the immediate future and presents an interesting challenge for current scientific research [...].
Clarifying the mechanisms by which circadian rhythms regulate biology is a central issue in directing life choices in the immediate future and presents an interesting challenge for current scientific research. The importance of this research aspect is well established by the Nobel Prize in Physiology or Medicine awarded in 2017 to Hall, Rosbash, and Young, who uncovered key elements of biological rhythm systems, including those in human bodies. Specifically, their work demonstrated how the biological rhythms of the body align with the Earth's rhythms.
The internal coordination of physiological and behavioral activities with daily environmental variations is regulated by 24-h 'circadian' cycles [1]. Misalignment of the circadian clock alters the sleep-wake cycle, resulting in ineffective responses to environmental cues. It has been hypothesized that the abnormal division of cancer cells is also linked to disruptions in circadian rhythms, and associations have been found between changes in circadian rhythms and some cancers [2,3], with a specific role noted for light pollution in breast cancer [4]. Dysregulation of circadian rhythms has also been identified as a risk factor for cardiovascular diseases, obesity, asthma, and diabetes [5,6]. Indeed, both internal and external signals influence the clock, but light is the most significant factor [1].
It is known that sleep rhythm is a central element in both bipolar disorders and in individuals with hyperactivity and high energy levels, even in the absence of mood disorders [7,8]. Thus, misalignment of personal and social rhythms-such as those for sleeping, eating, and social commitments and contacts, which are closely connected with circadian biorhythms-has been associated with what is called 'the spectrum of bipolar disorders'. In a neo-Kraepelinian view, the term 'bipolar spectrum' refers to a pyramid of conditions. At the apex are the most severe forms of bipolar disorders, and at the base are many individuals characterized by hyperactivity, high energy, and a tendency for novelty-seeking, but without clear mood disorder connotations. In the middle, moving towards the apex, are all the vulnerability scenarios with sub-clinical symptoms that transition into mood disorders ranging from the mildest to the most severe, including depression, impulse control disorders, mood swings, anxiety disorders, personality disorders, and substance abuse disorders [9][10][11][12][13][14]. In this perspective, various temperaments could represent subclinical forms of bipolar disorder or simply be a risk factor for them [15,16].
There is evidence of vulnerability in the spectrum of bipolar disorders towards triggers capable of altering social rhythms and biorhythms. Light pollution has been found to be associated with a high prevalence of hypomania in megacities and may even play a role in the genesis of these disorders [17][18][19][20][21]. Socio-economic and cultural changes in current lifestyles have led to substantial modifications in light patterns, which may be one of the causes of the increase in bipolar disorders in our society, or at least of the paradoxical stability over time of such a strongly disadvantageous disease. This suggests that at least the sub-threshold forms of the bipolar spectrum have an adaptive connotation in modern life [20]. This perspective transcends the standard nosographic categorical approach to studying the continuum of hyperthymic traits and attitudes that can be adaptive depending on external stimuli, thus embracing a dimensional approach that can also improve the characterization of prodromal symptoms and the assessment of symptomatology. In fact, it has been found that non-pathological personality characteristics and traits of hyper-energy/hyperactivity (without any diagnoses of mood disorders) frequently carry genetic variants associated with bipolar disorder. That is, those genetic characteristics would not be typical of bipolar disorder "itself" but of hyper-energy/hyperactivity and novelty-seeking traits, including in people with the disorder, as a sort of "tip of the iceberg" [22][23][24][25].
According to this approach, the onset of a case of bipolar disorder could be attributed to the convergence of genetic variants associated with hyper-energy and hyperactivity, in association with certain types of stress conditions in different life phases, particularly those of heightened vulnerability [26][27][28].
The recent pandemic and the subsequent implementation of lockdowns in several countries have inadvertently subjected people to a kind of environmental experiment, which has altered many risk factors for bipolar spectrum disorders, creating a convergence of stressors and the dysregulation of social and biological rhythms [29,30].
This framework has led to the following observations: Older adults with dysregulation of social and personal rhythms before the pandemic and lockdown were found to be at risk for depressive episodes during the lockdown [31,32].
Stricter lockdown measures triggered more depressive episodes in people with bipolar disorders compared to less strict lockdown measures in two geographically proximate cities (i.e., Cagliari, Italy vs. Tunis, Tunisia) [33]. Healthcare workers, who were under significant pressure due to the pandemic and had more dysregulation of personal and social rhythms, were found to be at greater risk for burnout syndrome and mood disorders [34][35][36][37]. The dysregulation of social rhythms negatively impacts the course and outcomes of chronic diseases and immune responses [38,39].
The attention to the misalignment of personal rhythms related to stress has prompted reflection on the observation that individuals with a positive score on screening instruments designed for bipolar disorder, such as the Mood Disorder Questionnaire (MDQ), have been found to exhibit dysregulation of personal rhythms, particularly in relation to sleep [40][41][42][43]. In fact, the MDQ was considered useless by many researchers as a screening tool because it identified too many false positives [44,45]. Some researchers have suggested that it may be the diagnostic criteria for bipolar disorder itself that set too high a standard, and that a positive MDQ score may identify a clinically significant spectrum that is closely associated with bipolar disorder [46]. Indeed, MDQ positivity has been found to be associated with low levels of health-related quality of life, even in individuals without psychiatric diagnoses [46].
This line of research has led to the hypothesis of the existence of three different levels of hyperactivation/hyper-energy (ranging from normality to pathology) [47,48].
The first level represents an adaptive increase in energy, as observed in athletes achieving excellent results [49]. The second level is characterized by strong and prolonged stimulation of stress hormones and is identified by a positive score on the MDQ (without the individual receiving a diagnosis of bipolar disorder), as seen in healthcare workers experiencing burnout syndromes. The third level corresponds to overt mood pathology, such as bipolar disorder itself.
Consequently, a dysregulation of mood, energy, and social rhythms syndrome (DYMERS), corresponding to the second level of the ranking, has been proposed as a hypothesis. This perspective views DYMERS as a vulnerable condition that could potentially evolve into other disorders, with a significant risk for bipolar disorder, among others. The nature of this evolution is determined by individual-specific susceptibility based on genetic profiles and specific sensitivity to particular stresses arising from gene-environment interactions [47,48].
Conclusions
The scenario outlined in our discussion of current research on stress, dysregulation of rhythms, and the bipolar spectrum provides fertile ground for future investigation. This Special Issue aims to contribute to this field, offering an important step forward and serving as a stimulus for future research endeavors in the years to come. While our contribution may not provide conclusive answers at present, it undoubtedly represents a significant advancement in our understanding of these complex issues. | 2024-05-22T15:21:01.583Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "6fcbe205cd9de54c6ea97b73ced60ed62ea9b484",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/13/10/3014/pdf?version=1716213588",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72483332a6759afaff5c92d082f94d31e0e2f627",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6962304 | pes2o/s2orc | v3-fos-license | Persistent reflux symptoms cause anxiety, depression, and mental health and sleep disorders in gastroesophageal reflux disease patients
Some patients with gastroesophageal reflux disease experience persistent reflux symptoms despite proton pump inhibitor therapy. These symptoms reduce their health-related quality of life. Our aims were to evaluate the relationship between proton pump inhibitor efficacy and health-related quality of life and to evaluate predictive factors affecting treatment response in Japanese patients. Using the gastroesophageal reflux disease questionnaire, 145 gastroesophageal reflux disease patients undergoing proton pump inhibitor therapy were evaluated and classified as responders or partial-responders. Their health-related quality of life was then evaluated using the 8-item Short Form Health Survey, the Pittsburgh Sleep Quality Index, and the Hospital Anxiety and Depression Scale questionnaires. Sixty-nine patients (47.6%) were partial responders. These patients had significantly lower scores than responders in 5/8 subscales and in the mental health component summary of the 8-item Short Form Health Survey. Partial responders had significantly higher Pittsburgh Sleep Quality Index and Hospital Anxiety and Depression Scale scores, including anxiety and depression scores, than those of responders. Non-erosive reflux disease and double proton pump inhibitor doses were predictive factors of partial responders. Persistent reflux symptoms, despite proton pump inhibitor therapy, caused mental health disorders, sleep disorders, and psychological distress in Japanese gastroesophageal reflux disease patients.
Introduction Gastroesophageal reflux disease (GERD) is defined as "a condition that develops when the reflux of stomach contents causes troublesome symptoms and/or complications". (1) GERD is categorized into erosive reflux disease (ERD), defined as the presence of esophageal mucosal breaks, and non-erosive reflux disease (NERD), defined as the presence of GERD symptoms without esophageal mucosal breaks. In East Asian countries, unlike in Western countries, more than half of GERD patients have a mild form of ERD or NERD, as identified by endoscopic examination. (2,3) Recently, the number of GERD patients in Japan has increased and the prevalence of GERD in Japan is now almost the same as that in Western countries. This increase is due to an increase in gastric acid secretion and a decrease in the Helicobacter pylori (H. pylori) infection rate, as well as to the increased attention that GERD has gained, and advances in the concept of GERD itself. (2,4) In studies in Western countries, the severity and frequency of GERD symptoms have been shown to increase physical and mental health disorders, and night-time symptoms have been reported to cause sleep disturbances in more than half of GERD patients. (5,6) Proton pump inhibitors (PPIs) strongly inhibit gastric acid secretion, and are generally the first choice of treatment for GERD. (1) Many studies have reported that PPI therapy improved physical health, mental health, and sleep disorders in GERD patients. (7)(8)(9)(10) However, it has also been reported that some GERD patients experience persistent and troublesome symptoms despite PPI therapy, and their health-related quality of life (HRQOL) and sleep are often badly affected. (11,12) Physical and mental health levels, sleep disorders, and anxiety and depression levels are often evaluated by the 8-item Short Form Health Survey (SF-8), the Pittsburgh Sleep Quality Index (PSQI), and the Hospital Anxiety and Depression Scale (HADS), respectively. There have been few studies that have used all 3 questionnaires. Moreover, response rates to PPI therapy and causative factors affecting HRQOL in GERD patients receiving PPI therapy differ by country. The aims of this observational study were to evaluate the relationship between the efficacy of PPI therapy and HRQOL in Japanese GERD patients receiving PPI therapy, using the SF-8, PSQI, and HADS questionnaires, and to examine predictive factors affecting response to PPI therapy.
Materials and Methods
Patients. This study was performed as an observational study to assess the association between symptomatic response to PPI therapy and HRQOL in Japanese GERD patients receiving PPI therapy at Nagoya City West Medical Center and Nagoya City University between October 2012 and February 2013. During this study period, we enrolled 145 consecutive GERD outpatients at Nagoya City West Medical Center and Nagoya City University Hospital. The subjects of this study were GERD patients, older than 20 years of age, who were undergoing continuous PPI therapy and had been receiving treatment for over 2 months. GERD had been diagnosed based on the presence of the typical reflux symptoms of heartburn and/or acid regurgitation; these symptoms occurred more than twice a week prior to PPI therapy. PPI therapy consisted of 10 or 20 mg omeprazole, 15 or 30 mg lansoprazole, or 10 or 20 mg rabeprazole once a day. Prior to PPI therapy initiation, an endoscopic examination was performed in all patients to evaluate mucosal breaks according to the Los Angeles (LA) classification system, with NERD described as LA grade 0 and ERD described as LA grades A-D. For patients with more than 1 endoscopic examination, the worst severity was recorded.
Study design. In this study, all patients were asked to evaluate their HRQOL and PPI therapy response by filling out a gastroesophageal reflux disease questionnaire (GerdQ), SF-8, PSQI, and HADS questionnaires, and 1 further questionnaire consisting of 4 original questions related to life-style factors [alcohol, smoking, non-steroidal anti-inflammatory drugs (NSAID) intake, and PPI daily dosage]. All aspects of this study were approved by the Ethical Committees of both the Nagoya City West Medical Center and the Nagoya City University Graduate School of Medical Sciences. All patients agreed to take part in the study and answered all questions in each of the questionnaires.
Evaluation of the response to PPI therapy using the GerdQ questionnaire. The GerdQ is a 6-item questionnaire that was recently developed to help identify patients with GERD. Using the results of this questionnaire, we classified the 145 patients as either those in whom GERD was well controlled with PPI therapy (responders) or those in whom it was not well controlled with PPI therapy (partial responders). Responders were defined as patients who reported 0-1 days of persistent reflux symptoms in each of the 4 questions concerning heartburn, regurgitation, sleep disturbances, and additional medications in the GerdQ. On the other hand, partial responders were defined as patients who reported 2-3 days of persistent reflux symptoms in at least 1 of the 4 questions concerning heartburn, regurgitation, sleep disturbances, and additional medications in the GerdQ. (13) Evaluation of physical and mental health using the SF-8 questionnaire. The SF-8 is an 8-item, self-rating questionnaire which assesses HRQOL. The scores for the 8 domains [general health (GH), physical functioning (PH), role-physical (RP), bodily pain (BP), vitality (VT), social functioning (SF), mental health (MH), and role-emotional (RE)] and 2 summaries [physical component summary (PCS) and mental component summary (MCS)] were derived from this questionnaire. (14) The "Manual of the SF-8 Japanese Version" was used in this study. The mean score for the Japanese general population is 50 points for each domain and summary. A score of <50 was considered to indicate impaired HRQOL, and higher scores indicated a good level of functioning and well-being in comparison with the Japanese general population. (15) We compared the SF-8 scores between responders and partial responders.
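To make the classification rule above concrete, a minimal illustrative sketch is given below. It is not part of the original study's analysis; the function name, input format, and the reading of "2-3 days" as "2 or more symptom days" are our assumptions.

```python
# Illustrative sketch of the responder / partial-responder rule described above.
# Field names and the ">= 2 days marks a partial responder" reading are assumptions,
# not the authors' published code.

GERDQ_IMPACT_ITEMS = ("heartburn", "regurgitation", "sleep_disturbance", "additional_medication")

def classify_ppi_response(days_per_item: dict) -> str:
    """Classify a patient from the number of symptom days (past week) reported
    for the four GerdQ items used in the study."""
    days = [days_per_item[item] for item in GERDQ_IMPACT_ITEMS]
    # Responder: 0-1 days on every item; otherwise partial responder.
    return "responder" if all(d <= 1 for d in days) else "partial responder"

# Example: a patient reporting 2-3 days of heartburn would be a partial responder.
print(classify_ppi_response({"heartburn": 2, "regurgitation": 0,
                             "sleep_disturbance": 1, "additional_medication": 0}))
```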
Evaluation of sleep disorders using the PSQI questionnaire. The PSQI is a 19-item, self-rating questionnaire which assesses sleep quality and disturbances over a 1-month time interval. The 19 individual items generate 7 "component" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbance, use of sleeping medication, and daytime dysfunction. Each component yields a score from 0 to 3, and the components are summed to yield a total score ranging from 0 to 21. The more sleep disturbances suffered by a patient, the higher the PSQI score. The validated Japanese version was used in this study, and a PSQI score of >5.5 was considered to indicate impaired sleep. (16,17) We compared the PSQI scores between responders and partial responders.
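As a quick illustration of this scoring convention, the following minimal sketch (our own; component names are shorthand and each component is assumed to be pre-scored 0-3) sums the seven components and applies the >5.5 cutoff used in the study.

```python
# Minimal sketch of the PSQI global score and the impaired-sleep cutoff used in the study.

PSQI_COMPONENTS = ("subjective_quality", "latency", "duration", "efficiency",
                   "disturbance", "medication_use", "daytime_dysfunction")

def psqi_global_score(components: dict) -> int:
    """Sum the seven PSQI component scores (each 0-3) into a global score (0-21)."""
    return sum(components[name] for name in PSQI_COMPONENTS)

def has_impaired_sleep(components: dict, cutoff: float = 5.5) -> bool:
    """A global score above the cutoff (>5.5 in the Japanese validation) indicates impaired sleep."""
    return psqi_global_score(components) > cutoff
```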
Evaluation of anxiety disorders and depression using the HADS questionnaire. The HADS is a 14-item, self-rating questionnaire which assesses psychological distress. It is designed to measure anxiety and depression, and its 2 subscales contain 7 items each. Respondents indicate the frequency of each symptom on a 4-point Likert-type scale. The anxiety subscale (HADS-A) and depression subscale (HADS-D) scores were calculated as the sum of their respective 7 item scores (each ranging from 0 to 21), and the total score was the sum of the 14 items (ranging from 0 to 42). HADS-A or HADS-D scores of 7 or higher were taken to indicate more serious anxiety or depressive symptoms, respectively. (18) Moreover, we classified the total HADS scores as follows: <7 points, "mild"; 7-11 points, "moderate"; >11 points, "severe". We compared the HADS scores between responders and partial responders.
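In the same spirit, a small sketch of the HADS scoring and the mild/moderate/severe grouping described above (our own illustration; the thresholds follow the text, while the input representation is assumed).

```python
# Sketch of HADS subscale scoring and the severity grouping used in the study.
# Items are assumed to be pre-coded 0-3; the assignment of items to HADS-A vs HADS-D
# follows the standard instrument and is not shown here.

def hads_scores(anxiety_items: list, depression_items: list) -> dict:
    """Return HADS-A and HADS-D (0-21 each) and the total score (0-42)."""
    hads_a = sum(anxiety_items)
    hads_d = sum(depression_items)
    return {"HADS-A": hads_a, "HADS-D": hads_d, "total": hads_a + hads_d}

def severity_from_total(total: int) -> str:
    """Grouping used in the study: <7 mild, 7-11 moderate, >11 severe."""
    if total < 7:
        return "mild"
    if total <= 11:
        return "moderate"
    return "severe"
```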
Evaluation of background factors in responders and
partial responders. Age, gender, body mass index (BMI), alcohol intake, smoking, H. pylori infection, NSAID intake, concomitant gastrointestinal medication other than PPI, esophagitis grade by LA classification, and PPI daily dose were evaluated in responders and partial responders by univariate analysis as background factors associated with the response to PPI therapy. Moreover, we performed a multivariate analysis to predict partial response to PPI therapy using the factors that had a p value <0.2 in univariate analysis.
Statistical analysis. Statistical analysis was performed using the Ekuseru-Toukei 2015 platform (Social Survey Research Information Co., Ltd., Tokyo, Japan). Values are expressed as mean ± SD or frequency (%). Student's t test was used to compare individual SF-8 component scores, PSQI scores, HADS component scores, and background factors between PPI responders and partial responders. Fisher's exact test was used for the analysis of HADS anxiety and depressive states, LA classification, and PPI daily dose. A logistic regression analysis was used to determine predictive factors for partial response to PPI in GERD patients, which were expressed as odds ratios (OR) with 95% confidence intervals (CI). A 2-sided p value of less than 0.05 was considered statistically significant.
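For readers who want to reproduce this style of analysis on their own data, the following is a hedged sketch using open-source Python tools; the original analysis was performed in Ekuseru-Toukei, so none of this is the authors' code, and the data frame and column names are hypothetical.

```python
# Hypothetical re-implementation of the reported analysis strategy.
# 'df' is assumed to hold one row per patient with the columns used below.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

def compare_groups(df: pd.DataFrame):
    resp = df.loc[df["group"] == "responder"]
    part = df.loc[df["group"] == "partial responder"]

    # Continuous scores (e.g., PSQI totals): two-sample t test, as in the study.
    t_stat, p_ttest = stats.ttest_ind(resp["psqi_total"], part["psqi_total"])

    # Categorical factors (e.g., NERD vs ERD): Fisher's exact test on a 2x2 table.
    table = pd.crosstab(df["group"], df["nerd"]).values
    _, p_fisher = stats.fisher_exact(table)

    # Logistic regression for predictors of partial response, reported as OR (95% CI).
    y = (df["group"] == "partial responder").astype(int)
    X = sm.add_constant(df[["nerd", "double_ppi_dose"]].astype(float))
    fit = sm.Logit(y, X).fit(disp=0)
    odds_ratios = np.exp(fit.params)
    conf_int = np.exp(fit.conf_int())
    return p_ttest, p_fisher, odds_ratios, conf_int
```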
Results
Patient clinical characteristics. The study population consisted of 145 GERD patients (mean age, 65.3 years; female, 49.0%; median GERD disease duration, 18 months; mean BMI, 22.9). A summary of the patient characteristics is shown in Table 1. PPI therapy response rate as determined by the GerdQ questionnaire. Seventy-six (52.4%) and 69 (47.6%) patients were identified as responders and partial responders to PPI therapy, respectively (Fig. 1).
Comparison of the SF-8 scores between responders and partial responders. The mean scores for the 8 domains of the SF-8 are shown in Fig. 2A. The GH, VT, SF, MH, and RE scores were significantly lower in partial responders than in responders (p<0.001, p<0.01, p<0.01, p<0.001, and p<0.01, respectively). The mean PCS and MCS scores are shown in Fig. 2B. In responders, the mean PCS and MCS scores were 46.480 and 50.824, respectively. In partial responders, the mean PCS and MCS scores were 45.181 and 45.162, respectively. The MCS score in partial responders was significantly lower than that in responders (p<0.001). There was no significant difference in PCS score between responders and partial responders.
Comparison of the PSQI scores between responders and partial responders. The mean PSQI scores are shown in Fig. 3. In responders and partial responders, the mean PSQI scores were 5.1 and 9.4, respectively. The PSQI score in partial responders was significantly higher than that in responders (p<0.001).
Comparison of the HADS scores between responders and partial responders. The mean HADS-A and HADS-D scores and the total HADS score are shown in Fig. 4A. In responders, the mean HADS-A, HADS-D, and total HADS scores were 3.826, 4.826, and 8.776, respectively. In partial responders, those scores were 6.857, 6.663, and 13.468, respectively. The HADS-A, HADS-D, and total HADS scores of the partial responders were significantly higher than those of the responders (p<0.001, p<0.01, and p<0.001, respectively). The ratios of "severe", "moderate", and "mild" were 31.3, 13.4, and 55.2%, respectively, in responders, and 69.4, 12.9, and 17.7%, respectively, in partial responders (Fig. 4B). There was a significant difference in the distribution of anxiety or depression levels between responders and partial responders (p<0.0001). The "severe" and "mild" ratios in partial responders were significantly higher and lower, respectively, than those in responders (p<0.0001 for both).
Comparison of the background factors between responders and partial responders. The background factor results for responders and partial responders are shown in Table 2. LA classification and PPI daily dose were significant factors associated with PPI response (p<0.01 and p<0.01, respectively). Age, gender, GERD disease duration, BMI, alcohol intake, smoking, and NSAID intake were not significantly associated with PPI response. Sixty-three patients (43.4%) were classified as having NERD. The proportion of partial responders with NERD was significantly higher than the proportion of responders (p<0.01). The proportion of responders classified as LA-C was significantly higher than the proportion of partial responders (p<0.05). Moreover, the proportion of responders receiving the regular PPI dose was significantly higher than that of partial responders (p<0.01),
whereas the proportion of partial responders receiving the double PPI dose was significantly higher than that of responders (p<0.05). Multivariate analysis identified NERD and a double PPI daily dose as predictive factors of partial response to PPI therapy (Table 3).
Discussion
This study evaluated the relationship between the efficacy of PPI therapy and HRQOL (physical health, mental health, sleep levels, anxiety levels, and depression levels) in Japanese GERD patients receiving PPI therapy and examined predictive factors affecting the response to PPI therapy. We found that approximately 47% of GERD patients receiving PPI therapy were partial responders and that these patients had significantly more mental health, sleep, anxiety, and depression disorders in comparison to responders. Moreover, responders did not display any mental health disorders and GERD patients generally suffer from some physical disorder, regardless of the response to PPI therapy. Furthermore, the grade of the endoscopic findings and the PPI daily dose were significant factors that affected the response to PPI therapy, and NERD and the double PPI daily dose were predictive factors of partial response to PPI therapy.
According to Jones et al., (13) a GERD patient receiving PPI therapy is defined as a partial responder when 2-3 days of persistent reflux symptoms are reported in at least 1 of 4 questions concerning heartburn, regurgitation, sleep disturbance, and additional medication in the GerdQ. In our study of GERD patients receiving PPI therapy for more than 8 weeks, the proportion of partial responders, identified using the GerdQ, was approximately 47%. In a systematic review, El-Serag et al. (19) reported that persistent and troublesome GERD symptoms remained in 17-32% of primary care patients receiving PPI therapy, and that 45% (30-60%) of participants reported persistent GERD symptoms in observational primary care and community-based studies. In our study of Japanese GERD patients, the proportion of partial responders was slightly higher than in these previous studies. One reason for the high proportion of partial responders in our study could be that our patients were enrolled at a core hospital and a university hospital, to which refractory GERD patients are often referred from primary clinics. In these patients, the effect of PPI therapy may have decreased over time, as they had often had GERD for a long time (median duration, 18 months) and had often been treated with PPI.
Our SF-8 data showed that partial responders had scores of <49 points for all SF-8 domains, PCS, and MCS, whereas responders had scores of <49 points for only 1 SF-8 domain and PCS. These results indicated that partial responders had significantly more mental health disorders than responders. A systematic review reported that the physical health and mental health scores of nonresponders to PPI therapy were, on average, 8-16% and 2-12% lower, respectively, than those of responders. (11) Contrary to these previous reports, there was no significant difference in physical health disorders between responders and partial responders in our study. However, the PCS scores of the responders and partial responders were both <47 points, which showed that GERD patients suffer from some type of physical disorder regardless of response to PPI therapy, compared with the general Japanese population. Hongo et al. (9) reported that, 8 weeks after the initiation of rabeprazole treatment, SF-8 scores had improved in their study. However, our study revealed that nearly half of GERD patients are still partial responders in spite of PPI therapy, and persistent reflux symptoms are strongly associated with mental health disorders.
We demonstrated that the mean PSQI score of 9.4 in partial responders indicated a serious degree of sleep disorders; this score was significantly greater than that in responders. On the other hand, the mean PSQI score in responders was <5.5, which indicated that responders did not suffer from more sleep disorders than the general Japanese population. This result revealed that persistent reflux symptoms are also strongly associated with sleep disorders. Fujiwara et al. (12) reported the occurrence of sleep disorders in 52.2% of Japanese GERD patients before PPI treatment. That study also reported that daily treatment with rabeprazole for 8 weeks decreased sleep disorders. However, Hiramoto et al. (20) reported that once-daily treatment with esomeprazole for 2 weeks decreased GERD symptoms, but did not decrease sleep disorders. Whether the sleep disorders of partial responders can be improved by PPI therapy alone is a subject for future analysis.
Our data showed that the mean HADS-A and HADS-D scores of responders and partial responders were both <7, which indicated that, on average, they did not suffer from serious symptoms of anxiety or depression. However, the total HADS score of partial responders was >11, which indicated that they displayed severe psychiatric symptoms. In addition, partial responders suffered from a significantly greater degree of anxiety and depression disorders than responders. Moreover, the proportion of partial responders with severe psychiatric symptoms was significantly higher than that of responders, while the proportion of partial responders with mild psychiatric symptoms was significantly lower than that of responders. Consistent with our results, a population-based study in Norwegian GERD patients reported that anxiety and depression were strongly associated with reflux symptoms. (21) However, contrary to our results, a study by Boltin et al. (22) observed that failure to respond to PPI therapy was not associated with anxiety or depression. In addition, a study in Korean patients observed that response to PPI therapy was not associated with anxiety or depression. (23) These conflicting results may be due to differences in the method used to measure response to PPI therapy. Our study revealed that persistent reflux symptoms are also strongly associated with anxiety and depression levels.
In our study, the background variables affecting the response to PPI therapy were LA classification and PPI daily dose, but not gender, BMI, alcohol intake, smoking, or NSAID intake, as determined by univariate analysis. The proportion of partial responders receiving a double PPI daily dose was significantly higher than that of responders. On the other hand, the proportion of responders with LA grade C classification or receiving a regular PPI daily dose was significantly higher than that of the partial responders. Moreover, we revealed, by multivariate analysis, that
NERD and a double PPI daily dose were significant factors predicting partial response to PPI therapy. We believe that partial responders took higher doses of PPI in an effort to improve persistent and troublesome reflux symptoms, whereas responders were satisfied with the decrease in GERD symptoms they achieved with the regular daily dose of PPI. In other words, it is very difficult to improve GERD symptoms completely with only PPI therapy. Several recent studies have suggested that female sex, lower BMI, NERD, esophageal hypersensitivity, non-acid reflux, and psychological factors were associated with poor response to PPI treatment in GERD patients. (23)(24)(25)(26) In East Asian countries, NERD was identified as a risk factor for sleep disorders, and the daily activity of NERD patients was affected more than that of ERD patients, consistent with our results. (3,4,12,25) The present study has some limitations. First, the number of study subjects was small, and we did not include healthy controls. Moreover, 3 different kinds of PPI treatment were used. At present, we are accumulating data on GERD patients with persistent symptoms to further explore factors affecting HRQOL. We want to examine whether the control of persistent reflux symptoms using other treatments can improve mental health, sleep, anxiety, and depression disorders. (27)(28)(29) In conclusion, nearly half of Japanese GERD patients receiving PPI therapy were partial responders, and their persistent and troublesome reflux symptoms caused mental health and sleep disorders, and psychological distress despite PPI therapy. In addition, responders did not suffer from mental health disorders, and all GERD patients had some form of physical disorder regardless of response to PPI therapy. Moreover, LA classification and PPI daily dose were identified as significant factors affecting the response to PPI, and NERD and the double PPI daily dose were predictive factors of partial response to PPI therapy. | 2018-04-03T03:34:27.225Z | 2016-05-21T00:00:00.000 | {
"year": 2016,
"sha1": "b7d6954d575a9f501b658cd55feada139060d63f",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jcbn/59/1/59_16-9/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7d6954d575a9f501b658cd55feada139060d63f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246634001 | pes2o/s2orc | v3-fos-license | Loss-Induced Quantum Revival
Conventional wisdom holds that quantum effects are fragile and can be destroyed by loss. Here, contrary to general belief, we show how to realize quantum revival of optical correlations at the single-photon level with the help of loss. We find that, accompanying loss-induced transparency of light in a nonlinear optical-molecule system, quantum suppression and revival of photonic correlations can be achieved. Specifically, below critical values, adding loss into the system leads to suppressions of both optical intensity and its non-classical correlations; however, by further increasing the loss beyond the critical values, quantum revival of photon blockade (PB) can emerge, resulting in loss-induced switch between single-PB and two-PB or super-Poissonian light. Our work provides a strategy to reverse the effect of loss in fully quantum regime, opening up a counterintuitive route to explore and utilize loss-tuned single-photon devices for quantum technology.
Loss is ubiquitous in nature, which is usually regarded as harmful and undesirable in making and operating quantum devices. Very recently, loss has been found to play an unconventional role in non-Hermitian physics [1][2][3][4], such as loss-induced transparency of light [5,6], loss-induced lasing revival [7], and loss-induced nonreciprocity [8,9]. These pioneering works, however, have mainly focused on the classical regime, i.e., studying loss-tuned optical intensity, instead of the quantum correlations of light. Understanding the role of loss in engineering purely quantum effects not only facilitates the development of open quantum theories, but also provides a practical way to fabricate loss-controlled quantum devices that are inaccessible by conventional means, and allows exploring their applications in quantum technology.
Here we show that, accompanying the classical revival of optical intensities, quantum correlations of light can also be revived by adding loss in an optical compound system. We note that in the pioneering experiments on loss-induced transparency [5,7], classical suppression and revival of optical transmission are attributed to the emergence of an exceptional point (EP), featuring the coalescence of both the complex eigenvalues and their corresponding eigenstates [56,57]. In contrast, we find that in our system, quantum suppression and revival of optical correlations precisely correspond to the conditions of two-photon resonance and excitation-spectrum mode coalescence. More interestingly, we also find that different types of quantum correlations can emerge in the revived light, by merely increasing the loss (via placing an external nanotip near the optical resonator), resulting in loss-tuned quantum switches between single-photon and two-photon blockades. Our work extends loss-induced effects into the purely quantum regime, opening up a promising way to study various quantum effects with lossy synthetic materials [9,58] or topological structures [59,60], as well as to build loss-tuned single-photon devices for quantum engineering [61][62][63] and quantum metrology [64][65][66].
FIG. 1. Loss-induced suppression and revival in an optical compound system. (a) A whispering-gallery-mode resonator µR1 with Kerr-type nonlinearity χ coupled to a linear optical cavity µR2 with additional loss γ_tip induced by a Cr-coated nanotip. (b) The locations of Hamiltonian exceptional points (HEPs, black dashed line) agree well with those of Liouvillian exceptional points (LEPs) obtained through the fully quantum simulations [67]. Here, we focus on the case of J/γ_1 = 2 (red solid curves). (c) The EP at γ_tip/γ_1 = 8.9 leads to classical and quantum critical points, CP_c,q, in the mean photon number N_1 and the quantum correlation g^(2)(0), respectively. Here, CP_q↓ and CP_q↑ are related to the quantum suppressive and revived processes, respectively. For the experimentally accessible parameter values, see the main text.
The intrinsic losses of the two resonators are γ_j (j = 1, 2). The total loss of µR1, denoted γ_1 below, is the sum of its intrinsic loss and the loss γ_ex induced by the coupling between the resonator and the fiber taper. An additional loss γ_tip is introduced on µR2 by a chromium (Cr) coated silica-nanofiber tip, featuring strong absorption in the 1550 nm band [7]. The strength of γ_tip can be increased by enlarging the volume of the nanotip within the linear cavity mode field, leading to a linewidth broadening without observable change in resonance frequency [7]. Thus, the total loss of µR2, denoted γ_2, is the sum of its intrinsic loss and γ_tip.
We study the eigenenergy spectrum of this system by considering the effects of loss. The eigenstates |ψ^{±,0}_{1,2}⟩ are superpositions of the Fock states |m, n⟩ with m photons in µR1 and n photons in µR2 [67]. The complex eigenvalues of this non-Hermitian system in the one-photon excitation subspace are found as λ^±_1 = ω_c − iΓ ± (J² − β²)^{1/2}, where ω_c is the resonance frequency of the optical modes and J is the inter-resonator coupling strength; the real and imaginary parts respectively give the eigenfrequencies ω^±_1 and the linewidths κ^±_1. Here, Γ = (γ_1 + γ_2)/4 and β = (γ_2 − γ_1)/4 quantify the total loss and the loss contrast of the system, respectively.
The Hamiltonian EPs (HEPs) are defined as the spectral degeneracies of the non-Hermitian Hamiltonian [56,57], which emerge for λ^+_1 = λ^−_1, i.e., for J = |β| = (γ_2 − γ_1)/4. For a full quantum picture, we study Liouvillian EPs (LEPs) including the effect of quantum jumps [1,67]. As shown in Fig. 1(b), the LEPs and HEPs occur at the same positions, indicating a good agreement between the semiclassical and fully quantum approaches [1].
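As an illustration of how this eigenvalue coalescence can be located numerically, the following Python sketch evaluates λ^±_1 as a function of the nanotip-induced loss and reports the point where the square-root term vanishes. The parameter values (J/γ_1 = 2 and a small intrinsic loss for µR2) are assumptions chosen only to mimic the regime of Fig. 1, not the exact experimental values.

```python
import numpy as np

# Illustrative parameters, in units of the total loss gamma_1 of uR1 (assumed values).
gamma1 = 1.0          # total loss of uR1
gamma2_0 = 0.1        # assumed intrinsic loss of uR2
J = 2.0 * gamma1      # coupling strength, J / gamma_1 = 2 as in Fig. 1
omega_c = 0.0         # work in a frame where the resonance frequency is zero

gamma_tip = np.linspace(0.0, 15.0, 3001)   # nanotip-induced loss
gamma2 = gamma2_0 + gamma_tip              # total loss of uR2

Gamma = (gamma1 + gamma2) / 4.0            # mean loss
beta = (gamma2 - gamma1) / 4.0             # loss contrast

# Complex eigenvalues of the one-photon subspace: lambda_pm = w_c - i*Gamma +/- sqrt(J^2 - beta^2)
root = np.sqrt((J**2 - beta**2).astype(complex))
lam_plus = omega_c - 1j * Gamma + root
lam_minus = omega_c - 1j * Gamma - root

# The HEP sits where the square-root argument changes sign (J = |beta|).
idx = np.argmin(np.abs(J**2 - beta**2))
print(f"HEP near gamma_tip/gamma_1 = {gamma_tip[idx]:.2f}")
print("splitting of real parts at that point:", np.real(lam_plus[idx] - lam_minus[idx]))
```

With these assumed numbers the HEP appears near γ_tip/γ_1 ≈ 8.9, the same location quoted for the EP in Fig. 1(c).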
As one would expect in conventional systems, the additional loss γ_tip decreases the mean photon number N_1 in µR1 towards zero. However, N_1 recovers with more loss in the vicinity of the classical critical point (CP_c), i.e., the value of γ_tip at which N_1 is minimal [Fig. 1(c)]. The quantum statistics of this light can be recognized from the second-order correlation function g^(2)_1(0). The condition g^(2)_1(0) < 1 [g^(2)_1(0) > 1] characterizes sub-Poissonian (super-Poissonian) statistics or photon antibunching (bunching), and g^(2)_1(0) → 0 indicates a full single-PB. Adding loss annihilates the single-PB, and converts the light from antibunching into bunching. We refer to the values of γ_tip for which g^(2)_1(0) = 1 as quantum critical points (CP_q↓,↑). Remarkably, in the vicinity of CP_q↑, the sub-Poissonian light recovers despite the increasing loss, with the revival of single-PB at an EP. More intriguingly, when N_1 recovers after CP_c, the quantum statistics of the light can be tuned between bunching and antibunching by increasing loss below or beyond CP_q↑, respectively. This loss-induced quantum revival is fundamentally different from the classical revival of transmission rates [5][6][7].
To study this loss-induced quantum revival, we consider the Hamiltonian Ĥ_r in a frame rotating with the driving frequency ω_l,

Ĥ_r = ∆(â†_1 â_1 + â†_2 â_2) + χ â†_1 â†_1 â_1 â_1 + J(â†_1 â_2 + â†_2 â_1) + Ω(â†_1 + â_1),

where ∆ = ω_c − ω_l is the optical detuning and Ω = [γ_ex P_in/(ħω_l)]^{1/2} is the driving amplitude with power P_in on µR1. The optical decay can be included in the effective Hamiltonian Ĥ_eff = Ĥ_r − i Σ_{j=1,2} (γ_j/2) â†_j â_j [81]. The probabilities of finding a given number of photons in each resonator can then be obtained analytically in the weak-driving regime (see the Supplemental Material).

FIG. 2. (a) The mean photon number N_1 and (b) the quantum correlation g^(2)_1(0) versus γ_tip. The markers (squares, circles) and black lines are analytical and numerical solutions, respectively. (c) The cavity excitation spectrum S_1(∆) and the eigenfrequency spectra ω_{1,2} with linewidths κ_{1,2} show the origin of the quantum suppression and revival. (d) The revived single-PB can be understood from the mode coalescence in S_1(∆), and (e) from the photon excitation pathway beyond the EP, where ω^±_1 coalesce at ω_c, and the dashed arrows are forbidden excitations. The parameters are the same as those in Fig. 1.
An excellent agreement between our analytical results and the exact numerical results is seen in Fig. 2. Figure 2(a) shows the loss-induced classical suppression and revival of the intracavity photon number N_1. Below CP_c, at γ_tip/γ_1 = 5.3, N_1 is decreased to 0.003 by increasing the additional loss. When the loss exceeds CP_c, N_1 is revived due to the EP-induced mode coalescence, resulting in a predominant mode localized in µR1. This counterintuitive classical effect has been used for realizing loss-induced revival of lasing [7].
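To make the connection between the model and the plotted quantities concrete, the sketch below computes N_1 and g^(2)_1(0) for one value of γ_tip by solving the Lindblad master equation in a truncated Fock space with QuTiP. All parameter values here are assumptions chosen only to sit in a qualitatively similar weak-driving, strong-Kerr regime; they are not the values used for Figs. 1 and 2.

```python
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

# Truncated two-mode Fock space (weak driving, so a small cutoff suffices).
N_cut = 5
a1 = tensor(destroy(N_cut), qeye(N_cut))   # Kerr resonator uR1
a2 = tensor(qeye(N_cut), destroy(N_cut))   # linear resonator uR2

# Assumed parameters in units of gamma_1 (illustrative only).
gamma1, gamma2_0 = 1.0, 0.1
J, chi, Omega, Delta = 2.0, 10.0, 0.05, 0.0
gamma_tip = 6.0
gamma2 = gamma2_0 + gamma_tip

# Rotating-frame Hamiltonian: detuning, Kerr term, coupling, and drive on uR1.
H = (Delta * (a1.dag() * a1 + a2.dag() * a2)
     + chi * a1.dag() * a1.dag() * a1 * a1
     + J * (a1.dag() * a2 + a2.dag() * a1)
     + Omega * (a1 + a1.dag()))

# Losses enter as collapse operators in the Lindblad master equation.
c_ops = [np.sqrt(gamma1) * a1, np.sqrt(gamma2) * a2]

rho_ss = steadystate(H, c_ops)
n1 = expect(a1.dag() * a1, rho_ss)
g2 = expect(a1.dag() * a1.dag() * a1 * a1, rho_ss) / n1**2
print(f"N1 = {n1:.4g}, g2(0) = {g2:.3f}")
```

Sweeping γ_tip and repeating the calculation traces out curves analogous to Figs. 2(a) and 2(b).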
In addition, |ψ^0_2⟩ and |ψ^+_1⟩ are dominated by the states |2, 0⟩ and |1, 0⟩, respectively, when the system operates at or beyond the EP. As shown in Fig. 2(e), when the light is resonantly coupled to the transition |0, 0⟩ → |1, 0⟩, the transition from |1, 0⟩ to |2, 0⟩ is detuned by 2χ, so the single-PB is revived because of the anharmonic energy-level spacing induced by the Kerr nonlinearity. We conclude that the interplay of excitation-spectrum mode coalescence and the two-photon resonance in the nonlinear eigenfrequency spectrum leads to the loss-induced quantum revival of single-PB. This underlying principle is different from that of loss-induced entanglement [89], in which a quantum effect is realized through conditional dynamics.

Figure 3 shows that different types of quantum statistics can be tuned by increasing loss for the light revived after CP_c. Whereas single-PB features two-photon antibunching, two-PB features three-photon antibunching together with two-photon bunching, which indicates that the absorption of two photons can suppress the absorption of additional photons [30]. This two-PB effect can be characterized by the conditions g^(2)_1(0) > 1 and g^(3)_1(0) < 1. When the light recovers after CP_c, a two-PB emerges with g^(2)_1(0) ≈ 1.12 at γ_tip/γ_1 = 6 [Fig. 3(a)]. Adding γ_tip beyond CP_q↑ leads to a single-PB at the EP. These results can also be confirmed by comparing the photon-number distribution P_m with the corresponding Poisson distribution [Fig. 3(b)]. We find that P_2 is enhanced while P_{m>2} are suppressed at γ_tip/γ_1 = 6, which is in sharp contrast to the case at the EP. With such a device, switching between two-PB and single-PB can be achieved by increasing loss below or beyond CP_q↑. As far as we know, this loss-induced quantum switching between different types of nonclassical statistics has not been revealed in previous works on loss-induced classical revival [5][6][7].
In summary, we have shown how to realize a loss-induced quantum revival of single-PB in a compound nonlinear system. In contrast to single-PB effects in conventional systems, we find that adding loss first annihilates single-PB, while further loss then helps to recover it in the quantum revival regime of light. This counterintuitive quantum effect happens because of the interplay of two-photon resonance and excitation-spectrum mode coalescence. More interestingly, different types of quantum correlations are exhibited in the revived light, which can be well controlled by tuning loss. These results, shedding light on the marriage of non-Hermitian physics and quantum optics at the single-photon level, open up a way to reverse the effect of loss for steering quantum effects in various systems, such as plasmonics, metamaterials, and topological photonics. Our scheme no longer relies on destructive interference between different modes [8,52] or on additional gain media [90,91], which may enable novel loss-assisted quantum devices for applications in quantum engineering or metrology.
Here, we present more technical details on the intracavity field intensities and quantum correlation functions (Sec. S1), as well as the cavity excitation spectrum and the eigensystem (Sec. S2).
S1. INTRACAVITY FIELD INTENSITIES AND QUANTUM CORRELATION FUNCTIONS
We consider an optical-molecule system consisting of a Kerr resonator (µR1) directly coupled to a linear resonator (µR2). In a frame rotating with the driving frequency ω_l, this system can be described by the Hamiltonian

Ĥ_r = ∆(â†_1 â_1 + â†_2 â_2) + χ â†_1 â†_1 â_1 â_1 + J(â†_1 â_2 + â†_2 â_1) + Ω(â†_1 + â_1),

where ∆ = ω_c − ω_l is the optical detuning, â_{j=1,2} are the intracavity modes with resonance frequency ω_c, J is the coupling strength between the two resonators, and χ = 3ħω_c² χ^(3)/(4ε_0 ε_r² V_eff) is the Kerr parameter with vacuum (relative) permittivity ε_0 (ε_r), nonlinear susceptibility χ^(3), and mode volume V_eff. The driving amplitude is given by Ω = [γ_ex P_in/(ħω_l)]^{1/2} with the power P_in on µR1, and γ_ex is the loss induced by the coupling between the resonator and the fiber taper.
The optical decay can be included in the effective Hamiltonian Ĥ_eff = Ĥ_r − i Σ_{j=1,2} (γ_j/2) â†_j â_j, where γ_1 = γ_{1,0} + γ_ex (γ_2 = γ_{2,0} + γ_tip) is the total loss of µR1 (µR2), γ_{1,0} and γ_{2,0} are the intrinsic losses of the two resonators, and the additional loss γ_tip is induced on µR2 by a chromium (Cr) coated silica-nanofiber tip. Under the weak-driving condition (Ω ≪ γ_1), the Hilbert space can be restricted to a subspace with few photons. In the subspace with up to N = m + n = 3 excitations, the general state of the system can be expressed as

|ψ(t)⟩ = Σ_{N=0}^{3} Σ_{m=0}^{N} C_{m,N−m}(t) |m, N − m⟩,

with probability amplitudes C_{m,N−m}, which can be obtained by solving the Schrödinger equation i ∂_t |ψ(t)⟩ = Ĥ_eff |ψ(t)⟩. When a weak driving field is applied to the cavity, it excites only a few photons, so we can approximate the order of magnitude of the probability amplitudes as C_{m,N−m} ∼ (Ω/γ_1)^N. By using a perturbation method and discarding higher-order terms in each equation for lower-order variables, we obtain the following equations of motion for the lowest-order probability amplitudes:

iĊ_00(t) = 0,
iĊ_01(t) = ∆_2 C_01(t) + J C_10(t),
iĊ_10(t) = ∆_1 C_10(t) + J C_01(t) + Ω C_00(t),

together with analogous equations for the two- and three-excitation amplitudes, where ∆_1 = ∆ − iγ_1/2, ∆_2 = ∆ − iγ_2/2, ∆_3 = ∆_1 + χ, ∆_4 = ∆_1 + 2χ, ∆_5 = 2∆_3 + ∆_2, and ∆_6 = ∆_1 + 2∆_2. For initially empty resonators, i.e., when the initial state of the system is the vacuum state |00⟩, the initial condition reads C_00(0) = 1. By setting Ċ_mn(t) = 0, we obtain the steady-state solutions for the amplitudes C_mn. The probabilities of finding m photons in µR1 and n photons in µR2 are given by P_mn = |C_mn|². The mean photon numbers in µR1 and µR2, denoted N_1 and N_2, respectively, follow from this probability distribution as N_1 = Σ_{m,n} m P_mn and N_2 = Σ_{m,n} n P_mn.

FIG. S2. g^(2)_1(0) versus the optical detuning ∆ for different γ_tip. The analytical results (colored markers) agree well with the numerical results (black solid curves). The parameters are the same as those in Fig. S1. | 2022-02-08T04:00:24.652Z | 2022-02-05T00:00:00.000 | {
"year": 2022,
"sha1": "42fad1c8dcb1b08e39be1163cf2b4837afc8b3e3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "42fad1c8dcb1b08e39be1163cf2b4837afc8b3e3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
270281706 | pes2o/s2orc | v3-fos-license | Age is not just a number: How incorrect ageing impacts close‐kin mark‐recapture estimates of population size
Abstract Population size is a key parameter for the conservation of animal species. Close‐kin mark‐recapture (CKMR) relies on the observed frequency and type of kinship among individuals sampled from the population to estimate population size. Knowledge of the age of the individuals, or a surrogate thereof, is essential for inference with acceptable precision. One common approach, particularly in fish studies, is to measure animal length and infer age using an assumed age‐length relationship (a ‘growth curve’). We used simulation to test the effect of misspecifying the length measurement error and the growth curve on population size estimation. Simulated populations represented two fictional shark species, one with a relatively simple life history and the other with a more complex life history based on the grey reef shark (Carcharhinus amblyrhynchos). We estimated sex‐specific adult abundance, which we assumed to be constant in time. We observed small median biases in these estimates ranging from 1.35% to 2.79% when specifying the correct measurement error standard deviation and growth curve. CI coverage was adequate whenever the growth curve was correctly specified. Introducing error via misspecified growth curves resulted in changes in the magnitude of the estimated adult population, where underestimating age negatively biased the abundance estimates. Over‐ and underestimating the standard deviation of length measurement error did not introduce a bias and had negligible effect on the variance in the estimates. Our findings show that assuming an incorrect standard deviation of length measurement error has little effect on estimation, but having an accurate growth curve is crucial for CKMR whenever ageing is based on length measurements. If ageing could be biased, researchers should be cautious when interpreting CKMR results and consider the potential biases arising from inaccurate age inference.
| INTRODUCTION
Close-kin mark-recapture (CKMR) is a method for estimating population size and other key parameters such as fecundity (and population growth and survival rates) using data on the relatedness of individuals sampled from the population (Bravington, Skaug, & Anderson, 2016; Skaug, 2001). The key rationale is that small populations will tend to contain a higher proportion of closely related individuals than large populations.
One of the main advantages of CKMR over capture-recapture (Otis et al., 1978) and its extensions such as spatial capture-recapture (Borchers & Efford, 2008) is that it can be applied in cases when sampling is necessarily lethal, such as fisheries, and when physical recaptures are rare or impossible, where alternative metrics are often relative (e.g., catch-per-unit-effort) and potentially unreliable (Bravington, Grewe, & Davies, 2016; Casas & Saborido-Rey, 2023). This is because CKMR does not require the recapturing of individuals, but rather their genetic markers. Offspring share genetic information with their parents (hence 'kin'), thus they 'mark' their parents when born; through modern genetics we can compare sampled individuals with one another to see if these marks are 'recaptured'. So far, CKMR has been developed for parent-offspring pairs (POPs; e.g., Bravington, Grewe, & Davies, 2016; Ruzzante et al., 2019; Trenkel et al., 2022), half-sibling pairs (HSPs; e.g., Hillary et al., 2018; Bravington et al., 2019; Patterson et al., 2022), and the combination of both (e.g., Bradford et al., 2018). The rise in popularity of the method has become clear from an increase in published studies involving CKMR, although the total number of applications is still small (Delaval et al., 2023). Most of the applications up to this point involved marine or aquatic species. Several salmonids have been studied (Prystupa et al., 2021; Ruzzante et al., 2019; Wacker et al., 2021), as well as large pelagic species such as southern bluefin tuna (Thunnus maccoyii; Bravington, Grewe, & Davies, 2016) and Pacific bluefin tuna (Thunnus orientalis; Tsukahara et al., 2023), and a variety of elasmobranchs such as white sharks (Carcharodon carcharias; Hillary et al., 2018), lemon sharks (Negaprion brevirostris; Swenson et al., 2024), thornback rays (Raja clavata; Trenkel et al., 2022), blue skates (Dipturus batis; Delaval et al., 2023), and grey nurse sharks (Carcharias taurus; Bradford et al., 2018). The Christmas Island flying-fox (Pteropus natalis; Lloyd-Jones et al., 2023) and the yellow fever mosquito (Aedes aegypti; Sharma et al., 2022) were the only terrestrial species that we could identify in published CKMR studies to date.
CKMR with POPs can estimate the size of the entire adult population, whereas with HSPs only the breeding adult population is estimated; for example, post-reproductive adults are 'invisible' to the method (Bradford et al., 2018). Here, we focus only on POPs. For any comparison between two individuals, the probability that a potential offspring truly is the offspring of the parent is inversely related to the number of mature animals alive in the birth year of the offspring.
Probabilities of finding a kin pair are expressed as a function of the expected relative reproductive output (ERRO) of the parent in the year that the offspring was conceived. This approach is parent-centric, as it starts from the point that the parent is sampled and then formulates a probability for a PO relationship (an alternative, offspring-centric formulation was proposed by Skaug (2017)).
In the simplest scenario, the probability of any adult being the parent of a juvenile reduces to two over the number of potential parents, assuming a 50:50 sex ratio; in reality, this probability is often more complicated, for example, when reproductive output is related to age, or when there is stock structure or population trend.To use relatedness to estimate adult population abundance with acceptable precision, it is therefore essential to accurately age the studied animals because birth year is derived from their age.
Accurate ageing can be challenging: for example, epigenetic ageing requires calibration using individuals of known age (De Paoli-Iseppi et al., 2017; Polanowski et al., 2014), which is not always possible; ageing via otoliths, which are calcium carbonate structures in the inner ear, can be relatively accurate (Campana, 2001) but requires lethal sampling and is only possible for animals that have otoliths (and sharks are not among those); and ageing by counting the dental or cementum growth layer groups in teeth is not necessarily lethal and is commonly used for (marine) mammals (Hohn, 2009, Chapter 9), but cannot be applied to fish species. Sharks can be aged from their vertebrae, but this is a lethal procedure and can be biased in various ways or even unusable depending on the species (Burke et al., 2020).
Alternatively, length can be used to infer age through growth curves, which seems appealing as length is often recorded during sampling.
Accurate estimates for growth curves of the studied species are not always available, however, and age as a function of length (age-at-length) can vary substantially between populations of the same species (e.g., Bradley, Conklin, Papastamatiou, McCauley, Pollock, Kendall, et al., 2017). Moreover, length measurements often involve measurement error. Swenson et al. (2024) studied the effects of ageing error from incorrect length measurement through simulation and found that incorrect ageing can induce substantial bias in CKMR parameter estimates. Various degrees of error were added to the true lengths of individuals, after which these were converted to ages using a von Bertalanffy growth curve. These 'incorrect' ages were then used as inputs for the CKMR model without explicitly modelling the length measurement error.
Simulation is an important tool to assess the robustness of statistical methods to violations of model assumptions (DiRenzo et al., 2023) and their performance more generally (Morris et al., 2019). Through simulation, Conn et al. (2020) studied the effects of unmodelled spatial heterogeneity on CKMR estimation and found that this can induce a negative bias in the abundance estimates; Sévêque et al. (2024) found that fitting overly simplistic CKMR models (that do not account for complex life-history traits or selective sampling) can cause biases in survival and abundance estimates in non-trivial directions; and Waples and Feutry (2022) showed, among other things, that age-specific vital rates can bias abundance estimates from CKMR. We follow an agent-based simulation approach similar to Swenson et al. (2024) to explore the effects of incorrect ageing on the CKMR adult abundance estimator. Unlike Swenson et al. (2024), our model does not assume length (and thus age) to be perfectly known; rather, we explicitly account for the measurement error on lengths.
It is often important for demographic modelling to account for the uncertainty in the age estimates, especially when sampling probabilities depend on the age of individuals (i.e., when there is 'selectivity'), which is fundamental to fishing (Vasilakopoulos et al., 2020). Correctly accounting for ageing error is therefore still an active part of fisheries research (e.g., Hulson & Williams, 2024). Fournier and Archibald (1982) showed how ageing error in catch-at-age data can be accounted for as long as the ageing error is known. Later, Richards et al. (1992) developed statistical methodology to account for ageing error when the error is unknown, using multiple readings of fish. We are unaware of any CKMR studies in which ageing error is directly modelled and estimated. Bravington et al. (2019) accounted for the uncertainty in ageing by first fitting a known-age CKMR model to the data and then refitting the model ten times, resampling ages from the age-at-length curve each time. In our simulation, ageing error is introduced in two ways: (i) through misspecified growth curves, and (ii) through incorrect length measurements, that is, measurement error. In reality, error could also be (and almost surely is) introduced within a population through natural variation in length-at-age, for example, as a function of genetic and environmental factors. We assume that all individuals follow the growth curve perfectly; however, one could readily interpret the length measurement error as the joint error of length measurement and length-at-age variation, or even solely as length-at-age variation if that is more appropriate for a particular case study. We assume ageing error from incorrect length measurements to be known and explicitly account for it in our model (Bravington, Skaug, & Anderson, 2016, Section 3.1.4).
The research presented in this manuscript is centred around two fictional shark species that are based on a grey reef shark population (Carcharhinus amblyrhynchos) at Palmyra Atoll, in the central Pacific Ocean (Bradley, Conklin, Papastamatiou, McCauley, Pollock, Pollock, et al., 2017;Papastamatiou et al., 2018).This motivating case study consists of genetic samples that were collected from this population in 2013 and 2014.One fictional species is a simplification of the real species (hereafter referred to as the 'simple species') and was included to test the basic performance of the model.The other fictional species has more realistic life history traits (hereafter referred to as the 'complex species') and was included to more closely match a real empirical study.We also compare the results for both species.
It is paramount to first explore the feasibility of CKMR, for example through simulation, before committing the resources and time required for the correct collection and genetic analysis of the samples.
Moreover, the findings will be relevant to other CKMR studies when age is uncertain.
| MATERIAL S AND ME THODS
We first present our setup of the simulations for the two fictional shark species.Simulated time series are 100 years long, with sampling occurring in the final 2 years (mimicking the 2 years of sampling in the Palmyra Atoll case study).Following that, we present the POP-based CKMR models for our two species using these 2 years of data, followed by our estimation method and performance diagnostics.We assume that kinship relationships are known with certainty; in real-life situations, one often needs to account for uncertainty in this process (Bravington, Skaug, & Anderson, 2016).All variables and quantities used in this study are summarised in Table 1.
Code for the simulation and fitting of models was written in R 4.3.2 and C++14, where the latter was linked to R through Rcpp 1.0.12 (Eddelbuettel, 2013; R Core Team, 2023).
Two different 'species' were simulated separately, one with simple life history characteristics, and one with a more complex life history.For each simulation, sampling in the last 2 years was random, and mating occurred at random as well, that is, mothers and fathers were matched at random, where all non-gestating mothers mated and mature males could father multiple litters in the same mating cycle.Females of the simple species always produced two offspring, whereas the litter size for the complex species ranged from 3 to 6, with equal probability.Females of the simple species reproduced every year as gestation was negligible; females of the complex species gestated for a year and therefore reproduced every other year.
Newborns had age zero and sex was assigned at random with an expected 50:50 sex ratio. The survival process was Bernoulli, where the annual survival probability was the same for all ages and sexes, but different between the two species and empirically set at a level that resulted in the yearly population growth rate equalling approximately one, that is, no growth. Natural mortality was the only source of mortality we considered, and all individuals that reached the maximum age perished at the next survival event, that is, animals could go through at most a_max + 1 yearly cycles. The maximum age for sharks of the simple species was 19 years and 63 years for the complex species, where the latter matches the results from Bradley, Conklin, Papastamatiou, McCauley, Pollock, Kendall, et al. (2017). For a given species, all individuals of the same sex matured at the same age: males and females in the simple species matured at 10 years old, whereas in the complex species males matured at 17 years and females matured at 19 years of age. The length of an animal was the same for all individuals of a certain age, irrespective of sex and species. After the initialisation of a population in year zero, the simulation looped through four distinct events: a birthing/mating event, a sampling event (only in the final 2 years of the simulation), a survival event, and an ageing event (Figure 1).
FIGURE 1 Flowchart representing the different stages of the life cycle in the simulation. A population is initialised at the start of a simulation; following that, it loops through stages 1-4 every year the simulation runs.

For both species, we ran the simulations for 100 years, to ensure that all animals of the initial populations would have died off. Every simulation started with 8500 individuals, to stay close to the population size estimate of 8433 by Bradley, Conklin, Papastamatiou, McCauley, Pollock, Pollock, et al. (2017), with an expected 50:50 sex ratio. At each sampling event, 375 individuals were randomly and non-lethally sampled, where re-captures were possible between sampling events. This resulted in at most 750 unique sampled individuals across the 2 years of sampling, which is of a similar scale as the number of genetic samples available in the motivating case study. All 750 samples were retained for analysis as there was no particular reason to exclude recaptures, unlike, for example, Hillary et al. (2018), where duplicate samples were excluded from the analysis to avoid them aliasing as half-sibling pairs. For every sampled individual, the age, year of capture, and sex were recorded; the true length was derived through a von Bertalanffy growth function (VBGF; von Bertalanffy, 1938; Francis, 1988), specified as

l(a) = l_∞ [1 − e^{−k(a − a_0)}],  (1)

where l_∞ = 163 cm is the asymptotic length, a_0 = −8.27 the theoretical age at length zero, and k = 0.0554 denotes the growth coefficient.
These values match the estimates of the best model in table 2 of Bradley, Conklin, Papastamatiou, McCauley, Pollock, Kendall, et al. (2017).
Gaussian noise was added to reflect (symmetric) length measurement error with variance σ_l² = 2.89², with over- and underestimates being equally likely, after which this 'observed' length was rounded to the nearest integer. Based on these parameters, we generated 1000 different realisations of a 100-year-long population history for each species, using functions based on those from the fishSim package (Baylis, 2022).
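To illustrate this observation model, the sketch below generates an 'observed' length for an individual of known age using the stated VBGF parameters, Gaussian measurement error and rounding. It is written in Python purely for illustration (the study's own code is in R and C++), and the function names are ours, not those of the fishSim package.

```python
import numpy as np

rng = np.random.default_rng(1)

# VBGF parameters from Equation (1)
L_INF, K, A0 = 163.0, 0.0554, -8.27
SIGMA_L = 2.89          # SD of length measurement error (cm)

def true_length(age):
    """Deterministic length-at-age from the von Bertalanffy growth function."""
    return L_INF * (1.0 - np.exp(-K * (age - A0)))

def observed_length(age):
    """True length plus symmetric Gaussian measurement error, rounded to whole cm."""
    return int(round(true_length(age) + rng.normal(0.0, SIGMA_L)))

# Example: a 10-year-old shark
age = 10
print(true_length(age))      # about 103.8 cm
print(observed_length(age))  # e.g. 101, 104, ... depending on the draw
```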
| POP-based estimator
We developed estimators for both populations based only on POPs.
Any other possible genetic relationship (such as half-sibling or self-capture) was categorised as 'not a POP'. CKMR models are generally fitted through a likelihood (function), which is constructed from the joint distribution of all pairwise comparisons between the samples, that is, the product of approximately n(n − 1)/2 Bernoulli trials for a POP, where n is the number of samples. We only consider pairwise comparisons and treat these as independent, whereas they clearly are not: an offspring can only have one parent of each sex. Because we ignore these higher dependencies, our likelihood is not a true likelihood but rather a pseudo-likelihood. Working with such a likelihood should not affect the point estimates but could affect other properties of likelihood-based estimation, such as variance estimation, although this effect is likely minor or even negligible provided that a small proportion of the total population is sampled, that is, n ≪ N (Bravington, Skaug, & Anderson, 2016; Skaug, 2001). Because length is measured with error and age is inferred from length, age is uncertain and hence we cannot assume directionality in the comparison, that is, who is the parent and who is the offspring. Therefore, for any comparison for individuals i and j, we test both directions (parent-offspring and offspring-parent), denoted PO/OP. In practice, we tend to optimise the logarithm of the pseudo-likelihood, the so-called 'pseudo-log-likelihood', as this is generally easier to work with and numerically more stable. Our pseudo-log-likelihood is given by

ℓ(θ; x) = Σ_{i<j} [ κ_ij log Pr(K_ij = PO/OP | z_i, z_j, θ) + (1 − κ_ij) log{1 − Pr(K_ij = PO/OP | z_i, z_j, θ)} ],  (2)

where θ is the parameter vector, x denotes the observed data, K_ij is the kinship between i and j, Pr is the probability function, κ_ij is an indicator that is 1 if the kinship between i and j is observed to be PO/OP and 0 otherwise, and z denotes the information recorded about a captured individual, such as length. Age is required to calculate the probability of observing kinship, and therefore we sum over all potential ages for i and j and multiply by the probability density of that age given the measured length, f(a | l*):

Pr(K_ij = PO/OP | z_i, z_j, θ) = Σ_{a_i} Σ_{a_j} Pr(K_ij = PO/OP | a_i, a_j, z_i, z_j, θ) f(a_i | l*_i) f(a_j | l*_j).  (3)

We will now specify the two main elements of Equation (3), namely the probability of observing the PO/OP kinship, and the probability density of age given length.
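A compact sketch of how such a pairwise pseudo-log-likelihood can be assembled is shown below. It is a simplified illustration in Python, not the authors' R/C++ implementation: `prob_pop_given_ages` stands in for the kinship probabilities of Equations (4)-(7), and `age_given_length` for the density f(a | l*) derived later; both names are ours.

```python
import numpy as np

def pseudo_log_likelihood(pairs, kappa, max_age, prob_pop_given_ages, age_given_length):
    """Sum of Bernoulli log-probabilities over all pairwise comparisons.

    pairs : list of (record_i, record_j) tuples, each record a dict with the data z
    kappa : array of 0/1 indicators, 1 if the pair was observed to be PO/OP
    """
    ll = 0.0
    ages = np.arange(max_age + 1)
    for (zi, zj), k in zip(pairs, kappa):
        # Marginalise the kinship probability over the uncertain ages (Equation 3).
        p = 0.0
        for ai in ages:
            for aj in ages:
                p += (prob_pop_given_ages(ai, aj, zi, zj)
                      * age_given_length(ai, zi["length"])
                      * age_given_length(aj, zj["length"]))
        p = min(max(p, 1e-12), 1 - 1e-12)   # guard against log(0)
        ll += k * np.log(p) + (1 - k) * np.log(1.0 - p)
    return ll
```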
| Probability of kinship
We modelled the female and male adult abundance separately; thus, for every PO/OP comparison between two individuals we had to consider, conditional on the sexes, both combinations of which individual is older and thus the potential parent. We will first present the formulae for the simple species, followed by those for the complex species.
The probability of any comparison between i and j being PO/OP is the same as the sum of testing for PO and OP separately, thus we only present the PO probabilities. For the simple species these probabilities are given by Equation (4) for the females and Equation (5) for the males. Here, 𝕀(·) is an indicator function that returns 1 if its argument is true and 0 otherwise, MO and FO refer to mother-offspring and father-offspring, respectively, y denotes the birth year, α the age of maturity, N_s,t the total adult abundance of sex s in year t, c the year of capture, and φ_i(t_1, t_2) the survival function for individual i from t_1 to t_2. As survival was assumed constant, φ_i(t_1, t_2) was defined as φ^{t_2 − t_1}.
Even though females could only have one litter whereas males could father multiple litters, their ERROs were formulated similarly, that is, the reciprocal of the total mature abundance of their respective sexes.
For the complex species, the probability of an MO pair is given by Equation (6), and the probability of an FO pair by Equation (7). The two key differences for the complex species relative to the simple one were that (1) a potential father only needed to have been alive the year before the birth of the offspring, whereas a potential mother needed to have survived until birthing, and (2) the potential parents needed to have matured at least 1 year before the birth year. To illustrate this, imagine that we are comparing two individuals from the complex species, where the parent is female, and we know the individuals' ages. The offspring was caught in year 50 at age 3, and thus born in year 47. The potential parent was female, caught in year 45, and would have needed to survive for at least 2 years in order to be a potential parent; she was 36 years old at the time of capture, and thus born in year 9. The ERRO for this parent in the year of mating, that is, the year before the birth year of j, is the reciprocal of the number of females alive in that year who also survived 1 year of gestation. Therefore, the probability that i is the mother of j is this ERRO multiplied by the probability that i survived from her capture in year 45 to the birth year of j, φ_i(45, 47). Every comparison, given a_i and a_j, contains a signal about the adult population in a specific year. We assumed a constant population size, and thus N_s,t = N_s. We also developed and tested a model that included sex-specific growth parameters. This model was internally inconsistent and therefore not included in the main body of this manuscript for any formal inference. However, we did include the derivations and some results in Appendix C.
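The sketch below shows one way the mother-offspring probability for a single pair could be coded in the constant-abundance case. It is only an illustration of the logic described above (maturity check, survival of the potential mother from capture to the offspring's birth, and the reciprocal of adult female abundance); it is not a transcription of Equations (4)-(7), and the gestation handling for the complex species is deliberately omitted.

```python
def prob_mother_offspring(birth_parent, capture_parent, birth_off,
                          n_females, phi, alpha):
    """P(individual i is the mother of offspring j) under constant adult abundance.

    birth_parent   : birth year of the potential mother
    capture_parent : year the potential mother was sampled
    birth_off      : birth year of the potential offspring
    n_females      : adult female abundance (assumed constant in time)
    phi            : annual survival probability
    alpha          : age at maturity (years)
    """
    # The mother must have matured at least one year before the offspring's birth year.
    if birth_off - birth_parent <= alpha:
        return 0.0
    # If she was sampled before the offspring's birth, she must have survived until then.
    years_to_survive = max(birth_off - capture_parent, 0)
    survival = phi ** years_to_survive
    # Expected relative reproductive output: one mature female out of n_females.
    return survival / n_females

# Example mirroring the text: offspring born in year 47, mother caught in year 45
# (n_females, phi and alpha are placeholder values).
print(prob_mother_offspring(birth_parent=9, capture_parent=45, birth_off=47,
                            n_females=500, phi=0.9, alpha=19))
```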
| Probability density of age given length
We had an assumed true length-at-age curve l(a) (Equation (1)) and we knew that there was measurement error on lengths. Denoting the measured length as l*, we derived the probability density f(a | l*) using Bayes' rule as follows:

f(a | l*) = f(l* | a) f(a) / f(l*).

Measured length given age, l* | a, was assumed to follow a discretised Normal distribution, as lengths were rounded to the nearest centimetre. We followed Roy (2003) in defining this distribution as

f(l* | a) = Φ( (l* + 0.5 − l(a)) / σ_l ) − Φ( (l* − 0.5 − l(a)) / σ_l ),

where Φ denotes the standard normal cumulative distribution function, the expectation is given by Equation (1), and σ_l captures the standard deviation of length measurement error. As the sampling probability in the simulation was unrelated to age, the age distribution of sampled individuals was the same as the age distribution in the whole population, and we did not need to distinguish between the two.
We assumed that the population had a stable age distribution with no growth, which meant that the distribution of ages, had we not imposed a maximum age, would have been geometric with shape parameter equal to the mortality rate, which is 1 − φ. Acknowledging that there was a maximum age, a_max, we needed to condition on the age being at most this age, and thus

f(a) = (1 − φ) φ^a / (1 − φ^{a_max + 1}),  a = 0, 1, …, a_max,

where the numerator and denominator are the geometric probability mass and cumulative distribution functions, respectively. Note here that we used the definition of a geometrically distributed variable as the number of failures (survival) until a success (death) occurs. Finally, the probability density function of measured length became

f(l*) = Σ_{a=0}^{a_max} f(l* | a) f(a).
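The following Python sketch puts these pieces together and evaluates f(a | l*) for an observed length. It is an illustrative re-implementation under the stated assumptions (VBGF mean, discretised Normal measurement error, truncated geometric age prior); parameter values such as the survival probability are placeholders.

```python
import numpy as np
from scipy.stats import norm

L_INF, K, A0 = 163.0, 0.0554, -8.27   # VBGF parameters (Equation 1)
SIGMA_L = 2.89                         # SD of length measurement error (cm)
PHI, A_MAX = 0.85, 63                  # placeholder survival and maximum age

def vbgf(age):
    return L_INF * (1.0 - np.exp(-K * (age - A0)))

def f_length_given_age(l_obs, age):
    """Discretised Normal: probability of the rounded length l_obs given age."""
    mu = vbgf(age)
    return norm.cdf((l_obs + 0.5 - mu) / SIGMA_L) - norm.cdf((l_obs - 0.5 - mu) / SIGMA_L)

def f_age():
    """Truncated geometric age distribution (stable age structure, no growth)."""
    ages = np.arange(A_MAX + 1)
    pmf = (1.0 - PHI) * PHI**ages
    return pmf / (1.0 - PHI**(A_MAX + 1))

def f_age_given_length(l_obs):
    """Posterior over age given an observed (rounded) length, via Bayes' rule."""
    prior = f_age()
    like = np.array([f_length_given_age(l_obs, a) for a in range(A_MAX + 1)])
    post = like * prior
    return post / post.sum()

post = f_age_given_length(120)
print("most probable age:", int(np.argmax(post)))
```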
| Fitting
The parameters in the CKMR model were estimated from the sampled data by maximising the pseudo-log-likelihood. We considered five different values for the assumed standard deviation of length measurement error: the correct value, a 33% and a 67% underestimate, and a 33% and a 67% overestimate. We also considered five different growth curves: the correct one, two that were shifted upwards by 5% and 10%, and two that were shifted downwards by 5% and 10%. These growth curve shifts were aimed to represent real variation in growth curves between populations of the same shark species (Bradley, Conklin, Papastamatiou, McCauley, Pollock, Kendall, et al., 2017). This resulted in a total of 25 combinations or 'scenarios'. We labelled these scenarios using the format 'ME±XX:GC±YY', where ME refers to the measurement error, XX denotes the percentage over- or underestimate, GC stands for growth curve, and YY denotes the percentage of up- or downwards shifting; for example, the scenario with a 33% overestimated standard deviation of length measurement error and a 5% downshifted growth curve had the label ME+33:GC-5. These measurement errors and growth curves are visualised in more detail in Figure 2.
Considering 25 scenarios for every simulation resulted in the fitting of 50,000 models in total. To keep computation time to a minimum, we implemented most of the fitting process in C++.
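A minimal sketch of the fitting step, assuming the pseudo-log-likelihood function from the earlier sketch, is given below. Abundance is optimised on the log scale (which also matches the log-normal confidence intervals used later); the use of scipy rather than R's optimisers is purely illustrative, and the starting value is a placeholder.

```python
import numpy as np
from scipy.optimize import minimize

def fit_abundance(neg_ll_of_logN, start=np.log(800.0)):
    """Maximise the pseudo-log-likelihood over log adult abundance.

    neg_ll_of_logN : callable returning the negative pseudo-log-likelihood
                     for a given log(N); in practice this wraps the pairwise
                     comparisons for one sex.
    """
    res = minimize(neg_ll_of_logN, x0=np.array([start]), method="BFGS")
    log_n_hat = res.x[0]
    # The inverse Hessian from BFGS approximates the variance of log(N_hat).
    se_log_n = float(np.sqrt(res.hess_inv[0, 0]))
    return np.exp(log_n_hat), se_log_n

# Toy usage with a made-up quadratic surrogate for the negative log-likelihood:
toy = lambda x: 0.5 * ((x[0] - np.log(750.0)) / 0.2) ** 2
n_hat, se = fit_abundance(toy)
print(round(n_hat), round(se, 3))   # roughly 750 and 0.2
```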
| Variance and performance
To evaluate the performance of the estimator, we present the following metrics: (i) mean error and mean relative error to evaluate a potential bias; (ii) median error and median relative error to evaluate the median bias, which uses the median instead of the mean, as the median is often more appropriate when distributions are skewed. In addition, the mean absolute error (MAE) and root mean square error (RMSE) are presented in supplemental tables. The definitions of the six metrics are given in Appendix A.1. Furthermore, we derived the 95% log-normal confidence interval (CI) coverage to evaluate the performance of these CIs in the correct growth curve scenarios.
Variance was estimated from the Hessian matrix produced by the maximum likelihood estimation, and the resulting standard errors were averaged over the 1000 simulations. We can treat the pseudo-likelihood as a true likelihood as long as sampling is sparse (see Section 2.2). It is unclear if this criterion was met in our study, as we took 750 samples from a population with roughly 8500 individuals. If sampling is not sparse, the estimated variance could be negatively biased, as the pairwise comparisons are then not approximately independent. To explore the extent of this potential bias, we evaluated how well the average estimated standard error approximated the empirical standard deviation of population estimate errors across the 1000 simulations for each species. We include definitions of these in Appendix B.1.
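The sketch below shows how the 95% log-normal confidence intervals and their empirical coverage could be computed from the per-simulation estimates and model-based standard errors on the log scale. The 1.96 multiplier and the coverage definition are standard; the arrays are placeholders for the simulation output.

```python
import numpy as np

def lognormal_ci(n_hat, se_log, level=1.96):
    """95% CI assuming normality of log(N_hat) with standard error se_log."""
    lo = n_hat * np.exp(-level * se_log)
    hi = n_hat * np.exp(level * se_log)
    return lo, hi

def coverage(n_hats, se_logs, n_true):
    """Proportion of simulations whose log-normal CI contains the true abundance."""
    hits = 0
    for n_hat, se_log, truth in zip(n_hats, se_logs, n_true):
        lo, hi = lognormal_ci(n_hat, se_log)
        hits += int(lo <= truth <= hi)
    return hits / len(n_hats)

# Toy example with three simulated replicates (placeholder numbers):
print(coverage(np.array([780.0, 820.0, 760.0]),
               np.array([0.12, 0.10, 0.11]),
               np.array([794.0, 793.0, 800.0])))
```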
| RESULTS
The mean number of POPs across all sampling realisations was 48.6 (range: 25-76) for the simple species and 55.6 (range: 31-90) for the complex species. Mean simulated adult abundances in the final year of the simulation were 794 and 793 (range: 630-992 and 600-1019; ♀ and ♂) for the simple species and 514 and 650 (range: 400-683 and 516-824; ♀ and ♂) for the complex species. A small number of recaptures between the 2 years of sampling, that is, individuals sampled at more than one sampling event, occurred in every simulation. This ranged from 4 to 25 recaptured individuals for the simple species and 5-30 individuals for the complex species.
The simulated mean annual growth rate was 0.999 for both sexes of the simple species, and 1.001 and 0.998 for the males and females of the complex species, respectively; the mean annual growth for any simulation was always within 0.3 percentage points of the mean across all simulations. The fitting algorithm did not always converge when the measurement error standard deviation and/or the growth curve was (very) negatively biased. Whenever this happened, it happened for most of the simulations in that scenario. Therefore, we excluded the scenarios where this happened from the analysis, which led to the exclusion of scenarios ME-67:GC-10, ME-67:GC-5, ME-67:GC+0, ME-33:GC-10, ME-33:GC-5, and ME+0:GC-10. In the other scenarios, all models converged successfully.
For the simple species, median errors for N_s when using the correct measurement error and growth curve specification (ME+0:GC+0) were 20.83 and 22.52 individuals (relative: 2.57% and 2.79%; ♀ and ♂) (Figure 3; Tables A1 and A2). For the complex species, median errors for N_s when using the correct measurement error and growth curve specification were 6.48 and 10.28 individuals (relative: 1.35% and 1.59%; ♀ and ♂) (Figure 4; Tables A3 and A4). For the simple species, median relative errors in abundance estimates were positive but close to zero for all deviations from the true standard deviation of length measurement error provided that the growth curve was correctly specified, although they were slightly larger for the females (Figure 3, also Table A1). For any given measurement error standard deviation, we observed a trend from a positive median error to a negative median error as we shifted the growth curve upwards (Figures 3 and 4). When growth curves were shifted down 5%, this resulted in median relative errors of around 30% for the simple species, and between 30% and 60% for the complex species. Shifting growth curves up by 5% resulted in median relative errors of −30% for the simple species, and between −30% and −40% for the complex species.

FIGURE 2 The left panel shows the five growth curves that were used in the scenarios tested in this study. The true growth curve is indicated in red; the black dotted lines show the incorrect ones, which were constructed by shifting the growth curve up and down in steps of 5%. These shifts were aimed to represent real variation in growth curves between populations of the same shark species (Bradley, Conklin, Papastamatiou, McCauley, Pollock, Kendall, et al., 2017). The right panel shows the five measurement errors used in this study. The true simulated error was 2.89 cm, and the other measurement errors were chosen by deviating from this error in both directions.

FIGURE 3 Box plots for the error in estimated sex-specific adult abundance relative to the true abundance for the simple species. We only present the results for scenarios in which the optimiser consistently converged; this meant that some scenarios were left blank. Box plots show the interquartile range (IQR) and the median; the mean is indicated by the darker filled circle; the vertical lines cover five times the IQR; and all values outside of that are indicated as outliers. The scenarios were labelled using the format 'ME±XX:GC±YY', where ME refers to the measurement error, XX denotes the percentage over- or underestimate, GC stands for growth curve, and YY denotes the percentage of up- or downwards shifting; for example, the scenario with a 33% overestimated standard deviation of length measurement error and a 5% downshifted growth curve had label ME+33:GC-5.

FIGURE 4 Box plots for the error in estimated sex-specific adult abundance relative to the true abundance for the complex species. We only present the results for scenarios in which the optimiser consistently converged; this meant that some scenarios were left blank. Box plots show the interquartile range (IQR) and the median; the mean is indicated by the darker filled circle; the vertical lines cover five times the IQR; and all values outside of that are indicated as outliers. The scenarios were labelled using the format 'ME±XX:GC±YY', where ME refers to the measurement error, XX denotes the percentage over- or underestimate, GC stands for growth curve, and YY denotes the percentage of up- or downwards shifting; for example, the scenario with a 33% overestimated standard deviation of length measurement error and a 5% downshifted growth curve had label ME+33:GC-5.
When the growth curve was correctly specified, the 95% CI coverage (rounded to one decimal place) for the simple species adult abundance estimates ranged from 96.1% to 96.4%, and ranged from 94.4% to 95.9% for the complex species estimates (Table B3). For a given growth curve, no relation between the measurement error standard deviation and CI coverage became apparent. Incorrectly specified growth curves severely lowered the CI coverage for all measurement errors for both species (Table B3). When the growth curve was correctly specified, the empirical standard errors ranged from 180.97 to 182.56 for the male simple species, and from 190.85 to 192.56 for the female simple species (Table B1); for the complex species, these errors ranged from 145.77 to 148.42 for the males, and from 107.51 to 109.27 for the females (Table B2). Given a growth curve, increasing the measurement error standard deviation seemed to slightly decrease the empirical standard errors, for all species and sexes. Given an assumed standard deviation of measurement error, the empirical standard errors decreased as the growth curve was shifted upwards. When the growth curve was correctly specified, the average estimated standard errors ranged from 181.04 to 183.04 for the male simple species, and from 183.54 to 185.51 for the female simple species (Table B1); for the complex species, these errors ranged from 139.79 to 142.41 for the males, and from 102.99 to 105.71 for the females (Table B2). The changes in average estimated standard errors between the scenarios follow a similar pattern to the empirical standard errors.
Whenever the growth curve was correctly specified, average model-estimated standard errors were always (slightly) lower than empirical standard errors in all scenarios and both species, except for the male simple species (Tables B1 and B2). The underestimation of the empirical standard error by the average estimated standard error was always < 5%. Deviations from the correct growth curve increased the underestimation in all cases (Tables B1 and B2).
| DISCUSSION
In this study, we explored the effects of incorrect age inference from length measurements on CKMR estimates of adult abundance, by misspecifying the length measurement error and the growth curve in various ways. The number of POPs discovered in our simulation was in the vicinity of the 50-100 kin pairs recommended for a CKMR application (Bravington, Skaug, & Anderson, 2016), albeit on the lower end. Overall, an incorrect assumed standard deviation of measurement error mostly impacted the probability that the fitting algorithm converged, whenever this standard deviation was assumed to be smaller in the fitting than was true for the simulation. Whenever the measurement error standard deviation was high enough to allow for convergence, it made little difference whether it was the true value or a much higher standard deviation. This suggests that if researchers are ever unsure about whether their assumed degree of spread in length measurement error is correct, it is safer to overestimate it. A misspecified growth curve, on the other hand, had drastic effects on the estimation of all parameters: a 5% shift away from the true growth curve resulted in biases ranging from −60% to +40%; estimated and empirical standard errors seemed to scale with the abundance estimates, and shifting away from the true growth curve resulted in an increased underestimation of empirical standard errors.
The model performed well under correct specification (scenario ME+0:GC+0), although the positive median relative error in adult abundance estimates suggests a positive median bias. This error was more extreme for the simple than for the complex species. A bias in the estimates is not uncommon for maximum likelihood methods when the sample size is small, which could be true in our study as the number of sampled POPs never surpassed 76 for the simple species and 90 for the complex species. However, it could also be that this shows a slight positive bias in the method itself, especially as a previous CKMR simulation study by Conn et al. (2020) found small positive biases in the abundance estimates, too. We can express the empirical standard errors for the correct scenarios as percentages of the associated mean simulated abundances in the final year. This gives standard errors relative to the mean (also known as coefficients of variation) of 23.0% and 24.3% for the male and female estimates of the simple species, respectively, and 22.7% and 21.2% for the male and female estimates of the complex species, respectively. These are high but not uncommon for real-life population studies.
The 95% log-normal confidence intervals (CIs) seemed to accurately represent the uncertainty around the estimates whenever the correct growth curve specification was used, as the coverage ranged from 94.4% to 96.4%. Nonetheless, the coverage always exceeded 95% when the entire model was correctly specified, which could indicate that the 95% log-normal CIs were slightly conservative.
In this study, we assumed that all individual sharks followed the specified growth curve perfectly, and any variation in lengths for a given age resulted from measurement error. This is a simplification of reality, and future research could focus on ways to accommodate natural variation in length at a given age, which could itself be a function of age. As an incorrect standard deviation of length measurement error seemed to have little effect on point estimates, we believe that, when in doubt, it is preferable to assume a higher standard deviation, as this improves how likely it is that the fitting algorithm converges.
The effects of deviating from the true growth curve on the adult abundance estimates were substantial. When growth curves were shifted by 5% we often observed median relative errors of over 30%. This strongly highlights the sensitivity of the method to correct age estimation. Empirical standard errors were also increasingly underestimated when growth curves were shifted away from the truth. This effect was stronger when the growth curve was shifted upwards, that is, when ages were being underestimated. An underestimation of uncertainty could be a consequence of the comparisons not being truly independent, that is, a violation of the sparse sampling assumption. It is important here to note that we did not evaluate the standard deviation of abundance estimates but rather of the error in abundance estimates (see Appendix B.1). The true abundance was different for every simulation, so we could not use the standard deviation of the abundance estimates itself, since this would partly capture the stochasticity of the simulation process. To overcome this, we used the standard deviation of the error in abundance estimates, which should be a more robust measure of the true standard error. This should not be a problem as long as estimation is unbiased; however, our results indicate a slight positive bias, which could have affected the accuracy of the empirical standard error as a measure of the true standard error. CI coverage was most severely impacted by incorrect growth curves; however, this was likely mostly due to the bias in the estimates in those scenarios.
In real-world applications, researchers could potentially check the correctness of their assumed growth curve by assessing the distribution of lengths/ages among the sampled individuals. If many of the observed lengths are either associated with very low ages or are close to the asymptote, or in some other way exhibit an unexpected sampled age distribution given the sampling scheme, then this could be an indication that the assumed growth curve is incorrect (or that sampling assumptions are violated).
Even though recaptures should be rare (as long as sampling is sparse), they did occur in our simulations between sampling years. These recaptures did not pose any problems within the analysis, for example, getting mistaken for a different genetic relationship, and thus we retained the recaptures in our data. Alternatively, duplicate samples can be excluded from the analysis when there is reason to do so. We hypothesise that excluding recaptures would likely increase estimates of uncertainty, as fewer observations are used for the analysis. We are unaware of any study that investigated the extent to which including recaptures could potentially affect precision or even bias in CKMR estimation, and we believe that this could be a great topic for future research. Whenever it is known that multiple samples belong to a single individual, there exists the potential for extending CKMR by incorporating some form of capture-recapture into the method (Bravington, Skaug, & Anderson, 2016; Otis et al., 1978). Additionally, it could also allow us to fit the growth curve jointly with the CKMR model, instead of assuming it to be known by extrapolating from other studies (Bravington et al., 2019). This could create a situation where one collects new samples every year to update the model, thereby continuously improving the estimates not only of the abundance and trend, but also of the growth curve: in a Bayesian framework, one could use the initial growth curve as prior information, and then update the posterior every year as more information is collected.
In our model, we did not allow for any growth or decline in the population size over time.Our simulated populations exhibited no systematic growth, but the stochastic nature of the process did lead to some random growth/decline.One could consider estimating a growth rate, or assume a small range of growth rates (see Hillary et al. (2018) for an example with white sharks (Carcharodon carcharias)).The main challenge would be to understand how including a growth rate parameter affects the assumed age distribution f(a).We can imagine three general population growth scenarios.
If a population is stable but growing or in decline, the assumed age distribution will be geometric and depend on a combination of survival and growth rate (Caswell, 2006, Section 4.5.2.1).The second scenario is when a population exhibits changing growth or decline, in which case there is no stable age structure.We believe that this scenario is intractable, and it would make a good subject for a robustness study to see how much it affects estimation.The third scenario would be where there is no expected population growth or decline but there is demographic stochasticity, which in practice could result in deviations from the stable age structure.
For this scenario, an option could be to use the method described by Hillary et al. (2018), where the measured lengths were binned and a multinomial distribution was fit to these binned data to estimate the distribution of sampled ages.Still, this could be a topic for future research to see what other methods exist to find the distribution of (sampled) ages.
CKMR involves many pairwise comparisons, which often involve many identical probabilistic statements. To limit computation time, we evaluated unique probabilistic statements only once. If further computational improvements are required, it is possible to reduce the number of pairwise comparisons that are evaluated by excluding a subset of comparisons from the analysis. For example, the length-age relationship is often much clearer for younger animals, and therefore one could choose to only consider animals up to a certain size as potential offspring (Trenkel et al., 2022).
In our simulation and model, we assumed some life history traits to be fixed and known, but this is not always required for CKMR.
We estimated sex-specific adult abundance only in our model and assumed quantities such as survival to be known and fixed. In order to relax the assumption of a fixed and known survival parameter φ, one could estimate it by including half-sibling pairs alongside parent-offspring pairs (Bravington, Skaug, & Anderson, 2016).
Parent-offspring pairs can be used to model fecundity as long as the parameter appears explicitly in the model, which could be the case when fecundity varies with the size or age of animals (Bravington, Skaug, & Anderson, 2016, Section 3.1.4). We are unaware of any attempts to estimate time-varying fecundity or survival, and we believe this to be a potential direction for future research. Moreover, we assumed maturity to be knife-edge as it slightly reduces the complexity of the model. However, if maturity occurs more gradually, then this can be accommodated by adding a fecundity curve to the model (e.g., a logistic curve; Conn et al., 2020). We also imposed a fixed and known maximum age in the simulation, mostly to reduce computation time. In reality, animals do not always have a maximum age; in such cases, one could set the maximum age equal to an age that the animal has practically zero probability of reaching. Further, we made the assumption that sampling was random with respect to age, that is, that there is no selectivity, which does not necessarily need to be true in reality. When accounting for ageing error when there is selectivity, it will be necessary to include some function relating true age to observed age, which would depend on the probability of being sampled at a given true age. Finally, we have not considered fishing-induced mortality, as our case study concerned an area protected from fishing. This and other anthropogenic sources of mortality should be accounted for whenever they are present, analogously to Bravington, Grewe, and Davies (2016).
When a promising method like CKMR is first presented, one can see the appeal of starting to study populations as quickly as possible. Benchmark comparisons can be useful (e.g., Ruzzante et al., 2019) to compare a new method to some 'truth'. However, these comparisons can be ambiguous when it is unclear how accurate the benchmark truly is. Simulation studies, such as this one (and see Conn et al. (2020) for the effects of unmodelled spatial heterogeneity on CKMR), are a key part of understanding when the CKMR method works well and when it does not. We believe the CKMR method has great potential and, in some cases, is an improvement over other methods, but our study confirms that care needs to be taken when ageing is biased. In such cases epigenetic ageing could be preferable, even though epigenetic ageing can still involve substantial uncertainty (e.g., Larison et al., 2021; Prado et al., 2021) and relies strongly on the quality of the training data (Mayne et al., 2023).
ACKNOWLEDGEMENTS
We would like to thank Dr Mark Bravington for his time and enthusiasm whenever we discussed the methodology and worked on deriving the correct probabilities over email and in-person.We would also like to thank Dr Paul Conn and an anonymous reviewer for their thorough and thought-provoking feedback.This is contribution #1709 from the Institute of Environment at Florida International University.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflicts of interest.
OPEN RESEARCH BADGES
This article has earned Open Data and Open Materials badges.
TABLE A1
Performance metrics for the estimation of parameter N ♀ , extracted from 1000 simulations of the simple shark population.
TABLE B1
Model-based estimates for the standard error of adult abundance estimates, averaged over 1000 simulations, the empirical standard errors of adult abundance estimates, derived from 1000 simulations, the difference between the two, and the difference relative to the empirical standard error (%), for both sexes of the simple species. Note: The 95% CIs were estimated for every simulation using the model-based standard error, assuming a normal distribution on the link scale, which results in a log-normal distribution on the real scale.
MODELLING POPULATION GROWTH
In an alternative version of this model we included a growth parameter r, which allowed for the estimation of exponential growth or decline; deriving the matching age distribution under such growth, however, was beyond the scope of our research. We could have assumed that the population had settled into a new equilibrium, in which case the age distribution f(a) would be proportional to the dominant eigenvector of the associated Leslie matrix. However, as our population growth/decline was stochastic rather than systematic, this did not seem appropriate (see the Discussion for more detail). We did run our simulation study with yearly growth parameters r_♀ and r_♂, assuming that the f(a | l*) as presented in the main text was approximately correct, that is, ignoring population growth in the age distribution formulation. We decided not to include this part of the study in the main body of this manuscript, as we did not believe that the results could be used to accurately assess the effect of incorrect ageing on parameter estimation. Nonetheless, we included these results here for completeness, as they could contain some valuable insight and form the basis for future research. As we considered abundance for both sexes separately, we estimated the following four parameters: N_♀,t₀, N_♂,t₀, r_♀ and r_♂, where t₀ is some reference year. The kinship probabilities remained the same as presented in Equations (4)-(7). As population size was no longer assumed equal for all years, abundances in different years are linked through a geometric population dynamics model:

N_t = N_{t₀} r^{t − t₀},  (C1)

where r ∈ (0, ∞) denotes the yearly growth rate. We set t₀ = 2014 to match Bradley, Conklin, Papastamatiou, McCauley, Pollock, Kendall, et al. (2017) as closely as possible.
Estimated abundance through time.
FIGURE C1 Plots of the 1000 estimated female adult population trends for the simple species for the last 20 years of the simulation, for the 19 scenarios that resulted in successful fits. The median of these 1000 trends is indicated in dark grey, and the true adult abundance is indicated in red. The scenarios were labelled using the format 'ME±XX:GC±YY', where ME refers to the measurement error, XX denotes the percentage over- or underestimate, GC stands for growth curve, and YY denotes the percentage of up- or downwards shifting; for example, the scenario with a 33% overestimated standard deviation of length measurement error and a 5% downshifted growth curve had label ME+33:GC-5.

We fit our 25 scenarios, consisting of all combinations of 5 different measurement errors and 5 different growth curves, to both populations. We modelled the male and female side of the population separately, resulting in four figures, each containing 19 population history plots; six plots are blank since the models in these scenarios
did not (all) fit correctly. In Figures C1-C4 we notice a similar pattern of over- and underestimation related to shifting the growth curves. However, as we also model exponential growth or decline, we additionally see effects of shifting the growth curves on the direction and magnitude of this trend. Albeit potentially informative, these results cannot be directly used for inference, due to the inconsistency between modelling growth and the assumed age distribution.
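For clarity, the 25 scenarios (5 measurement-error levels × 5 growth-curve shifts) and their 'ME±XX:GC±YY' labels can be enumerated as in the sketch below. Only the ±33%/±67% measurement-error levels and the 5% growth-curve shift are mentioned in the surrounding appendix material; the remaining growth-curve percentages here are placeholders:

```python
from itertools import product

# Five measurement-error (ME) levels and five growth-curve (GC) shifts give
# the 5 x 5 = 25 scenarios. The +/-33% and +/-67% ME levels and the -5% GC
# shift appear in the text; the other GC percentages are assumed placeholders.
me_levels = [0, +33, -33, +67, -67]   # % over-/underestimate of the measurement-error SD
gc_levels = [0, +5, -5, +10, -10]     # % up-/downward shift of the growth curve (partly assumed)

def label(me: int, gc: int) -> str:
    """Build a scenario label in the 'ME±XX:GC±YY' style used in the figure captions."""
    return f"ME{me:+d}:GC{gc:+d}"

scenarios = [label(me, gc) for me, gc in product(me_levels, gc_levels)]
print(len(scenarios), "scenarios, e.g.", scenarios[:3])
```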
FIGURE C2 Plots of the 1000 estimated male adult population trends for the simple species for the last 20 years of the simulation, for the 19 scenarios that resulted in successful fits. The median of these 1000 trends is indicated in dark grey, and the true adult abundance is indicated in red. The scenarios were labelled using the format 'ME±XX:GC±YY', where ME refers to the measurement error, XX denotes the percentage over- or underestimate, GC stands for growth curve, and YY denotes the percentage of up- or downwards shifting; for example, the scenario with a 33% overestimated standard deviation of length measurement error and a 5% downshifted growth curve had label ME+33:GC-5.
FIGURE C3 Plots of the 1000 estimated female adult population trends for the complex species for the last 20 years of the simulation, for the 19 scenarios that resulted in successful fits. The median of these 1000 trends is indicated in dark grey, and the true adult abundance is indicated in red. The scenarios were labelled using the format 'ME±XX:GC±YY', where ME refers to the measurement error, XX denotes the percentage over- or underestimate, GC stands for growth curve, and YY denotes the percentage of up- or downwards shifting; for example, the scenario with a 33% overestimated standard deviation of length measurement error and a 5% downshifted growth curve had label ME+33:GC-5.
FIGURE C4
Plots of the 1000 estimated male adult population trends for the complex species for the last 20 years of the simulation, for the 19 scenarios that resulted in successful fits. The median of these 1000 trends is indicated in dark grey, and the true adult abundance is indicated in red. The scenarios were labelled using the format 'ME±XX:GC±YY', where ME refers to the measurement error, XX denotes the percentage over- or underestimate, GC stands for growth curve, and YY denotes the percentage of up- or downwards shifting; for example, the scenario with a 33% overestimated standard deviation of length measurement error and a 5% downshifted growth curve had label ME+33:GC-5.
Summary of notation.
collected in the last two years of the simulation by maximising the pseudo-log-likelihood, which can involve prohibitively long computation time. To resolve this, we restricted the number of pairwise comparisons. Many pairwise comparisons resulted in identical probabilistic statements, and thus in practice only needed to be derived once. As we considered adult abundance for both sexes separately, we estimated two parameters: N♀ and N♂. All other parameters, such as […], were assumed known and fixed. To each of the 2000 population realisations (1000 for each species) we fitted the appropriate POP model with varying degrees of length measurement error and growth curves, which was achieved by altering some of the fixed parameters. Specifically, we assumed five different standard deviations for length measurement error: the correct one, and a 33% and 67% over- and underestimate.

AUTHOR CONTRIBUTIONS

Felix T. Petersma: Conceptualization (lead); formal analysis (lead); methodology (lead); visualization (lead); writing - original draft (lead); writing - review and editing (equal). Len Thomas: Conceptualization (supporting); formal analysis (supporting); methodology (supporting); visualization (supporting); writing - original draft (supporting); writing - review and editing (equal). Danielle Harris: Conceptualization (supporting); formal analysis (supporting); methodology (supporting); visualization (supporting); writing - original draft (supporting); writing - review and editing (equal). Darcy Bradley: Conceptualization (supporting); formal analysis (supporting); methodology (supporting); writing - review and editing (equal). Yannis P. Papastamatiou: Conceptualization (supporting); formal analysis (supporting); methodology (supporting); writing - review and editing (equal).
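A minimal sketch of the kind of pseudo-log-likelihood fit described in the model-fitting passage above: pairwise comparisons that share the same covariates give identical kinship probabilities, so they are grouped and each probability is evaluated only once. The single-parameter model and all numbers below are made-up toy values, not the study's Equations (4)-(7):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy grouped comparison data: each group is (number of comparisons, number of
# POP matches, per-pair "encounter weight" w). With adult abundance N, the
# kinship probability for a group is taken as w / N -- a deliberately
# simplified stand-in for the real kinship probabilities.
groups = [
    # (n_comparisons, n_matches, w)
    (50_000, 42, 1.0),
    (30_000, 18, 0.6),
    (20_000, 25, 1.3),
]

def neg_pseudo_loglik(N: float) -> float:
    """Negative Bernoulli pseudo-log-likelihood over grouped comparisons."""
    nll = 0.0
    for n, k, w in groups:
        p = min(w / N, 1 - 1e-12)          # kinship probability for this group
        nll -= k * np.log(p) + (n - k) * np.log(1 - p)
    return nll

res = minimize_scalar(neg_pseudo_loglik, bounds=(10.0, 1e6), method="bounded")
print("Toy abundance estimate:", round(res.x))
```

Because each group's probability is computed once and reused for all comparisons in that group, the cost of evaluating the pseudo-likelihood no longer scales with the raw number of pairwise comparisons.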
The 95% confidence interval (CI) coverage for the sex-specific adult abundance for the successfully converging scenarios, denoted in percentages (%).
| 2024-06-07T05:12:27.841Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "90a53f698db2b48adf00c3367b68106cb18afd8c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.11352",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "90a53f698db2b48adf00c3367b68106cb18afd8c",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9159597 | pes2o/s2orc | v3-fos-license | Docosahexaenoic acid liposomes for targeting chronic inflammatory diseases and cancer: an in vitro assessment
Inflammation, oxidative stress, and uncontrolled cell proliferation are common key features of chronic inflammatory diseases, such as atherosclerosis and cancer. ω3 polyunsaturated fatty acids (PUFAs; also known as omega3 fatty acids or fish oil) have beneficial effects against inflammation upon dietary consumption. However, these effects cannot be fully exploited unless diets are enriched with high concentrations of fish oil supplements over long periods of time. Here, a nanomedicine-based approach is presented for delivering effective levels of PUFAs to inflammatory cells. Nanoparticles are internalized by immune cells, and hence can adequately deliver bioactive lipids into these target cells. The ω3 FA docosahexaenoic acid was formulated into liposomes (ω-liposomes), and evaluated for anti-inflammatory effects in different types of immune cells. ω-Liposomes strongly inhibited the release of reactive oxygen species and reactive nitrogen species from human neutrophils and murine macrophages, and also inhibited the production of the proinflammatory cytokines TNFα and MCP1. Moreover, ω-liposomes inhibited tumor-cell proliferation when evaluated in FaDu head and neck squamous carcinoma and 4T1 breast cancer cells in in vitro cultures. We propose that ω-liposomes are a promising nanonutraceutical formulation for intravenous delivery of fish oil FAs, which may be beneficial in the treatment of inflammatory disorders and cancer.
Introduction
Dietary ω3 polyunsaturated fatty acids (PUFAs), including docosahexaenoic acid (DHA), have been reported to be beneficial in the treatment of cardiovascular diseases and cancer. [1][2][3][4] Such diseases share common pathophysiological features, including unbalanced oxidative stress, an inflammatory microenvironment, and aberrant cell proliferation, which are potential therapeutic targets for PUFAs. [5][6][7][8][9] Several clinical studies have been conducted to exploit the beneficial effects of PUFAs. However, such effects on the aforementioned targets are typically observed after long-term oral consumption of PUFAs in high amounts. 10,11 The oral dosage of DHA intake varies, ranging from 0.4 to 4 g/day for at least 4 months (NCT01078909 and NCT01865448). Bouwens et al used a combination of DHA and eicosapentaenoic acid (EPA; 1.8 g/day for 26 weeks) to demonstrate anti-inflammatory effects in humans. 12 To benefit fully from the effects of PUFAs, we propose a novel approach to enrich their content specifically in target cells that are involved in inflammatory disorders. Instead of oral intake of high doses of fish oil, we propose to specifically increase the levels of PUFAs in inflamed lesions through the use of nanomedicine. Nanoparticles can accumulate in inflamed tissues and are efficiently taken up, mainly by macrophages and to a lesser extent other immune cells. 13 The accumulation of nanoparticles in pathological tissues was first described for tumors and occurs via their enhanced extravasation through leaky blood-vessel walls. 14 This phenomenon has been named the enhanced permeability and retention (EPR) effect and has also been described for chronic inflammatory disorders, such as rheumatoid arthritis and atherosclerosis. 15,16 An example of a clinically available nanomedicine that exploits the EPR effect is the liposomal formulation of doxorubicin, branded Doxil, indicated for breast cancer, ovarian cancer, multiple myeloma, and AIDS-related Kaposi's sarcoma. 17,18 We propose a novel liposomal formulation of docosahexaenoic acid-loaded liposomes (ω-liposomes), by incorporating DHA into polyethylene glycol (PEG)ylated liposomes. Such long-circulating PEGylated liposomes can accumulate in inflamed tissues and deliver their PUFA cargo into macrophages and other immune cells. In the present study, we report the physicochemical characteristics of ω-liposomes and their anti-inflammatory effects in different types of immune cells (Figure 1). In addition, to demonstrate their potential benefit in the treatment of cancer, their growth-arresting effects in human and murine cancer cell lines were also evaluated.
Materials and methods

Chemicals
1,2-Dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) and N-(carbonyl-methoxy-PEG 2,000 )-1,2-distearoyl-sn-glycero-3-phosphoethanolamine (DSPE-PEG 2,000 ) were purchased from Lipoid AG (Steinhausen, Switzerland).

Notes: Fish oil and oily fish are rich in ω3 polyunsaturated fatty acids (PUFAs) such as docosahexaenoic acid (DHA). We formulated DHA into ω-liposomes by lipid-film hydration followed by downsizing using extrusion. PEGylated long-circulating liposomes have been shown to accumulate in inflammatory lesions and tumors via the enhanced retention and permeability effect, and thus facilitate local delivery of the PUFAs in ω-liposomes. Multiple pathophysiological processes can be tackled with ω-liposomes. Tissue-associated immune cells, such as plaque macrophages or tumor-associated macrophages, accelerate the progression and invasiveness of the disease by releasing proinflammatory cytokines, such as TNFα and MCP1, and reactive nitrogen species (RNS), which are usually mediated by increased activity of NFκB. Neutrophils can also have significant effects on destabilizing atherosclerotic plaques or on tumor progression via the release of reactive oxygen species (ROS) and via intercommunication with macrophages. A third target for ω-liposomes is the abnormal cellular proliferation of tumor cells. Abbreviations: PEG, polyethylene glycol; ω-liposomes, docosahexaenoic acid-loaded liposomes.
Liposomal preparation and DHA loading into liposomes

DHA-loaded liposomes (ω-liposomes) and control liposomes (C-liposomes) were prepared with DPPC, cholesterol, DSPE-PEG 2,000 , and DHA (ω-liposomes only) in molar ratios indicated in Table 1. Rhodamine PE was added at 0.2 mol% for fluorescent labeling when applicable. Typical batch sizes were prepared with 20 mM total lipid (TL). Liposomes were prepared using lipid-film hydration. Lipids, including DHA, were dissolved in chloroform-methanol (1:1 v/v, total 4 mL) in a round-bottom flask, and a lipid film was prepared by rotary evaporation (Büchi Labortechnik AG, Flawil, Switzerland), followed by an additional drying step under a stream of nitrogen for 1 hour. Subsequently, the lipid film was hydrated with 5 mL 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES)-buffered saline (HBS) solution (pH 7.4) to form a lipid dispersion. To downsize the lipid dispersion and to form uniform-size liposomes, the lipid particles were reduced by multiple sequential extrusion steps using a Lipex extruder (Northern Lipids, Burnaby, BC, Canada) through polycarbonate membranes (Nuclepore, Pleasanton, CA, USA) with final filters of pore size 100 nm. The resulting liposomes were purified by either sequential centrifugation steps (as per Eckert et al) 19 or by ultrafiltration. Briefly, liposomes were centrifuged at 8,000× g for 20 minutes to remove the debris. The supernatant was removed and centrifuged again at 15,000× g for 20 minutes or subjected to ultrafiltration using filtration units with a 100 kDa molecular weight cutoff (Sartorius Stedim Biotech SA, Aubagne, France). Liposomes were then filtered through a 0.22 µm nylon filter (CellTreat Scientific Products, Pepperell, MA, USA).
Characterization of liposomes
The mean particle-size distribution and polydispersity index (PDI) of the liposomes were determined by dynamic light scattering (DLS) using a Malvern CGS-3 multiangle goniometer (Malvern Instruments, Malvern, UK) with a JDS Uniphase 22 mW He-Ne laser operating at 632 nm, an optical fiber-based detector and a digital LV/LSE-5003 correlator. All measurements were performed at a 90° angle. The ζ-potential of the liposomes was determined by laser Doppler electrophoresis using a Zetasizer Nano-Z (Malvern Instruments). Liposomes were diluted in 10 mM HEPES buffer (pH 7.4) prior to measurements. The phospholipid content of liposomes was determined with a phosphate assay, in accordance with Rouser et al. 20 The DHA content of liposomes was determined after their disruption in acetonitrile by high-performance liquid chromatography on a Shimadzu system equipped with a C18 column, two LC-10AT pumps, and an SPD-M10AVP photodiode array detector at a wavelength of 237 nm. The absence of free DHA and the homogeneity of particle size were confirmed by separation of free DHA from liposomes by size-exclusion chromatography on a Superdex 10/300 column (GE Healthcare UK Ltd, Little Chalfont, UK) using phosphate-buffered saline as eluent at a flow rate of 1 mL/min. Liposome colloidal stability under storage conditions was studied by monitoring their mean size and size distribution with DLS every 4-5 days for 30 days upon storage in HBS at 4°C.
Inhibition of production of reactive nitrogen species (NO assay)
To evaluate the effect of liposomal formulations on the release of reactive nitrogen species, RAW264.7 cells were seeded at 100,000 cells per well in a 96-well plate. After 24 hours of incubation, cell-culture medium was removed from all wells and replaced with fresh medium spiked with the respective treatments -ω-liposomes or C-liposomes -at the indicated concentrations. After 2 hours, lipopolysaccharide (LPS) was added at a final concentration of 100 ng/mL, followed by incubation at 37°C for another 22 hours. In another setup, we evaluated the effect of ω-liposomes or C-liposomes on RAW264.7 cells prestimulated with LPS. Briefly, we first stimulated RAW264.7 cells with LPS (100 ng/mL) for 2 or 4 hours, after which cells were washed with medium or not, and subsequently treated with the liposomes for another 22 hours. In either the pretreatment or the prestimulation setup, the supernatant was collected for a nitric oxide assay with Griess reagents. Absorbance was measured at 550 nm on a SpectroStar Nano (BMG LabTech, Ortenberg, Germany). The effect of liposome formulations on cell viability was assessed under similar conditions of the assay (ie, LPS stimulation, seeding density, and exposure time) to assure that the noticed antioxidant effects were not due to significant cytotoxic effects of the formulations. A CellTiter 96 ® Aqueous One solution viability assay (Promega Corporation, Fitchburg, WI, USA) was performed as per the supplier's protocol, and the absorbance was measured using EZ Read 400 (Biochrom, Cambridge, UK). RAW264.7 NFκB Luc cells were seeded at 80,000 per well in a 96-well plate. After 24 hours of incubation, the medium was removed from all wells and replaced with either fresh medium or the respective ω-liposomes or C-liposomes formulations at the indicated concentrations. This medium was removed and cells washed with fresh medium twice after 4 hours of treatment. Cells were stimulated with LPS at a final concentration of 100 ng/mL for 2 hours. Luciferase activity was determined using a One-Glo luciferase-assay system (Promega Corporation) according to the supplier's instructions. Cell viability was assessed under the same exposure time of LPS treatment and LPS stimulation as described earlier.
Inhibition of production of proinflammatory cytokines TNFα and MCP1
THP1 cells, a commonly used model for mimicking macrophages in the vasculature, 21 were seeded at 250,000 cells per well in a 24-well plate and differentiated into macrophages as described earlier. The medium was replaced by a medium containing liposomal formulations for 2 hours, then stimulated with LPS for an additional 22 hours. The supernatant was collected and stored at -80°C until further analysis. TNFα and MCP1 were measured by enzymelinked immunosorbent assay (ELISA) according to the manufacturer's instructions (ELISA Max™ deluxe set human TNFα and human MCP1/CCL2 kits; BioLegend, San Diego, CA, US). Cell viability was determined as described earlier.
Inhibition of production of reactive oxygen species (ROS assay)
PMNs, freshly obtained from human buffy coat, were diluted to a final concentration of 250,000 cells/mL and treated with ω-liposomes and C-liposomes formulations at indicated concentrations in a white 96-well plate for 30 minutes. Subsequently, luminol, as luminescence enhancer, and zymosan, as reactive oxygen species (ROS) inducer, were pipetted into each well. The chemiluminescence resulting from ROS production was measured immediately using a Titertek Luminoskan (TechGen International, Zellik, Belgium).

Inhibition of toll-like receptor 4 (TLR4) activation

HEK-Blue hTLR4 cells were seeded at 25,000 cells per well in a 96-well plate and treated with liposomes for 2 hours, followed by 16 hours of LPS stimulation. A Quanti-Blue assay (InvivoGen) was performed according to the manufacturer's instructions to determine the level of secreted alkaline phosphatase as a reporter for TLR4 activation. Cell viability was assessed as mentioned earlier.
Experiments with cancer cell lines
Murine 4T1 breast cancer cells and human FaDu squamous cell carcinoma cells were obtained from the ATCC. Cells were cultured in DMEM supplemented with 10% (v/v) fetal bovine serum, penicillin (100 IU/mL), streptomycin (100 µg/mL), and amphotericin B (0.25 µg/mL). Normal human umbilical vein endothelial cells (HUVECs) were obtained from Lonza (Basel, Switzerland) and cultured in EBM basal medium supplemented with EGM-2 SingleQuot kit supplements and growth factors (Lonza).
FaDu squamous cell carcinoma cells or 4T1 metastatic breast cancer cells were seeded at a density of 3,000 cells per well in a 96-well plate. HUVECs, which served as control (normal cells), were seeded at a density of 4,000 cells per well in a 96-well plate. After 24 hours, cells were treated with ω-liposomes and C-liposomes at the indicated concentrations for 24 hours. To determine the number of dividing cells, bromodeoxyuridine (BrdU) reagent was added to the cells for 4-6 hours and an ELISA BrdU colorimetric immunoassay (Hoffman-La Roche Ltd, Basel, Switzerland) was performed, according to the manufacturer's protocol.
Statistical analysis
Data are expressed as mean ± standard error of mean. Statistical analyses by Student's t-test were performed with GraphPad Prism; values of P<0.05 were considered statistically significant.
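A sketch of the comparison described above (mean ± standard error of the mean and a two-sample Student's t-test with significance at P < 0.05); the two data vectors are invented placeholders, not measurements from this study:

```python
import numpy as np
from scipy import stats

# Invented placeholder measurements (e.g., readouts for treated vs control wells).
treated = np.array([0.42, 0.39, 0.45, 0.41])
control = np.array([0.98, 1.05, 0.93, 1.01])

# Mean +/- standard error of the mean, as reported in the figures.
for name, x in [("treated", treated), ("control", control)]:
    print(f"{name}: {x.mean():.3f} +/- {stats.sem(x):.3f}")

# Two-sample Student's t-test; P < 0.05 taken as statistically significant.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```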
Results and discussion

Preparation and characterization of ω-liposomes
The physicochemical characteristics of PUFA-loaded PEGylated liposomes (ω-liposomes) and control PEGylated liposomes (C-liposomes) are summarized in Table 1. Hydration of the lipid film with HBS resulted in a heterogeneous, milky dispersion for both ω-liposomes and C-liposomes. The cumulative size distribution at 90% (D90) of this lipid dispersion before extrusion was 2 µm and 3 µm for C-liposomes and ω-liposomes, respectively (Figure S1A). After extrusion, both liposomal formulations were translucent and displayed comparable mean sizes (around 100 nm), D90 (around 120 nm), and narrow size distributions (polydispersity <0.1) (Table 1 and Figure S1B). The narrow size distribution is also demonstrated in the size-distribution histograms obtained from DLS (Figure S2). The slightly negative, close-to-neutral ζ-potentials indicated that the negative charge of the DHA lipids was shielded by the PEG corona on the surface of liposomes (Table 1). TL recovery after extrusion was 69%±8% and 70%±10% for ω-liposomes and C-liposomes, respectively, as determined by Rouser phosphate assay. The entrapment efficiency and DHA-loading content in ω-liposomes were 81%±3% and 15%±1%, respectively (Table 1). Separation of liposomes and free DHA by size-exclusion chromatography showed that DHA was effectively incorporated in the liposomes (Figure 2A). Changes in mean size and size distribution of both liposomal formulations were minimal under liposome-storage conditions at 4°C over a period of 5 weeks (Figure 2B and C). Liposome size distribution was relatively narrow and the PDI remained <0.1. In addition, there were no significant changes in particle size or PDI values between C-liposomes and ω-liposomes in a stability study at room temperature and 37°C at physiological pH (ie, pH 7.4) (Figure S3). Furthermore, differential scanning calorimetry showed no differences in the thermograms of the formulations, likely due to the stabilizing effect of cholesterol (Figure S4).
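To illustrate the D90 metric quoted above (the size below which 90% of the cumulative particle-size distribution falls), a small sketch with an invented size distribution; the numbers are not the measured DLS data from this study:

```python
import numpy as np

# Invented intensity-weighted size distribution (diameters in nm and relative weights).
sizes = np.array([60, 80, 100, 120, 140, 160], dtype=float)
weights = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])

# Cumulative distribution and linear interpolation to the 90th percentile.
cdf = np.cumsum(weights) / weights.sum()
d90 = np.interp(0.90, cdf, sizes)
print(f"D90 ~ {d90:.0f} nm")
```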
Cell-uptake/interaction studies of liposomes
Lipid-based nanocarriers can be efficiently internalized by phagocytic cells, and hence are an attractive system to deliver PUFAs to immune cells. Macrophages are known to be the most efficient in uptake of nanoparticles among the different cell types and may act as a reservoir for nanotherapeutics. 13,22 Figure 3 shows that RAW264.7 macrophages internalized both ω-liposomes and C-liposomes to a similar extent. Fluorescence microscopy was used to visualize the association and the uptake of rhodamine-labeled liposomes. We further utilized the power of confocal microscopy to visualize a focal intracellular plane, minimizing any extracellular signal. The accumulation of labeled liposomes in perinuclear vesicles suggests accumulation of either formulation in the endosomal/lysosomal compartment (Figure 3A […]).

Unbalanced reactive nitrogen species (RNS) and ROS are often generated by stimulated immune cells and play a crucial role in the development and progression of multiple inflammatory disorders. [23][24][25] The effects of ω-liposomes on the production of nitric oxide (NO) and ROS were evaluated in murine macrophages and human PMNs, respectively. NO is an important mediator in inflammation that reacts with superoxide to form peroxynitrite, a powerful oxidizing and tissue-damaging moiety. LPS induced a significant increase in NO production by RAW macrophages, and ω-liposomes reduced LPS-induced NO production significantly, with up to 80% reduction at the highest tested concentration. Remarkably, C-liposomes also inhibited LPS-induced NO production by RAW cells dose-dependently, although less strongly than ω-liposomes (50% vs 80% at 0.5 mM TL and 35% vs 60% at 0.25 mM TL) (Figure 4A). In a different experimental setup in which cells had been prestimulated with LPS and then treated with liposomes, ω-liposomes showed similar potent anti-inflammatory activity (Figure S5A-C). Since nuclear factor kappaB (NFκB) activation plays a crucial role in inducing NO production via inducible nitric oxide synthase, we sought to determine the effect of liposomes on RAW264.7 cells stably transfected with an NFκB-reporter gene, which expresses firefly luciferase upon activation. Both formulations showed inhibition of NFκB activation similarly to the observed inhibitory effects on NO production (Figure 4B). Of note, the liposomal formulations were not toxic to either of the cell lines at the tested concentration range and exposure time (Figure 4C and D).
The effects of ω-liposomes on the production of two proinflammatory cytokines -MCP1 and TNFα -were also studied, as they are key regulators in the recruitment of immune cells to inflammatory lesions and exacerbation of the inflammation. 26,27 THP1 macrophages were stimulated with LPS (100 ng/mL) and then treated with either ω-liposomes or C-liposomes. ω-Liposomes reduced the production of both cytokines, while C-liposomes had only minor effects (Figure 5A and B). The highest concentration of ω-liposomes, 0.5 mM TL, resulted in a minor reduction in THP1 cell viability of less than 30% compared to the untreated control cells ( Figure 5C).
In addition to macrophages, neutrophils (PMNs) are a key source of ROS. 28 Therefore, the effect of liposomes in an oxidative burst setup was assessed, where zymosan induced an increase in ROS production by neutrophils. ROS was remarkably reduced with ω-liposome treatment by 60% and 90% at 0.06 mM and 0.13 mM TL, respectively. C-liposomes had no effect on zymosan-induced ROS production at any tested concentration ( Figure 6).
The effects of ω-liposomes were further examined on LPS-induced TLR4 activation. TLR4 is a key receptor recognizing damage-associated molecular patterns and pathogen-associated molecular patterns. As such, it plays a key role in atherosclerosis, cancer, and other chronic inflammatory diseases. 29 TLR4-controlled expression of the reporter molecule secreted embryonic alkaline phosphatase (SEAP) was monitored. LPS triggered a fivefold increase in SEAP activity, which was reduced by ω-liposomes in a dose-dependent manner, whereas C-liposomes had no effect (Figure 7A). ω-Liposomes at 0.5 mM TL (highest concentration) resulted in a minor reduction in cell viability of less than 30% compared to the untreated control cells (Figure 7B).
A number of studies have investigated the anti-inflammatory effects of PUFAs in general, and DHA specifically. These studies focused mainly on increasing the dietary intake of PUFAs, in in vivo and clinical studies, or by the use of DHA in its free form in vitro. 3,5,31,33,34 DHA significantly decreased cytokine-induced leukocyte adhesion-molecule expression in human endothelial cells, while EPA, another member of the PUFAs, did not. 33 In a genomic study, Bouwens et al demonstrated that a high oral dose of DHA and EPA decreased the expression of genes involved in inflammatory pathways, such as eicosanoid synthesis […].

Figure 6 Effect of docosahexaenoic acid-loaded liposomes (ω-liposomes) and control liposomes (C-liposomes) on zymosan-induced neutrophil oxidative burst. Notes: Polymorphonuclear neutrophils freshly isolated from a buffy coat were treated immediately with liposomes and stimulated with zymosan. Reactive oxygen species (ROS) generation was measured by using a luminol-amplified chemiluminescence assay. Data presented as mean ± standard error of mean from one representative experiment performed in triplicate (each n≥4). ***P<0.001 compared to control nontreated (NT) zymosan-stimulated cells (Student's t-test).
Effect of ω-liposomes in cancer cell lines
Aberrant cell proliferation is the hallmark of cancer and other chronic inflammatory diseases. Although targeting inflammation in cancer may reduce the invasive features of tumors, anti-inflammatory therapies are seen as adjunct therapy. 35 Studies have demonstrated the effects of dietary PUFAs on the prevention and inhibition of cancers. 2,4 Therefore, we investigated if ω-liposomes could reduce aberrant cell proliferation. Bromodeoxyuridine (BrdU)-incorporation assays were performed, in which a thymidine analog incorporates only into the DNA of proliferating cells. ω-Liposomes significantly reduced the proliferation of two fast-growing tumor models -human FaDu squamous carcinoma and murine 4T1 breast cancer -in vitro, while C-liposomes did not affect cell proliferation (Figure 8). ω-Liposomes did not reduce the proliferation of control cells, as tested on normal HUVECs ( Figure S6).
The exact mechanisms of action of PUFAs in general and DHA specifically are not well known. Possibly, the incorporation of PUFAs into the cell membrane may affect membrane […]. 36 Furthermore, GPR120, a novel G-protein-coupled receptor, was identified as a sensor for PUFAs. 37 Also, PUFAs may act as precursors for specialized lipid mediators, such as resolvins and protectins, which possess potent anti-inflammatory effects and assist in the resolution of inflammation. 38 In this study, a simple nanonutraceutical-based strategy was adopted to deliver effective doses of PUFAs, which dampen the inflammatory phenotype of immune cells involved in chronic inflammation. Their antiproliferative activities against cancer cells were also demonstrated. The developed ω-liposomes possess several advantages over the classical dietary intake of DHA, which requires a long duration of intake at high doses. First, formulation into liposomes allows the solubilization of DHA in aqueous solutions at high concentrations, which is required for injection in a low volume. Next, PEGylated liposomes circulate longer and accumulate to a higher extent in inflamed tissues.
Most nanoparticles for in vivo use, which have particle sizes of 10-300 nm, nonspecifically accumulate in the reticuloendothelial system (RES), eg, in the liver and spleen, and other tissues. However, nanoparticles also accumulate in pathological lesions via the EPR effect, as demonstrated for atherosclerotic plaques, inflamed joints in rheumatoid arthritis, and tumors. Such lesions are rich in immune cells, mainly macrophages, which drive the pathophysiological inflammation and contribute to the leaky vasculature. 13,15,16 These macrophages have the highest uptake activity compared to other professional phagocytes and other cells. 13 PEGylation will prolong the circulation time of liposomes, and hence increases the chance for extravasation by EPR to the inflamed tissue.
Therefore, this approach can improve the pharmacokinetic profile of DHA and boost its activities. Of note, a clinically approved total parenteral nutrition emulsion (eg, Omegaven), which is rich in DHA and EPA, is prescribed at a dose of up to 2.5 g/day for 4 weeks to reduce risks of liver damage. 39 Such a non-PEGylated emulsion is cleared more quickly (half-life 54 minutes, as described in the manufacturer's manual) when compared to long-circulating PEGylated liposomes, such as Doxil (half-life 20-30 hours), 40 and hence will have less chance to accumulate in inflamed tissues by EPR extravasation. In addition to the longer circulation, liposomes can improve the chemical stability of DHA, which is susceptible to fast oxidation. [41][42][43] Moreover, ω-liposomes are a tunable drug-delivery platform in which other drugs can be loaded into the liposomal aqueous core, for synergism, or a targeting ligand can be attached to the surface, for an active targeting approach toward specific cell types. 44
Conclusion
DHA was successfully incorporated in long-circulating PEGylated liposomes to form ω-liposomes. The formulation remained stable and active upon storage over a month. Furthermore, ω-liposomes possess strong antioxidant and anti-inflammatory effects in vitro, as demonstrated by the inhibition of the production of ROS, NO, MCP1, and TNFα, and the inhibition of NFκB activation, in activated immune cells. They also induce strong inhibition of tumor-cell proliferation. This nanonutraceutical represents a nanomedicine-based approach for delivery of relatively safe nutraceutical components that can be applied for the prevention or management of chronic inflammatory diseases and cancer.
Afterward, LPS was washed (C) or not (A and B), and cells were treated with ω-liposomes or C-liposomes. NO production was measured in the supernatant with Griess reagent. Data presented as mean ± standard error of mean from a representative experiment (n=4). Abbreviations: NT, nontreated; TL, total lipid; ω-liposomes, docosahexaenoic acid-loaded liposomes.

Notes: HUVECs were seeded at 4,000 cells/well in a 96-well plate. Cells were exposed for 24 hours to ω-liposomes and C-liposomes, after which the medium was replaced with medium containing bromodeoxyuridine (BrdU) and cells incubated for an additional 4-6 hours. Afterward, BrdU incorporation was determined by enzyme-linked immunosorbent assay. Data presented as mean ± standard error of mean from a representative experiment (n=6). Abbreviations: NT, nontreated; TL, total lipid; ω-liposomes, docosahexaenoic acid-loaded liposomes.
"year": 2016,
"sha1": "d02188c96dd8827f8ffc3019780ab88893c15849",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=32784",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5276f514cab4b6354da91c2c3b0efebccf0900d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
16181828 | pes2o/s2orc | v3-fos-license | Characterization of the lipoxygenase (LOX) gene family in the Chinese white pear (Pyrus bretschneideri) and comparison with other members of the Rosaceae
Background Lipoxygenases (LOXs), a type of non-haem iron-containing dioxygenase, are ubiquitous enzymes in plants and participate in the formation of fruit aroma which is a very important aspect of fruit quality. Amongst the various aroma volatiles, saturated and unsaturated alcohols and aldehydes provide the characteristic aroma of the fruit. These compounds are formed from unsaturated fatty acids through oxidation, pyrolysis and reduction steps. This biosynthetic pathway involves at least four enzymes, including LOX, the enzyme responsible for lipid oxidation. Although some studies have been conducted on the LOX gene family in several species including Arabidopsis, soybean, cucumber and apple, there is no information from pear; and the evolutionary history of this gene family in the Rosaceae is still not resolved. Results In this study we identified 107 LOX homologous genes from five Rosaceous species (Pyrus bretschneideri, Malus × domestica, Fragaria vesca, Prunus mume and Prunus persica); 23 of these sequences were from pear. By using structure analysis, phylogenic analysis and collinearity analysis, we identified variation in gene structure and revealed the phylogenetic evolutionary relationship of this gene family. Expression of certain pear LOX genes during fruit development was verified by analysis of transcriptome data. Conclusions 23 LOX genes were identified in pear and these genes were found to have undergone a duplication 30–45 MYA; most of these 23 genes are functional. Specific gene duplication was found on chromosome4 in the pear genome. Useful information was provided for future research on the evolutionary history and transgenic research on LOX genes. Electronic supplementary material The online version of this article (doi: 10.1186/1471-2164-15-444) contains supplementary material, which is available to authorized users.
Background
Pear, a member of the subfamily Pomaceae in the Rosaceae, is a temperate fruit crop of major economic importance in the world market. As global economic markets develop, the current breeding objectives are moving from the traditional focus on fruit yield to fruit quality, especially flavor. Such flavor is a genetically and biochemically highly complex trait, which involves the interaction of sugars, acids and aroma. Aroma, as a very important component of flavor, is receiving more and more attention. The aroma of the fruit is a complex mixture of more than 1000 volatile compounds, including esters, aldehydes, terpenes, alcohols, carbonyl compounds, and some sulfur compounds [1,2], and the type and relative proportion of each aroma component determine the specific aroma of different fruit. Although intensive research has been conducted on the physiological and biochemical analysis of fruit aroma, the molecular genetic basis and inheritance of aroma patterns are still unknown.
However, biosynthetic pathways leading to the formation of plant flavor volatiles such as esters, alcohols and aldehydes have been thoroughly investigated [3]. As judged by either quality or quantity, the main fruit volatiles are substantially derived from saturated and unsaturated fatty acids. Straight-chain alcohols, aldehydes, ketones, acids, esters and lactones are primarily formed via fatty acid oxidation by the lipoxygenase pathway and by α- and β-oxidation. The lipoxygenase pathway of fatty acids involves at least four enzymes, namely lipoxygenase (LOX), hydroperoxide lyase (HPL), alcohol dehydrogenase (ADH) and alcohol acetyl transferase (AAT).
LOXs are lipid-oxidizing enzymes, a type of non-haem iron-containing dioxygenase, which is ubiquitous in the animal and plant kingdoms [4]; they are even found in fungi [5] and bacteria [6]. LOX is a common plant lipoxygenase that oxidizes linoleate and alpha-linolenate, the two most common polyunsaturated fatty acids found in plants. The patterns of LOX gene expression vary according to the tissue and stage of development [7][8][9][10].
According to enzyme classification, LOX is defined as a linoleate:oxygen oxidoreductase, which catalyzes the addition of oxygen to (Z,Z)-1,4-pentadiene structural units of unsaturated fatty acids to produce unsaturated fatty acid peroxides [11]. It is a multifunctional enzyme, involved in at least three different types of catalytic reaction: 1) oxidation of the lipid double bond (peroxidase reaction); 2) a secondary lipid peroxide conversion (hydroperoxidase reaction) [12]; 3) formation of epoxy leukotrienes (leukotriene synthesis reaction) [13].
It is known that in complex eukaryotes LOXs are generally encoded by a multigene family [10,14]. With the rapid development in sequencing and functional genomics research, LOX genes are being identified, cloned and studied in more and more plant species.
For example, six lipoxygenases were reported in the model plant Arabidopsis thaliana [15]. Studies on cucumber revealed that expression of 13 out of the 23 LOX genes can be detected using RT-PCR. Twelve genes were differentially expressed in response to abiotic stresses and plant growth regulator treatments [16]. Analysis of the grape (Vitis vinifera L.) genome revealed a LOX family consisting of 18 individual members [17]. A shared polyploidy relationship between Glycine max and Medicago truncatula was revealed by analysis of this gene family [18]. There are three different types of LOX in soybean: LOX1, LOX2, and LOX3; the LOX2 isozyme is primarily responsible for the "beany" flavor of most soybean seeds. A single nucleotide-amplified polymorphism (SNP) marker was found to identify the lack of the LOX2 isozyme, and can be used to assist breeding and selection in this species [19]. A total of 25 LOX genes were identified in apple by mining the whole assembled apple genome [20].
With the intention to extend the knowledge of the formation of aroma-related volatiles and to understand the structure and evolutionary history of the LOX gene family, we investigated this gene family in pear (Pyrus bretschneideri) and compared pear LOXs with those of four other fully sequenced Rosaceae species (Malus × domestica, Fragaria vesca, Prunus mume and Prunus persica). Structure analysis, synteny analysis, phylogenetic analysis and positive selection analysis were conducted on LOX homologous genes and the effects on function are discussed. Using the pear genome information, we also utilised the transcriptome dataset of pear fruit to verify our results.
23 LOX genes were identified in pear, with 36 in apple, 18 in mei, 16 in peach and 14 in strawberry. The chromosome number of apple and pear is 34, compared with 16 in peach and mei, and 14 in strawberry. The number of LOX genes in pear and apple is almost double the number in peach and mei. Pear and apple are the most important economic crops in the Rosaceae and belong to the Maloideae, while mei and peach belong to the Prunoideae, which do not show the expansion of the LOX gene family. The species tree of the five Rosaceae species shows the whole genome duplication in the progenitor of pear and apple (Figure 1).
The pear LOX gene family has 23 members, which are distributed unevenly in the genome. Chromosomes 7, 9 and 10 each have 1 LOX gene, chromosomes 2, 5, 11 and 16 each have 2 LOX genes, and 4 genes are located on scaffolds. Most notable is the presence of 8 LOX genes on chromosome 4. We can therefore infer that chromosome 4 has gone through a segmental duplication and that the LOX gene family lies in that segment.
We also noticed that Pbr020432.1 (Chr4: 1151461-1156749) and Pbr020415.1 (Chr4: 1294295-1299591) encode proteins of the same length (Additional file 1: Table S1), with the same pI (isoelectric point) and MW (molecular weight). After further analysis, we found out that these two sequences are identical. We included all the 25 LOX genes described in the analysis of the LOX gene family in apple [20], with the exception of MDP0000312394, which did not have the complete domain structure.
Phylogenetic analyses
In order to investigate phylogenetic relationships and the molecular evolutionary history of the sequences in these five Rosaceae species (pear, apple, peach, mei and strawberry), a phylogenetic analysis was conducted and a phylogenetic tree was generated using the neighbor joining (NJ) method (Figure 2); the maximum-likelihood (ML) method was also performed in MEGA5. The tree remained consistent using these two methods. A bootstrap test with 1000 replicates was used to assess the confidence of the tree. The NJ tree showed that these 107 sequences clustered into 3 main groups, and the ML tree confirmed this result.
We can determine from this phylogeny that the LOX genes were present prior to the divergence of these five species, since all the branches have LOX genes from different species rather than each branch having only the genes from a single species. All the LOX genes fall into two major clades: Type1 and Type2. The first group corresponds to type1 LOX, and groups 2 and 3 correspond to type2 LOX. We named the pear LOX genes according to the published studies on apple [20,26]. The genes were divided into 9 subfamilies, while LOX4 is not included in our research since this gene is not really a LOX gene, a conclusion supported by Vogt (2013). However, we identified an additional cluster of LOX genes and named it LOX10.
Structure and biochemical features of LOX genes
The structure of the LOX gene sequence comprises two domains, the "Lipoxygenase" domain PF00305 in Pfam (http://pfam.xfam.org/), and domain PF01477, which is named PLAT (Polycystin-1, Lipoxygenase, Alpha-Toxin) or LH2 (Lipoxygenase homolog). We identified 107 LOX genes that all contain the functional domain PF00305. However, 12 out of 107 did not have the PLAT/LH2 domain, which is found in a variety of membrane- or lipid-associated proteins (Additional file 1: Table S1). All the LOX homologous genes in pear have both domains. In order to examine the motif structure in pear, we constructed a six-motif figure by submitting the genomic data and the coding sequence data of the LOX genes of pear to the PIECE and MEME websites (Figure 3). All the LOX genes have all six motifs, except Pbr016256.1, which is missing one motif at the beginning of the sequence (Figure 3a). The MEME results (Figure 3b) also confirmed this. The protein encoded by Pbr016256.1 has 734 AAs, which is the shortest in the family. However, Pbr016256.1 is expressed at all six stages (Table 2) in our transcriptome research, so it is a functional gene.
Collinearity analyses
We found 72 paired collinearity relationships from these 107 sequences, of which 58 pairs are collinear between pear and other species (Table 3). All the collinearity relationships are formed by WGD or segmental duplication according to the result of MCscan. This corresponds to the chromosome collinearity between pear and apple.
The three pear genes Pbr004541.1, Pbr004568.1 and Pbr020415.1 have a collinearity relationship with LOX genes in all the other 4 species used in this analysis. We used Pbr020415.1 as an example to show the collinearity relationship of different species (Figure 4a).
Collinearity relationships were also found within the same species (Table 4): 7 pairs within pear, 6 pairs within apple and 1 within strawberry. Pbr004005.1 on scaffold1189.0 is collinear with Pbr004008.1 and Pbr035580.1. All the 4 pairs of LOX paralogous genes in pear were shown to have a collinear relationship. An illustration of the collinearity relationship of LOX genes on the pear chromosomes is given in (Figure 4b). This collinearity occurs in either the sense or antisense direction.
Expression of LOX genes in pear
We sequenced RNA from mixed pear fruit samples to obtain transcriptome data from six stages of fruit development (15 days after flowering (DAF), 36 DAF, 80 DAF, 110 DAF, 145 DAF, 167 DAF) (Table 2). Of the 23 LOX homologous genes identified in pear, 18 were found to be expressed in the fruit, while Pbr004005.1, Pbr004008.1, Pbr020412.[…]

The phylogenetic tree was generated using the neighbor joining (NJ) method in MEGA5. A bootstrap test with 1000 replicates was used to assess the confidence of the tree. The bootstrap values of confidence level, as percentages, are given at branch nodes.
double the average value. Therefore, most LOX genes are functionally active. We also examined the expression levels of these genes (Figure 5). The results showed that the expression of LOX genes in pear was usually low at the early development stage (15 DAF), increased to a peak at the middle development stage (80 DAF or 110 DAF), reduced to near zero just before ripening (145 DAF), and then increased slightly at ripening (167 DAF). This is believed to correspond to the pattern of changes in the volatile components of pear fruit aroma.
Discussion
The LOX gene family is involved in the development of various plant organs, in the fruit ripening process and in the response to abiotic stresses, and is also involved in the synthesis of aldehydes and alcohols. Most importantly, it plays a key role at the early stage in the synthetic pathway for fruit aroma compounds and therefore in the development of fruit flavor.
By using the MSV algorithm in HMMER3, we searched the whole genome sequences of five Rosaceae species; a total of 107 LOX homologous genes were identified. The LOX gene family existed before the divergence of these five species. Phylogenetic analysis, structure analysis, collinearity analysis, and transcriptome expression analysis were conducted on all the 107 LOX genes. In summary, this study provides a characterization of the LOX gene family in pear and the evolutionary history of LOX in the Rosaceae.
(Figure 3 legend, continued) Pbr016256.1 has one motif missing; all the others have six motifs. b. Logo of motifs in the LOX genes of pear using MEME. The line represents the coding sequence and six motifs are shown. All the results were obtained from the amino acid sequences. The total number of pear LOX genes is 23; 'sites' indicates how many of them contain the motif and 'width' is the length of the motif. The maximum motif width was set as 50 in this study.

36 LOX genes were identified in apple and 23 genes in pear, 18 in mei, 16 in peach and 14 in woodland strawberry. The number of LOX genes in pear and apple was almost double the number in mei, peach and strawberry. In pear we identified 23 LOX genes, which is a comparatively large gene family. The LOX gene family has gone through an expansion which corresponds to the variation in chromosome number. For example, the chromosome number of apple and pear is 34 compared to 16 in peach and mei, and 14 in strawberry. Considering that a recent WGD (whole genome duplication) event was shared by apple and pear at 30-45 MYA (million years ago) (Figure 1) [25],
LOX gene family expansion in apple and pear is believed to have occurred at that time.
From the observation of the phylogenetic tree (Figure 2), the LOX genes of pear from the same chromosome (chromosome 4 and chromosome 11) cluster together, which indicates the similarity of the genes on the same chromosome. Also, the duplicated copies on chromosome 4 (Pbr020432.1 - Pbr020415.1, Pbr020435.1 - Pbr020412.1), chromosome 11 (Pbr004541.1 - Pbr004568.1) and scaffold1189.0 (Pbr004005.1 - Pbr004008.1) of the pear genome are paralogous genes. The two pairs of LOX genes located on chromosome 4 (Additional file 1: Table S1) have the same sequence and their order on the chromosome is the same. All these 4 pairs of genes are distributed on the same chromosome or scaffold, and from comparison of their positions on the chromosome it can be concluded that tandem duplication or segmental duplication has happened in this area of the chromosome. Pear LOX genes showed an obvious trend of expansion, with duplication as the most common occurrence. It can be concluded that a tandem duplication or segmental duplication has happened in the pear genome. However, the specific type of duplication is still unknown.
In addition, transcriptome data showed that 18 of the LOX genes in pear were expressed. This confirms that these genes are functionally active, with 10 of them being expressed in all six stages of fruit development. Though one motif was missing, gene Pbr016256.1 is still expressed in our study at all six stages; this suggests that the loss of this N-terminal domain does not affect its function. The change in expression level shown in Figure 5 is believed to correspond to the change in aroma volatile components in the fruit. The aroma is very low at the early stage of fruit development and then increases significantly during the middle stage of development; subsequently the aroma substances are converted to volatile components. Fruit immediately prior to and during ripening have a low level of LOX expression, but because of the accumulation of precursors at these developmental stages the fruit keeps emitting an aroma.
Collinearity analyses showed that 331 blocks and 9257 gene pairs have a collinear relationship between pear and apple. Also, the chromosomes in pear are collinear to each other, a conclusion confirmed by the pear genome sequencing project, which demonstrated strong collinearity between segments of several chromosomes. In addition, we discovered that a segment of chromosome 4 of the pear genome is believed to be collinear with chromosome 6 in the peach genome. The LOX gene family members on chromosome 4 of pear are collinear with ppa001216m on chromosome 6 of peach. All these data indicated the collinear relationship of these two chromosomes.
The present analysis included all the LOX genes previously described in apple [20]; only one was not analysed because it lacked the complete domain structure. We obtained identical phylogenetic results to those of the previous analysis in apple. Furthermore, we also identified a new cluster of LOX genes which was not described in the previous study.
Currently, research studies on fruit volatile substances are focused on the analytical determination of volatile aromatic substances and on how the various cultivation practices and storage conditions affect the aroma of the fruit. Most of this research is focused on the European pear (Pyrus communis), with few reports on the Asiatic pear. There is still little knowledge of the molecular mechanism of the volatile aromatic substances. However, in this paper we used the genome of the Asiatic pear Pyrus bretschneideri, which is the first pear genome to be sequenced.
We aimed to study the variation in gene sequence structure and to analyse the phylogenetic evolutionary relationships of the LOX gene family in the Rosaceae. This information can be used to guide future research on the evolutionary history of LOXs and associated transgenic research. This study of LOX genes is one part of the whole analysis of the aroma of pear fruit. Future work will focus on the identification of genes of the aroma pathway and the interaction network between these genes.
Conclusions

107 genes were identified as LOX homologous genes from five Rosaceae species; 23 of these sequences were from pear. Specific gene duplication was found on chromosome 4 in the pear genome. All the LOX genes fall into three groups according to phylogenetic analysis. The first group corresponds to type1 LOX, and groups 2 and 3 correspond to type2 LOX. Of the 23 LOX homologous genes identified in pear, 18 were found to be expressed in the fruit. The expression pattern across the six stages was found to correspond to the pattern of changes in the volatile components of pear fruit aroma. The research on the LOX gene family could lead to a better understanding of the molecular mechanism of aroma in fruit. Moreover, the findings of this study may facilitate research on the evolutionary history of, and transgenic research on, LOX genes and other aroma genes.
Sequence identification and collection
By searching "Lipoxygenase" in Pfam (http://pfam.janelia. org/), seeds of Lipoxygenase domain PF00305 were downloaded to our local server. By employing the "multiple segment Viterbi" (MSV) algorithm; implemented in HMMER3 software package [27], we searched for the LOX homologous genes in fully sequenced genome of five Rosaceae species, namely peach (Prunus persica), apple (Malus × domestica), woodland strawberry (Fragaria vesca), mei (Prunus mume) and pear (Pyrus bretschneideri). 128 genes were obtained with an E value < 1e-10. After filtering according to length, 19 sequences were removed due to their shortness, and two sequences were removed for not having the LOX domain tested by SMART (http://smart.embl/ heidelberg.de/). Finally 107 LOX homologous genes were identified in this study.
Phylogenetic analysis
Coding sequence alignment was performed using MUSCLE (Multiple Sequence Comparison by Log-Expectation) with default parameters in Molecular Evolutionary Genetics Analysis (MEGA5) [28]. The neighbor joining trees were constructed with 1000 bootstrap replicates using MEGA5. Maximum-likelihood trees with 1000 bootstrap replicates were also generated using MEGA5 with default settings. The tree of the five Rosaceae species was obtained by downloading data from NCBI (http://www.ncbi.nlm.nih.gov/): Common Tree in the Taxonomy section. The tree was constructed using MEGA5.
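The neighbor-joining step itself can also be reproduced outside MEGA5; a hedged sketch using Biopython on a pre-computed alignment (the file name is a placeholder, and this does not replicate MEGA5's distance model or the bootstrap procedure):

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Read a MUSCLE alignment of the LOX sequences (placeholder file name).
alignment = AlignIO.read("lox_alignment.fasta", "fasta")

# Simple identity-based distance matrix and neighbor-joining tree.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)

# Quick text rendering of the resulting topology.
Phylo.draw_ascii(nj_tree)
```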
Motif analysis of the LOX genes and proteins
Motif analysis was conducted on the website of the Plant Intron Exon Comparison and Evolution database (PIECE, http://wheat.pw.usda.gov/piece/FAQ.php) [29]. Coding sequences were plotted against the genomic sequences to generate the figure, with the number of motifs set to six. Based on the Pfam motif, a phylogenetic tree was reconstructed for each gene category by integrating exon-intron and protein motif information. We also combined this database with the MEME (http://meme.nbcr.net) web server to draw motifs of the sequences, and the information on the motifs was collected.
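A hedged sketch of the corresponding MEME run, invoked here through Python with standard MEME command-line options; the input file name and output directory are placeholders, and only the motif number (6) and maximum width (50) follow the settings reported for this study:

```python
import subprocess

# Run MEME on the pear LOX protein sequences with six motifs and a maximum
# motif width of 50, writing results to an output directory.
subprocess.run(
    [
        "meme", "pear_lox_proteins.fasta",  # placeholder input FASTA
        "-protein",                          # protein alphabet
        "-nmotifs", "6",                     # six motifs, as in Figure 3
        "-maxw", "50",                       # maximum motif width set to 50
        "-oc", "meme_lox_out",               # placeholder output directory
    ],
    check=True,
)
```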
Basic data about the LOX proteins were calculated as follows: amino acid number (aa number), molecular weight (MW) and isoelectric point (pI).
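These per-protein statistics can be reproduced with standard tooling; the following is a small illustrative Python sketch using Biopython's ProtParam module, with the input file name assumed (it is not the authors' script).

```python
from Bio import SeqIO
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical FASTA file containing the 107 LOX protein sequences.
for record in SeqIO.parse("lox_proteins.fasta", "fasta"):
    sequence = str(record.seq).replace("*", "")  # drop any trailing stop symbol
    # Note: sequences containing ambiguous residues (e.g. X) may need extra handling.
    analysis = ProteinAnalysis(sequence)
    print(
        record.id,
        len(sequence),                           # amino acid number
        round(analysis.molecular_weight(), 1),   # molecular weight (Da)
        round(analysis.isoelectric_point(), 2),  # isoelectric point (pI)
    )
```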
Collinearity analysis
The whole genome sequences of the five Rosaceae species were downloaded to our local server. Then MCscan (Multiple Collinearity Scan) [30] was used to obtain the collinearity relation of each pair of species. The resulting collinearity chains were evaluated using a procedure in ColinearScan and an E-value < 1e-10 was used as the cutoff.
Transcriptome sequencing
To examine the expression of pear genes, RNA extracted from pear (Pyrus bretschneideri) fruit was used for transcriptome sequencing. During the production of sequencing libraries, small RNAs ligated with adaptors were used to run RT-PCR. After that, the products were purified and sequenced on an Illumina Hi-seq 2000 Sequencer.
Availability of supporting data
The data sets supporting the results of this article are included within the article (and its additional files), and the raw RNA-seq reads are available in the National Center for Biotechnology Information repository under accession PRJNA185970 (http://www.ncbi.nlm.nih.gov/bioproject/PRJNA185970).
Additional file
Additional file 1: | 2017-06-09T02:25:19.623Z | 2014-06-07T00:00:00.000 | {
"year": 2014,
"sha1": "cb905496bb3e61a6f03b814492f770063905170c",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-15-444",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4f58937192463b427591c0a1a6eda2f8166db82",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
225557126 | pes2o/s2orc | v3-fos-license | EXIT/ENTRY, VOICE/NOISE, AND LOYALTY/APATHY IN THE ERA OF SOCIAL MEDIA: IMPACT OF SOCIAL MEDIA ON THE PUBLIC SECTOR
Use of Social Media in public life has changed the way citizens relate to the public sector. Modern communication tools, in particular Social Media, have made it easier for citizens to use their "voice" to mobilize. When citizens can easily mobilize, the cost of mobilization is low for them, while its impact can lead to a larger cost for the State. As the exit/entry cost of Social Media is very low or almost nothing, a virtual network has been substituting for institutions, causing new issues for the State. This leads to the issue of loyalty: citizens now feel that they do not need an institution like the State to belong to, as many networks substitute its function. This paper analyses the use of Social Media by citizens and its impact on the public sector through Albert Hirschman's classic "Exit, Voice, and Loyalty" and tries to address new aspects.
Introduction: Brexit as Fruit of Social Media
Use of Social Media in public life has changed the way citizens relate to the public sector. Modern communication tools, in particular Social Media, have made it easier for citizens to use their "voice" to mobilize. When citizens can easily mobilize, the cost of mobilization is low for them, while its impact can lead to a larger cost for the State. At the same time, use of Social Media has increased the noise among the voices; however, these noises are often so well elaborated and inserted into the voices that they are difficult to identify. This again increases the cost for the State.
Social Networks enable people to be part of a system without being physically bound to a certain geographical area, changing the exit/entry aspect. "Voices" of outsiders are noises for the State and cause problems, and citizens can use this effect as a strategy. A modern citizen can now physically stay in a territory while deciding to "exit" from the system. As the exit/entry cost of Social Media is very low or almost nothing, a virtual network has been substituting for institutions, causing new issues for the State. This leads to the issue of loyalty: citizens now feel that they do not need an institution like the State to belong to, as many networks substitute its function.
It is widely argued that the Brexit Referendum can be interpreted as a fruit of social media. In order to map Twitter's info-sphere and examine "Leave" (Eurosceptic) and "Remain" (pro-European) activity on Twitter in the run-up to the referendum, Hänska and Bauchowitz (2017) collected more than 7.5 million Brexit-related tweets in the month preceding the Referendum in 2016. They asked whether there was a relationship between Twitter activity and the actual vote, what kind of information was shared on Twitter, and whether Leavers and Remainers were confined to echo chambers, which kept feeding them information congenial to their views, or whether the two sides engaged openly with one another.
Their analysis showed Twitter users who supported leaving the EU were more numerous, and Eurosceptic users in general were more active (they tweeted more frequently) than Remain users (Hänska and Bauchowitz, 2017, p.29). They estimate Leave users were more numerous and more active on Twitter by a factor of 1.75-2.3. Other researchers examining Google search trends, Instagram posts and Facebook found similar patterns of Eurosceptic views being communicated with greater intensity by a greater number of users on those platforms (Herrman, 2016; Polonski 2016).
Hänska and Bauchowitz point out that local authority districts with a greater share of Twitter users supporting Leave tended to vote for leaving the EU, so Twitter activity correlated with voting in the Referendum (Hänska and Bauchowitz, 2017). This, of course, does not mean that an analysis of Twitter activity could have predicted the Referendum. It is also not clear how the Leave margin on Twitter should have been interpreted prior to the Referendum, even with such a robust observation of more pronounced Eurosceptic activity. After all, the factor by which Leavers outnumbered and out-tweeted Remainers was much larger than the margin with which Leave won the vote.
The duo also analysed the nature of openness and homophily on Twitter, which crucially affords users the ability to interact and engage with each other. They examined the extent to which users who supported Leave and Remain interacted with each other, that is, for instance, whether a user who supported leaving the EU replied to, quoted or retweeted a user who supported remaining in the EU. They found Leave users tended to be less open and mostly engaged with other Leave supporters, indicating important hallmarks of an echo chamber. In contrast, Remain supporters were much more open. Specifically, 83 per cent of interactions initiated by Leave supporters were with other Leave supporters, while for Remain supporters this figure drops to 46 per cent. Remainers replied to, retweeted or quoted Leavers 49, 39 and 50 per cent of the time, respectively. Contrast this with Leavers, who replied to, retweeted or quoted Remainers only 19, 8 and 11 per cent of the time, respectively. This tendency to interact only with the like-minded is also reflected in the URLs shared. Leave users tended to share Eurosceptic domains, including The Express, the Daily Mail, and Breitbart. Leave users also linked more frequently to Bloomberg and Reuters than Remainers. Remain users tended to share links to The Guardian, BBC, The Independent, and less frequently The Mirror, The Financial Times, and The Economist. Overall, the most frequently linked domains were The Guardian, YouTube, BBC, and The Express (Hänska and Bauchowitz, 2017, p.30). YouTube was the second most prominent domain linked, indicating the importance of video as a way of distributing information about the campaign.
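To make the kind of homophily measure reported here concrete, the share of in-group interactions can be computed from a table of classified interactions. The following is a toy Python sketch on hypothetical records, not the authors' code; the data structure and labels are assumptions for illustration.

```python
# Each record is (initiating_side, target_side, interaction_type),
# where side is "leave" or "remain" and type is "reply", "retweet" or "quote".
interactions = [
    ("leave", "leave", "retweet"),
    ("leave", "remain", "reply"),
    ("remain", "leave", "quote"),
    ("remain", "remain", "reply"),
    # ... millions more records in the real dataset
]

def in_group_share(records, side, interaction_type=None):
    """Share of a side's initiated interactions that stay within the same camp."""
    own = [
        r for r in records
        if r[0] == side and (interaction_type is None or r[2] == interaction_type)
    ]
    if not own:
        return float("nan")
    return sum(1 for r in own if r[1] == side) / len(own)

for side in ("leave", "remain"):
    print(side, "overall in-group share:", in_group_share(interactions, side))
    for kind in ("reply", "retweet", "quote"):
        print("  ", kind, in_group_share(interactions, side, kind))
```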
Overall, Twitter users who supported leaving the EU were much more active and motivated in advancing their cause than Remainers were in advocating continued EU membership. One possible explanation of the dominance Leavers achieved on Twitter may be that slogans such as 'vote Leave', 'take control', or even 'Brexit' were more suited to simple, soundbite messaging than the Remain campaign's slogans and arguments (which is particularly useful given the character constraints of a tweet). Press coverage of the Referendum also favoured leaving the EU. Weighted for circulation, 82 per cent of newspaper articles in the lead-up to the Referendum supported leaving the EU, as other contributors to this book have noted (Deacon 2016). The balance of Eurosceptic information, views and opinion on Twitter thus appears to lean in the same direction as the balance of information in the press, meaning that both online and offline citizens were more likely to encounter Eurosceptic voices (Hänska and Bauchowitz, 2017, p.30). The frequent and aggressive use of Twitter by Leavers during the Brexit Referendum campaign is quite interesting, considering the voting results by age group. While 71% of voters between 18 and 24 years old and 54% of those between 25 and 49 voted Remain, 60% of voters between 50 and 64 and 64% of those over 65 voted Leave (YouGov, 2016); thus younger voters tended to vote Remain and older voters tended to vote Leave. Social media use is strongly correlated with age group, the younger generation being the more active users of social media. Indeed, research by the London School of Economics and Political Science (2017) shows that the largest demographic group of Twitter users in the UK is between the ages of 18 and 29 (37%), and 25% of users are between 30 and 49 years old. However, the analysis of Hänska and Bauchowitz shows that the Leavers were much more aggressive in using Twitter.
The questions arising from this study are as follows: 1) were those who aggressively used Twitter for Leave the voters who actually voted Leave?; 2) if so, is it possible that the frequent and aggressive use of Twitter by Leavers was mainly conducted by a rather small number of young Leavers?; 3) to what extent were the older Leave voters influenced by Twitter-transmitted messages?; 4) if those Leave voters were not reading, nor influenced by, pro-Leave tweets, how can we explain the closed characteristics of Leaver tweets?; and 5) after all, as much media coverage as well as research has suggested, can we still confirm that the result of the Brexit Referendum was influenced by social media?
Methodology and Design of the Research
In order to respond to these questions in general terms, not just in the Brexit case, and to understand the use of social media in the public sector, this paper explores the characteristics of social media in the public sector through a literature review as well as an empirical study. Most of the literature on the topic investigates the issue either in a highly theoretical or conceptual manner, or through empirical cases like the research on the Brexit campaign in the introductory section (Hänska and Bauchowitz, 2017). However, the gap between theoretical/conceptual literature and research on cases is so wide that it is not easy to understand the latter with a theoretical framework and vice versa.
Thus the paper first explores several existing theories and concepts through the literature, and then analyses the case of the UK government. Regarding the case study, which is qualitative research, the author examined government documents, including policy papers, white papers, and national plans, while interviewing key actors. The author and her research partners conducted about 30 semi-structured interviews with the key actors between November 2018 and November 2019. The interviews were conducted without recording but with detailed transcriptions, in order to encourage interviewees to express their opinions and views freely.
The aim of this research is to investigate how social media impacts public policy making as well as social life. The research approach is a single case study of the UK government (Yin, 2014). Data were collected from two sources: semi-structured interviews with key actors and written documents available in the public domain. Case study research is appropriate for this research as it makes use of multiple sources of evidence in order to create a picture of the phenomenon under investigation, and it is methodologically appropriate when exploring complex issues, those that occur over an extended time period (Gratton & Jones, 2010), or when researchers have little or no influence on the event being studied (Yin, 2014), as in this research. Document analysis is appropriate in this case, as documents are a rich source of data and in this instance they provided valuable primary data. Documentary analysis of strategic plans, policy documents, and government reports contributed to the understanding of the case study in three ways.
First, the document analysis allowed the context for the case study to be understood, prior to the interviews. It also provided a historical account of public policy in the UK. Finally, using document analysis also allowed for triangulation of information obtained through the interviews.
Information used in this paper is based on the interviews conducted with the following, among others: The paper is part of the results of a research project on "Improving operational efficiency in manufacturing and physical distribution sites through negotiations using AI", awarded under the "2nd Cross-ministerial Strategic Innovation Promotion Program (SIP), Cyberspace fundamental technology utilizing big data and AI", a Japanese government project, and of a research project on Big Data and Open Data in relation to evidence-based policy making in the area of sport policy, awarded by the Japan Society for the Promotion of Science (JSPS) and entitled "Research on sport policy making based on Big Data: Olympic Games as a trigger" (Research ID: 18H00819, 2018-2023).
Social Media in Public Administration and Social Media for Public Policy: Theoretical background and concepts
Why has social media become important for public administration and in public policy making? Before answering this question, some key concepts should be clarified.
Social media is the collective of online communication channels dedicated to community-based input, interaction, content-sharing and collaboration. Websites and applications dedicated to forums, microblogging, social networking, social bookmarking, social curation, and wikis are among the different types of social media. Social media has several characteristics; among them, social media are interactive Web 2.0 Internet-based applications. Some examples of popular social media platforms are: Facebook, Twitter, YouTube, Instagram, LinkedIn, Reddit, Pinterest, and WhatsApp. As users engage with these services, they create highly interactive platforms through which individuals, communities, and organizations can share, co-create, discuss, participate in and modify user-generated content or self-curated content posted online. Networks formed through social media change the way groups of people interact and communicate or stand with the votes. They "introduce substantial and pervasive changes to communication between organizations, communities, and individuals" (Kietzmann and Hermkens, 2011).
In business, social media has been used to market products, promote brands, connect to current customers and foster new business. In terms of customer feedback, social media makes it easy for customers to tell a company, and everyone else, about their experiences with that company. The business can also respond quickly to feedback, attend to customer problems and maintain, regain or rebuild customer confidence. Social media is also often used for crowdsourcing. In ICT projects, crowdsourcing usually involves engaging and blending business and ICT services from a mix of internal and external providers, sometimes with input from customers and/or the general public. Other B2B applications of social media include social media analytics, the practice of gathering data from blogs and social media websites and analysing that data to make business decisions. The most common use of social media analytics is to mine customer sentiment to support marketing and customer service activities.
Internally, social tools can help employees access the information and resources they need to work together effectively and solve business problems. Externally, social media platforms help an organization stay close to its customers and make it easier to conduct research to improve business processes and operations. The integration of social media in business can also pose challenges. Social media policies are designed to set expectations for appropriate behaviour and ensure that an employee's posts will not expose the company to legal problems or public embarrassment. Such policies include directives for when an employee should identify himself or herself as a representative of the company on a social networking website, as well as rules for what types of information can be shared; this applies to public institutions as well.
Social media deals with all types of data posted, shared, and analysed by its users. Data comprises facts, observations and raw information. Data are, indeed, forms of information. The concept of data is worthy of book-length explication (Borgman, 2016); however, in order to explore how data are created, used and understood, it might be enough to define it by examples, such as facts, numbers, letters, and symbols (National Research Council, 1999). Data itself has little meaning if it is not processed (Monino and Sedkaoui, 2016). Indeed, some interviewees confessed that data collected without a clear design proved to be useless as information, because of this characteristic. Information, indeed, consists of interpreted data and has discernible meaning. It describes and answers questions like "who?", "what?", "when?", and "how many?" (Monino and Sedkaoui, 2016).
Data posted, shared, and analysed on social media platforms are open. As members continuously create and exchange data on those platforms, they are, indeed, Big Data. Open Data refers to the principle according to which public data (gathered, maintained and used by government institutions) should be made available to be accessed and reused by citizens and businesses, while Big Data is used when the amount of data that an organization has to manage reaches a critical volume that requires new technological approaches in terms of storage, processing, and usage. Volume, speed, and variety are usually the three criteria used to qualify a database as "Big Data" (Monino and Sedkaoui, 2016). Openness is a trend which has changed relationships among stakeholders in all sectors (Borgman, 2016). Open models of government, standards, data, services, and collaborative production of knowledge have contributed to this transformation. Openness is claimed to promote the flow of information, the modularity of systems and services, and interoperability (Borgman, 2016). As Open Government Data has increasingly become a set of policies that promotes transparency, accountability and value creation by making government data available to all (OECD, 2013; Ubaldi, 2013), the use of social media has also become important to public bodies, although not so many institutions clearly and publicly mention the issue (Balcells et al., 2015; Bryer and Zavattaro, 2011; Karakiza, 2015; Taylor, 2017; Zavattaro, 2013). Their major concern is still Open Data. By making their datasets available, public institutions are believed to become more transparent and accountable to citizens. By encouraging the use, reuse and free distribution of datasets, governments are expected to promote business creation and innovative, citizen-centric services.
Data governance constitutes a framework of quality control for the management and protection of key information resources within an institution. Its mission is to ensure that the data is managed in accordance with the values and convictions of the institution, to oversee its quality, and to put mechanisms into place that monitor and maintain that quality. Data governance includes data management, oversight, quality evaluation, coherence, integrity and ICT resource security within an institution (Monino and Sedkaoui, 2016).
Theoretically, the importance of co-creation with citizens and/or users can be explained through the New Public Management (NPM) concept. Information and Communication Technology (ICT) was introduced into public administration along with other new managerial techniques, especially under the NPM concept in the Nineties. With NPM, the use of ICT started to focus on the managerial processes of public administration. Various managerial tools enabled by ICT were introduced to improve the speed and transparency of administrative procedures. Exchange of documents and elaboration through multiple actors became easier, thus improving interaction and collaboration among stakeholders. Not only internal managerial issues, but also public service delivery utilizing and benefitting from ICT, especially web-based technologies, became popular (Kudo, 2018). Many former counter services were transformed into on-line services, making it possible for citizens to access information as well as public services directly (Alford and O'Flynn, 2012). E-Government has been challenged by "digital era governance", which goes beyond the NPM (Dunleavy et al., 2006).
In this view, all stakeholders are related in a public governance network. The introduction of New Public Governance (NPG) in public service delivery is an important turning point, as a concept as well as a practice. Citizens and communities are invited to participate not only in the decision-making process, but also in the service delivery process, thus realizing co-design, co-creation, and co-production (Granier and Kudo, 2016; Kudo, 2018). They are redesigning the structure of service delivery.
Digital services of governments have become an important aspect of technology- and/or innovation-driven public services. This concept as well as practice was enabled through various elements, including co-design and co-production with citizens and other stakeholders, and digital technologies enabling data analytics and thus better design of services based on data and evidence. NPG helped the realisation of co-production with citizens and other stakeholders, while NPM encouraged ICT to be an effective and efficient instrument of government (Kudo, 2018). Many of the digital services are not only a result of technological innovation and advancement, but also a product of institutional reform and revolution. ICT, per se, is not a solution, but could offer and become an opportunity.
In line with this theoretical evolution of public sector governance, Social Media, Open Data, and Open Government Data have become essential to government institutions, not only for their innovation but also for the possible realisation of co-design and co-production with citizens and other stakeholders. Indeed, this research focuses on the topic for this very reason.
UK Approach to Social Media in Government
The UK government has committed to Open Government as well as use of Social Media. Regarding the first, the UK's fourth National Action Plan for 2018-2020 was launched in 2018 and was developed in collaboration with the UK Open Government Network (OGN), a coalition of active citizens and civil society organisations committed to making government and other powerful institutions work better for people through enhanced transparency, participation and accountability.
The NAP sets out various commitments in line with the Open Government Partnership values of access to information, civic participation, public accountability, and technology and innovation.
In terms of Social Media, the government is aware of recent behaviour changes: almost every internet user can now be reached via social media; social media use continues to grow rapidly; all eyes turn to mobile-first social videos as 60% of internet users primarily use social sites to find entertainment; social media's role in the purchase journey expands; one in three minutes online is spent on social networking, an increase across all markets and major demographics; social overtakes search for 16-24 year olds, and this demographic is turning to social as the preferred channel for brand discovery and research; social networks serve more ads, while consumers block them. The rise of ad blocking is a loud warning that citizens still resist and resent broadcast-style marketing tactics; and the future of online product research is visuals and voice, with pictures substituting for keywords (Hootsuite, 2018a).
Indeed, 64 percent of the UK population are active on social media, and they expect immediate access to information and real-time responses, even from public institutions. With this in mind, the UK government is steadfast in its commitment to building a digital infrastructure to keep up with citizen expectations. "The Value of Social Media in the UK Government", a Hootsuite White Paper, explores why governments need to adopt a citizen engagement and service delivery strategy that puts social media and digital first (Hootsuite, 2018b). The Paper argues that "the UK Digital Strategy 2017 policy paper (UK government, 2017) outlines a digital strategy for a digital economy, that will ensure the best digital connectivity for consumers and businesses alike; however, providing a seamless, efficient, and integrated online platform to better serve its citizens is only one part of the digital jigsaw for the UK government". Digital connectivity is now a utility, and modern life in the digital age is becoming impossible without it. And nowhere is this relentless march towards digitisation epitomised better than in social media. Today's population has social media networks at their fingertips every minute of the day. This means there is a disconnect between traditional approaches and the expectations of today's hyper-connected, always-on citizens. The UK government thus needs to expand its efforts, embracing a government-wide citizen engagement and service delivery strategy with social media and mobility (Hootsuite, 2018b).
The White Paper argues that never before has government had such a low-cost solution for engaging citizens. Maximising the investment in social media requires a strategic approach to social engagement. Government agencies that are doing it well have mastered a way to adopt a human-centric approach when engaging with citizens on social media. But government is still perceived as being the most annoying industry on social media. Government organisations that are looking to embrace the power of social media have to adopt tools to ensure good outcomes. Social media management solutions can simplify social engagement and pay for themselves by providing operational efficiencies, streamlined service delivery, and improved citizen satisfaction with government interactions (Hootsuite, 2018b).
The Paper explores the value of social media as an agent to drive efficient, effective, and citizen-centric engagement within the UK government and suggests four pillars: 1) citizen engagement. Listening to citizens is the first step to learning what's happening within communities. Tweets and Facebook events, polls, lists, and livestreams can be used to gauge and monitor citizen engagement and community concerns. Indeed, the UK government takes public safety seriously, using social tools to promote good citizen behaviour, advocate safety, and address sensitive subjects; 2) government service delivery. Social media can play a role in helping to reduce the public-sector resources consumed in the delivery of citizen services. Services like tax payments, permit applications, and license renewals, for instance, can be delivered as do-it-yourself services; 3) critical response communications. Social media can help spread the word quickly about a disaster or public emergency. Twitter and Facebook updates can alert citizens to power outages, terror alerts, and weather disasters in real time for immediacy and accuracy. First responders can monitor community activity to identify areas where citizen support is needed; and 4) government operations. Social media can create efficiencies in other areas of government operations. Social tools can attract and retain highly skilled workers where traditional recruitment fails. This can help enhance the employer brand of government agencies and help them compete against the private sector for fresh talent (Hootsuite, 2018b).
In order to use social media more effectively in government, the White Paper recommends the following points: 1) simplified social outreach. Social media engagement can be used to enhance agency outreach by delivering a unified communications strategy. A single platform for social management can centralise social outreach and engagement at the central and local levels for efficiency and ease of procurement; 2) consolidated social monitoring. Social listening helps agencies learn what's on the minds of constituents. Real-time posts give governments insight into relevant discussions. Listening to these discussions sets up agencies to respond to messages and comments across social channels, helping them resolve problems; 3) solid metrics to prove value. Measurement of success is key to proving the value of social and increasing the budget for social initiatives. To measure performance, agencies need to start by defining what success means to them. With a well-established definition of success in place, agencies can use social media management software to measure and demonstrate progress towards program objectives; 4) security and process. Few things can be more damaging to the perception of an institution than a negative social post that turns viral. A centralized social media management platform makes it easy to recognise and address potentially damaging social content. Staff access can be managed across departments with secure logins, publishing approval workflows, and flexible permission levels. This allows for consistency across departments, agencies, and locations, and ensures that posts are reviewed, approved, and compliant.
The use of social media in government aims to: 1) advance public sector missions. An effective social media campaign to communicate the goals and benefits of government initiatives can help constituents understand public policies (UK government, 2015); 2) streamline agency operations. Social media management tools can speed up government responses to emergencies to safeguard constituents and communicate in a timely manner. Governments can post emergency alerts and instant updates across several social channels from a single dashboard while monitoring responses and inbound communications; 3) reduce the cost of citizen outreach. A shared platform for social media campaigns can unify and speed up social interactions, reducing redundancies and making information more transparent and available for citizen and interagency consumption. Every government agency is at a different stage of the digital transformation. While social media is just one stage, it offers government agencies a relatively quick win in terms of enhancing the citizen experience. Government social media use satisfies citizen demand for real-time information, makes citizen engagement easier, and streamlines government operations. Using social media, governments can control costs, increase transparency, earn greater public trust, and create positive public sentiment. Effective social outreach requires that government users listen as much as they post, and that they respond quickly to comments and mentions.
"The State of Social Media in Government in 2018: Hootsuite's annual report on social media trends in government" (Hootsuite, 2018a) outlines five recommended areas of opportunity for social media in government; 1) drive government-wide efficiencies with social; 2) restore declining trust with peer influence; 3) combat brain drain with social-first recruitment; 4) rethink crisis communications in the wake of extreme weather conditions; and 5) build compliance into your social media strategy. It also deals with how to streamline and coordinate social media across departments and agencies and how a centralised social media strategy can help control costs, increase transparency, earn greater citizen trust, and create positive public sentiment.
The question that arises is whether social media is really such an almighty tool for government and public policy making, as well as for the citizen, as it has been described above, and whether it has any issues. Indeed, most of the interviewees, who were or have been directly involved in governmental digitalisation processes and had experience in dealing with citizens via social media, point out the following. First, as citizens mostly use social media for entertainment, there is still a strong hesitation for them to communicate with public institutions via social media, with the result that the voices reaching government via social media do not necessarily reflect the voice of the majority and thus do not represent the population (Mellon and Prosser, 2017; Nseke, 2018). Second, similar to data, the quality of inputs via social media varies, making it difficult to evaluate and treat them. Third, as is characteristic of social media, only sensational topics are raised and discussed, leaving many ordinary issues untouched and/or ignored; thus, institutions are aware that following only the voices on social media cannot improve public policy in general. Fourth, institutions often do not know how to interpret voices on social media. They are aware of their importance; however, voices are often confused with noises, which sometimes are stronger than the former, or institutions cannot reflect meaningful voices in public policy (Sloan et al., 2013). Lastly, given the characteristics of public services, which have to reach those who are in need, it is not easy to identify the silent needs of the many who are completely outside social media, for various reasons.
Some interviewees noted that, for several empirical cases, including parkrun and/or social prescribing, citizen engagement through social media has been vital to the projects. However, the successful cases are limited to those where citizen participation is spontaneous and independent, and where only active and engaged participants use social media to reach their objectives. For disaster and crisis management, social media can be an important tool to identify problems; however, in practice, many noises have caused problems during operations. So far, the use of social media in government remains a theoretical framework, although public institutions are keen to utilise it.
Voice, Noise and Exit: Whose Cost?
Since the aim of the paper is to explore social media use by government in terms of Hirschman's theory, this section analyses the theories and the case with that framework, slightly revised (Witt, 2011) from the original.
Social media is based on co-production with civil society and among institutions, as many authors have explored. In this regard, social media enables public institutions to capture voices, even those which traditional and conventional channels could not have captured, with a rather small investment, as many authors have shown. Contrary to the initial hypothesis, indeed, social media is not only a "cheap" tool for the citizen, but it is so for the government as well. However, one major issue is the representativeness of the voices (Nseke, 2018; Sloan et al., 2013) and the co-existence of noises, which sometimes resemble voices, even according to sophisticated analytical tools.
The often-cited example of healthcare services is definitely a good practice: Social Prescribing, for example, attracts rather active and/or proactive citizens, who know the significance of preventive healthcare and are rather responsible for their own health condition. It is a good example of co-production of a service with civil society and citizens, also heavily using social media. Thanks to these new tools, it provides citizens with opportunities in which they can learn about the possibilities and design their own personalised solutions, i.e. "co-produce" their "social prescription", so that people with social, emotional or practical needs are empowered to find solutions which will improve their health and wellbeing, often using services provided by the voluntary and community sector. It is considered to be an innovative solution, with the potential to reduce the financial burden on government. This has so far been possible because the voices are from engaged citizens and public institutions do not have to deal with many noises and/or exits, which are unfortunately common in other fields. When we consider, however, the really vulnerable individuals in the healthcare sector, i.e. the aged, immigrants, and those with socio-economic difficulties, with poor health conditions and/or poor digital literacy, their voices would never become voices if public institutions rely heavily on social media communication.
These are the shortfalls of social media dependence: institutions cannot capture the real social needs, but will perceive only the virtual ones. Thus, the "cost" to those who fall out of the system is not considered, while the general cost-cutting benefit has been stressed. This practice actually creates a "forced" exit of citizens, in the name of popular voices and some noises as well. Can we discard the voices of those who do not express them, or who remain silent because they do not have the means? This is a further issue to be investigated, both theoretically and in practice.
Social policies were thought to benefit more from social media; however, besides healthcare, there are not so many good practices in other fields. Education policy was thought to have improved using social media, as the target demographic group is the most active user of social media; however, it seems that it is difficult to use the input from social media in a constructive way in this field.
Conclusion: Findings and limitations
This paper aims to explore the theories and current situation of social media use in the public sector and for public policy through literature reviews and a case study within a particular framework, that of Hirschman's theory on voice and exit. Literature reviews show the conceptual objectives and benefits of social media in government; however, the real outcomes are, so far, mixed. Interviewees also pointed out the operational issues in using social media in government, which were easy to guess from the literature but are not easy to resolve.
Social media is becoming an integrated part of many citizens' social lives; however, it is not yet universal and has several characteristics of its own that have to be considered before being used in the public sphere. Interviews revealed that there are issues such as the capacity development of public employees for incorporating these inputs from social media into public policies.
The results from the case study contribute to theoretical discussions, as they show empirical issues, many of which are not explored in the existing literature. The case also contributes to the discussion of the theory of co-production of public service delivery, since it is an example of it.
Given the limitation of a single case study, the further research which will follow would examine several other governments and compare those cases. Besides, some empirical examples of public policies using social media inputs could enrich future research. | 2020-08-06T09:07:35.738Z | 2020-07-16T00:00:00.000 | {
"year": 2020,
"sha1": "610f4ffd825f010cd7780779d92b34fab91e8b27",
"oa_license": null,
"oa_url": "https://ejournals.facultas.at/index.php/ocgcp/article/download/1907/1621",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4a1929d948e1d27a506b017b16aa1ff8a35dde67",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Business"
]
} |
9629250 | pes2o/s2orc | v3-fos-license | VennDiagram: a package for the generation of highly-customizable Venn and Euler diagrams in R
Background Visualization of orthogonal (disjoint) or overlapping datasets is a common task in bioinformatics. Few tools exist to automate the generation of extensively-customizable, high-resolution Venn and Euler diagrams in the R statistical environment. To fill this gap we introduce VennDiagram, an R package that enables the automated generation of highly-customizable, high-resolution Venn diagrams with up to four sets and Euler diagrams with up to three sets. Results The VennDiagram package offers the user the ability to customize essentially all aspects of the generated diagrams, including font sizes, label styles and locations, and the overall rotation of the diagram. We have implemented scaled Venn and Euler diagrams, which increase graphical accuracy and visual appeal. Diagrams are generated as high-definition TIFF files, simplifying the process of creating publication-quality figures and easing integration with established analysis pipelines. Conclusions The VennDiagram package allows the creation of high quality Venn and Euler diagrams in the R statistical environment.
Background
The visualization of complex datasets is an increasingly important part of biology. Many experiments involve the integration of multiple datasets to understand complementary aspects of biology. These overlapping results can be visualized in a number of ways, including textual tables (e.g. two-way tables), network diagrams [1,2] and in some cases heatmaps [3,4]. Venn diagrams have seen increasing use due to their familiarity, ease-of-interpretation, and graphical simplicity. For the purpose of this publication, Venn diagrams can be defined as diagrams that use simple geometrical shapes such as circles and ellipses to display all 2^n - 1 possible areas created by the interaction of n sets. The use of simple geometrical shapes reduces figure complexity and size relative to space-consuming tables or network layouts.
However, despite this popularity, there are currently few packages for generating Venn diagrams in the widely-used R statistical environment. These packages are limited in their ability to generate high-resolution, publication-quality Venn diagrams in that they allow little customization of colours, line-types, label-placement, and label font. Numerous special-cases are handled inappropriately, and the output is not usually in the format of high-resolution, publication-quality TIFF files. Other, non-R-based local or web-based software capable of generating Venn diagrams exists, such as Venny [5], BioVenn [6], ConSet [7], and VennMaster [8]. All of these suffer from some of the weaknesses listed above. Further, integration into standard R-based statistical/computational pipelines such as the widely used BioConductor libraries of the R statistical environment [9] is viable, but not technically trivial.
Additionally, if some intersecting or non-intersecting areas in a Venn diagram do not exist, another class of diagrams called Euler diagrams may be more desirable. Euler diagrams are equivalent to Venn diagrams when all intersecting and non-intersecting areas exist. However, areas containing zero elements are shown on Venn diagrams (by definition), whereas Euler diagrams show only non-zero areas. In many cases, Euler diagrams further reduce figure complexity, increase graphical accuracy and improve overall readability relative to Venn diagrams. Unfortunately, almost all existing packages cannot generate publication-quality Euler diagrams in R, although VennEuler does generate Euler diagrams.
To address these issues we introduce VennDiagram, an R package for generating highly customizable, high-resolution Venn diagrams with up to four sets and Euler diagrams of two or three sets in the R statistical environment.
Implementation
The VennDiagram package has been developed in and designed for the R statistical environment. The R environment is open-source and available online under the GNU General Public License (GPLv2). R was chosen because of its open-source nature, versatile functions, and general preference within the bioinformatics community. The use of R should facilitate integration with existing data-analysis pipelines. All code was designed and tested using version 2.12.1 (32-bit and 64-bit versions) of R. The VennDiagram package is available as Additional Files 1 (linux .tar.gz file) and 2 (windows .zip file).
VennDiagram uses the grid package for graphics. The grid package is a base (standard) package available in all installations, and offers more manoeuvrability than default R graphics in terms of graphical options and the existence of modifiable grid objects. VennDiagram uses these features to dynamically stretch/compress diagrams to fit the dimensions of the output file and to offer a vast number of graphical options.
Results
Almost all graphical options in the VennDiagram package have been parameterized and made customizable. Default values were selected to generate sensible diagrams, so in simple cases a high level of customization is not required. Figure 1 highlights the diversity of parameterizations available. Four major graphical parameter groups exist: shape-fill, shape-line, labels and titles. Shape-fill refers to the colours within each circle or ellipse. All colours available in the R environment can be used, and alpha-blending can be adjusted on a per-shape basis. Shape-lines are the lines that surround each circle or ellipse. They can be entirely absent (figure 1C), solid (figures 1A and 1B), or any other R line-type available (figure 1D). Their colour can be changed, and each shape can have a separate set of parameters. Labels refer both to the captions describing each circle or ellipse and to the numbers within them. Again, these can be customized in terms of colour, font-type, and font-size with any available R parameter. The positions of caption labels can also be customized. Titles, which include the main title and the subtitle as demonstrated in figure 1D, can also be customized in the same way as labels.
Beyond these specific graphic elements, VennDiagram also offers many general options, such as the scaling, rotation, or inversion of diagrams. Diagram scaling was implemented with the goal of displaying Venn diagrams where the graphical sizes of the partial areas (areas bound on all sides by curves and that cannot be further subdivided) actually correspond to the numerical values of the number of elements within each region. Scaling of two-set Venn diagrams and a select number of three-set Venn diagrams is possible; we discuss below the challenges of making this possible for all three-set cases. The use of scaling can sometimes lead to overlapping areas being too small for numeric labels. Figure 2, row 1, column 1 shows a programmatically generated connecting line that allows special handling of this case. Automatic recognition of a large number of Euler diagrams is implemented, but this mode can be deactivated with a function-call parameter to plot standard Venn diagrams instead. Figure 2, row 1, columns 2 and 3 show two examples of two-set Euler diagrams. Rows 2 and 3 show a subset of implemented three-set Euler diagrams, with row 3 showing those of the scalable variety. We note that figure 2 is presented in black and white to highlight compositional differences across diagrams, but the graphical parameters and customizations used in figure 1 are fully available. While VennDiagram defaults to writing graphics to high-resolution TIFF files, if the filename parameter is set to NULL the raw grid object can be returned and used in any graphics mode available in R. The code to generate all figures is given in Additional File 3 and an example of all available parameterizations is shown in Additional File 4.
Discussion
During development of the VennDiagram package, it was discovered that it was impossible to draw accurate, scaled Venn diagrams with three sets using circles. This conundrum is illustrated in the following scenario. In a system of two circles A and B, the distance between the centres of the circles, d_AB, can be determined as long as the areas (A_A and A_B respectively) and the intersection area (A_A ∩ A_B) are both known. This is possible because in a two-circle system a single A_A ∩ A_B corresponds to a unique value for d_AB. Therefore, in a system of three circles A, B, and C, the distances d_AB, d_BC, and d_AC can be calculated as long as A_A, A_B, A_C, A_A ∩ A_B, A_A ∩ A_C, and A_B ∩ A_C are all known. However, d_AB, d_BC, and d_AC make a unique triangle, implying that a Venn diagram can be drawn without ever knowing the overall intersection A_A ∩ A_B ∩ A_C. In other words, the size of the overlap between all three circles does not alter the presentation of scaled Venn diagrams: the area is unchanged even if one system has zero overall intersection (i.e. A_A ∩ A_B ∩ A_C = 0)! This conundrum results from the (arbitrary) choice of circles to represent set size, which reduces the degrees of freedom by one. Unique solutions can be identified by using ellipses or polygons to draw Venn diagrams, but the resulting diagrams would lose the instant recognisability and familiarity associated with circular Venn diagrams, defeating the point of a convenient display of information. Non-circular diagrams would also require iterative algorithms to compute the positions and sizes of the shapes, greatly increasing computational burdens, as has been discussed by others [10]. Consequently, scaling of three-set Venn diagrams is disabled in the VennDiagram package. Similarly, Venn diagrams containing more than four sets [11,12] were not implemented in the VennDiagram package because they become too complex for intuitive visualization.
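The two-circle half of this argument can be made concrete numerically. The sketch below (a Python illustration, not part of the R package) recovers the centre distance as the unique root of the standard circle-circle intersection ("lens") area formula, given only the two set sizes and their pairwise overlap; the triple overlap never enters the calculation. The example values are arbitrary.

```python
import math
from scipy.optimize import brentq  # SciPy is assumed to be available

def lens_area(d, r1, r2):
    """Intersection area of two circles with radii r1, r2 whose centres are d apart."""
    if d >= r1 + r2:            # circles are disjoint
        return 0.0
    if d <= abs(r1 - r2):       # smaller circle entirely inside the larger one
        return math.pi * min(r1, r2) ** 2
    clamp = lambda x: max(-1.0, min(1.0, x))  # guard acos against rounding error
    a1 = r1 ** 2 * math.acos(clamp((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1)))
    a2 = r2 ** 2 * math.acos(clamp((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2)))
    a3 = 0.5 * math.sqrt(
        (-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2)
    )
    return a1 + a2 - a3

def centre_distance(area_a, area_b, area_ab):
    """Unique centre distance giving two circles the requested areas and overlap."""
    r1 = math.sqrt(area_a / math.pi)
    r2 = math.sqrt(area_b / math.pi)
    # lens_area is strictly decreasing in d on (|r1 - r2|, r1 + r2), so the root
    # is unique: the pairwise overlap alone fixes the distance between centres.
    return brentq(
        lambda d: lens_area(d, r1, r2) - area_ab,
        abs(r1 - r2) + 1e-9,
        r1 + r2 - 1e-9,
    )

# Pairwise information fixes all three distances of a three-circle layout,
# no matter what the triple overlap is.
print(centre_distance(100.0, 80.0, 30.0))
```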
A general caveat when using Euler diagrams is that although they reduce the graphical complexity of some Venn diagrams, their non-traditional shapes may also be less recognizable in some cases. When empty areas are present, the user needs to choose between the familiarity of Venn diagrams and the increased accuracy of Euler diagrams. Figure 3 illustrates a situation where either a Venn or an Euler diagram may be appropriate depending on user preferences.
The VennDiagram package handles all two-set Euler diagrams and the majority of all conceivable three-set Euler diagrams. Three-set Euler diagrams that could not be drawn using circles or ellipses are not supported. For example, an Euler diagram for the case where two non-intersecting sets comprise the third set cannot be drawn using circles and ellipses, though it may be drawn using polygons. This type of figure lacks a ready analytical layout and would require iterative fitting; no polygon-requiring Euler diagrams are available, but standard Venn diagrams are available for these few unsupported cases.
After comparing with other programs capable of generating Venn diagrams (Table 1), advantages of the VennDiagram package include:
• Drawing Euler diagrams using circles and/or ellipses with two or three sets
Figure 2. Selected Venn diagram special cases and Euler diagrams drawn by the VennDiagram package. Row 1, column 1: automatically drawn, customizable lines that optimize display of partial areas when individual partial areas become too small in two-set Venn diagrams. Row 1, column 2: a two-set Euler diagram showing total inclusion of one of the sets. Row 1, column 3: a two-set Euler diagram showing two distinct sets. Row 2, column 1: a three-set Euler diagram where one set has no discrete elements. Row 2, column 2: a three-set Euler diagram where one set that has no discrete elements is totally included in one of the other two sets. Row 2, column 3: a three-set Euler diagram where two sets have no discrete elements and are included in a larger third set. Row 3, column 1: a three-set Euler diagram showing total inclusion of two sets that are distinct from the third set. Row 3, column 2: a three-set Euler diagram where one set is totally included in another set, which is itself totally included in the third set. Row 3, column 3: a three-set Euler diagram showing three distinct sets. The code to generate all diagrams shown here is included in Additional File 3.
Figure 3. A side-by-side comparison of an Euler diagram and a Venn diagram for the same hypothetical sets. A) The Euler diagram shows only non-zero areas and can therefore be more graphically accurate. B) The Venn diagram shows the non-existent area as an area with zero content. Though this is not graphically accurate, it preserves the recognisability of a Venn diagram.
Conclusions
The VennDiagram package advances both the ease-ofuse and the degree of customizability in the generation of Venn diagrams in a bioinformatics context. While other tools offer much of the functionality presented here, the implementation of all features together in the widely-used R statistical environment will promote the usage of automatically generated Venn diagrams within computational pipelines. | 2014-10-01T00:00:00.000Z | 2011-01-26T00:00:00.000 | {
"year": 2011,
"sha1": "780725e727d6cc7a8c32dd613540960457bf188a",
"oa_license": "CCBY",
"oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-12-35",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c5ba4e322b044dea95df4755211415e95636c22",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
2102017 | pes2o/s2orc | v3-fos-license | Priming of reach trajectory when observing actions: Hand-centred effects
When another person's actions are observed it appears that these actions are simulated, such that similar motor processes are triggered in the observer. Much evidence suggests that such simulation concerns the achievement of behavioural goals, such as grasping a particular object, and is less concerned with the specific nature of the action, such as the path the hand takes to reach the goal object. We demonstrate that when observing another person reach around an obstacle, an observer's subsequent reach has an increased curved trajectory, reflecting motor priming of reach path. This priming of reach trajectory via action observation can take place under a variety of circumstances: with or without a shared goal, and when the action is seen from a variety of perspectives. However, of most importance, the reach path priming effect is only evoked if the obstacle avoided by another person is within the action (peripersonal) space of the observer.
For more than a decade there has been a substantial amount of research investigating how the actions of other individuals are encoded. A key idea is that one means of understanding the behaviour of others is by simulating their actions. Thus, the motor representations that would be activated when undertaking a task are active when one is merely observing another person undertake the same action. Perhaps the key to motivating this research approach was the discovery of mirror cells. These were neurons in ventral premotor region F5 in the macaque that responded when the monkey grasped an object, but also responded in a similar way when the same action was observed (e.g., di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992). Later research has also found neurons with mirror properties in the inferior parietal lobe (IPL) of monkeys (Fogassi, Gallese, Fadiga, & Rizzolatti, 1998;Fogassi et al., 2005).
After the initial discovery of mirror systems via single unit recording in the monkey, subsequent work has revealed similar systems in humans. Via functional magnetic resonance imaging (fMRI), regions around the dorsal and ventral premotor cortices and in the inferior parietal lobe have been identified that become active when observing another person's behaviour (e.g., Binkofski et al., 1999;Gazzola, Rizzolatti, Wicker, & Keysers, 2007;Iacoboni et al., 1999). Furthermore, activation of visual-motor systems via observation predicts that one's own actions are primed by this activation. Indeed, observing an action such as grasping an object facilitates the production of the same action shortly afterwards (e.g., Castiello, Lusher, Mari, Edwards, & Humphreys, 2002;Craighero, Fadiga, Rizzolatti, & Umiltà, 1998;Edwards, Humphreys, & Castiello, 2003). Similarly, Bach and Tipper (2007) showed that when actions such as kicking a soccer ball or typing are observed, subsequent foot and hand responses, respectively, were facilitated.
A key issue is what form of information is encoded when another individual's actions are observed. It is clear that there are two components to prehensile actions: reach and grasp. These appear to be encoded by different neural systems (e.g., Jeannerod, 1988). It might be the case that both of these motor processes are simulated, such that the path the hand takes to reach an object and the highly specific patterns of the finger movements as they shape to grasp the target are reflected in neural responses. However, in single-cell studies of mirror systems sensitive to the form of grasp, the specific reach path as the hand approaches an object has not been reported as critical to neural activity (e.g., Fogassi et al., 2001; Murata, Gallese, Luppino, Kaseda, & Sakata, 2000). It is therefore possible that the mirror-like system's primary role is to understand the goal of the action (pick up a glass), with little concern for the specific way the goal was achieved (how the hand reached the glass; van Elk, van Schie, & Bekkering, 2008).
There is clear evidence to support the idea that the action simulation processes are concerned with encoding the behavioural goal, rather than the low-level specific reach properties of the action: When observing a reach, some parietal cells respond selectively when the final grasp is for eating but not when it is for placing (Fogassi et al., 2005; and in humans, Iacoboni et al., 2005). In fact the action goal, such as pressing down on an object, can produce facilitation priming effects when repeated, even if the observed and subsequently produced action is undertaken by a different effector (e.g., hand then foot; Costantini, Committeri & Galati, 2008). Similarly, the discovery of acoustic mirror neurons that respond to the sound of an action that cannot be seen (Kohler et al., 2002) and cells that respond when action cannot be directly observed when grasping an object behind an occluder (Umiltà et al., 2001), also suggest that the mirror system may code actions at an abstract level in terms of the goal to be achieved rather than the specific form of the action (e.g., Gallese & Lakoff, 2005). Similarly, in human participants, Castiello (2003) showed that the motor system of a participant can be primed when no action is directly viewed. That is, observing another person looking towards target and distractor objects activates the intention to act, which in turn stimulates similar motor representations in the observer. That actions can be primed when no action is observed is a strong demonstration that motor simulation is driven by abstract goals and that the specific properties of an action, such as reach trajectory, are not necessary for action priming (see also Bach & Tipper, 2006).
In the current study we attempt to find evidence that not only is the goal of an action encoded via grasp information, but in fact in some circumstances more specific properties of action, such as reach trajectory, are also encoded (see also Hamilton & Grafton, 2006). Clearly prehensile actions directed towards objects require these two reach and grasp processes, and we believe that in some situations the reach, as well as the goal-directed grasp, is simulated. For example, consider the task of reaching around an obstacle to grasp a target object. In this situation we hypothesize that the reach component is now relevant to understanding the other person's behaviour and hence will be simulated. Therefore, although grasp- and goal-related simulation has been detected thus far, the key component of how the hand reaches to a target should also be simulated in some situations for a full understanding of another person's behaviour.
To investigate whether the reach of another person can be implicitly simulated we employed priming methods. As noted above, a large number of studies have investigated how the implicit encoding of observed, goal-orientated, grasping actions affects subsequent action by the observer (e.g., Brass, Bekkering, & Prinz, 2001;Craighero et al., 1998;Stürmer, Aschersleben, & Prinz, 2000). Other studies have focused on related issues-that is, situations where actions have to be directed to target objects in the presence of distractors. For example, Tipper, Lortie, and Baylis (1992) and Tipper, Howard, and Jackson (1997) demonstrated action-centred selection processes in a selective reaching task. That is, when a person was reaching for a target, distractors closer to the hand produced significant interference and were associated with inhibition, as revealed by curved trajectories and subsequently slowed responses (i.e., negative priming effects). Thus, selective action directed towards a target was achieved in part by inhibition of the competing distractor, and in other studies (e.g., Schuch & Tipper, 2007) we have provided some evidence that another person's inhibition of an action can be simulated by an observer (see also Frischen, Loach, & Tipper, 2009).
In the current work, to investigate simulation of selective reaching, we utilized the methods developed by Jax and Rosenbaum (2007). They reported that, after avoidance of an obstacle, subsequent reaches to targets when there was no obstacle present showed curved trajectories. That is, prior obstacle avoidance primed the motor system such that those representations were accessed for subsequent reaches, resulting in curved trajectories when a straight reach was more appropriate. Therefore the obstacle avoidance priming effect would seem to be an ideal technique to examine whether the processes involved-that is, the avoidance reach path-are simulated. We adopted the methods of Jax and Rosenbaum because they produce robust effects and because the avoidance of an obstacle is a salient property of another person's reach trajectory, and hence it is a suitable approach to examine whether the obstacle avoidance priming effects generalize between people.
A series of experiments are reported. As a preview, in a new procedure we replicate the basic effects reported by Jax and Rosenbaum (2007) where a reach is more curved when a previous reach by the same person avoided an obstacle. In a further two experiments, when two people undertook alternate trials we did not find any evidence for the obstacle avoidance of one individual affecting the reach trajectory of a second person on the next trial. Although this suggested that obstacle avoidance is not simulated, we noted that in these experiments the obstacle avoided was outside the peripersonal space of the observer. In previous research investigating response to targets in the presence of distractors, effects appeared to be influenced by the distance between the distractor object and responding hand (e.g., Meegan & Tipper, 1999;Tipper et al., 1992). Indeed, a subsequent study showed that when the observed obstacle was within a viewer's peripersonal action space, another person's reach path was simulated and influenced the observer's subsequent reach trajectory. In final studies we show that these peripersonal reach simulation/priming effects generalize to situations where two people are in a variety of different spatial positions relative to one another, whether they respond to the same target object, avoid the same distractor obstacle, or even when they respond to completely separate targets and obstacles in different tasks. A final study employing a transparent barrier between an observer and another person's reach confirmed the effect when the obstacle was within the peripersonal space of the viewer.
EXPERIMENT 1: SINGLE PERSON REACHING
In this initial experiment we sought to replicate the carry-over effect found by Jax and Rosenbaum (2007) and to establish whether this approach, placed into a real-world setting, would provide a methodology that could be used to investigate between-person effects. Note that in Jax and Rosenbaum's work, targets and distractors were presented on a computer screen, and reaches were made by moving a pen across a graphics tablet oriented 90° to the display and in a different spatial location. In sharp contrast, our current experiment required participants to reach for and lift up target blocks directly in front of them. When an obstacle was present they were asked to avoid that obstacle by reaching vertically over it.
Method
Participants A total of 24 right-handed students (20 female), with a mean age of 20.3 years, participated in this study in return for course credits. All participants had normal or corrected-to-normal vision. Participants gave informed consent, and the study received the approval of the School of Psychology's ethics committee.
Materials and apparatus
Each participant had a retro-reflective marker placed on their right wrist. Participants' movements were tracked using a Qualisys ProReflex motion capture system (Qualisys AB, Gothenburg, Sweden), and the data were recorded using Qualisys Track Manager (QTM) software (Qualisys AB). The target object to be reached for was 3 × 2 cm, and it was 9 cm high. The obstacle to be avoided was 4.5 × 4.5 cm, and it was 18 cm high. These materials and apparatus were constant throughout the series of experiments.
Procedure and design
Participants sat at a desk with the chair adjusted so that the arm to be used to reach to the target rested comfortably on the desk with the forearm at right angles to the upper arm and the hand approximately 20 cm from the trunk. The far edge of the target block (which participants have to reach to in order to grasp) was 40 cm from the starting position of the participant's reaching hand. This distance was selected as a comfortable reach distance, which was nevertheless close to the full extent of reach. The near edge of the obstacle, when present, was 20 cm directly ahead of the hand. The distances of the participant's target and obstacle blocks remained constant throughout all the experiments reported here. The experimenter sat directly in front of the participant. The obstacle, when not in place, was kept out of view. See Figure 1 for details.
The experiment sought to investigate the effect of n-1 and n-2 trials on the current trial n. The order of obstacle presentation was therefore counterbalanced for n-1 and n-2 trials. Trials could be with or without an obstacle; counterbalancing for the current and previous two trials gave eight possible trial orders. Each of these occurred 10 times for the participant, in random order; two additional trials that were not further analysed were added to the start of the experiment to provide an n-2 condition history for the first trials of relevance.
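To make the trial-history bookkeeping concrete, the following sketch (not part of the original study; a minimal Python illustration that assumes the obstacle status of each trial is available as a simple list of flags) labels every trial from the third onward with its (n-2, n-1, n) obstacle history and tallies how often each of the eight possible histories occurs.

```python
from collections import Counter

def label_trial_histories(obstacle_flags):
    """Label each trial n (from the third trial onward) with its
    (n-2, n-1, n) obstacle history, e.g. 'O-No-No'."""
    labels = []
    for n in range(2, len(obstacle_flags)):
        triple = (obstacle_flags[n - 2], obstacle_flags[n - 1], obstacle_flags[n])
        labels.append("-".join("O" if has_obstacle else "No" for has_obstacle in triple))
    return labels

# Hypothetical example sequence: True = obstacle present, False = no obstacle.
sequence = [True, False, False, True, True, False, False, False]
histories = label_trial_histories(sequence)
print(histories)           # -> ['O-No-No', 'No-No-O', 'No-O-O', 'O-O-No', 'O-No-No', 'No-No-No']
print(Counter(histories))  # a fully counterbalanced session would contain
                           # 10 occurrences of each of the eight history types
```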
Participants carried out 10 practice reaches. They were instructed to close their eyes at the end of a reach, opening them again on the instruction to initiate movement. This ensured that participants did not observe the experimenter's arm movements. They were instructed to reach out, lift up, and place the target block back down again in the same spot. If the obstacle was present they were instructed to reach over the obstacle in the vertical plane.
Results
For each trial we measured the maximum height reached on the outward movement. Unlike Jax and Rosenbaum's (2007) work we considered and compared only the n trials without an obstacle. With the use of real-world stimuli we found that participants were careful to avoid knocking over the obstacle and thus cleared the obstacle block with great care. This meant that during reaches over an obstacle there was very little variation in height between the trials; the obstacle trials had significantly smaller standard deviations than the non-obstacle trials [F(1, 13) = 41.05, p < .001]. Due to this lack of variability we observed no effects of the presence or absence of an obstacle in previous trials (n-2 and n-1) when analysing reach path on the critical trial (n). Therefore we do not discuss data from obstacle avoidance trials in the rest of the paper (though they are shown in the Appendix); rather we only discuss effects of prior trials on reaches where no obstacle was presented on trial n.
Figure 1. Experiment 1: Single person reaching. This diagram demonstrates an example series of reaches, where the same person performs on every trial. n-1 is the trial that preceded the current trial. n-2 is the last but one trial. Black square is the target; white square is the obstacle. This example shows an O-No-No trial sequence.
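The maximum-height measure can be illustrated with a short sketch. This is not the authors' analysis code; it is a minimal Python example that assumes the wrist-marker trajectory for a single trial has been exported as arrays of forward (y) and vertical (z) coordinates in centimetres, and that the outward movement ends once the marker reaches the 40-cm target distance.

```python
import numpy as np

def max_outward_height(y_cm, z_cm, target_distance_cm=40.0):
    """Return the maximum vertical marker height on the outward reach.

    y_cm: forward displacement of the wrist marker from the start position.
    z_cm: vertical position of the wrist marker.
    The outward phase is assumed to end at the first frame where the marker
    reaches the target distance; if it never does, the whole trial is used.
    """
    y = np.asarray(y_cm, dtype=float)
    z = np.asarray(z_cm, dtype=float)
    beyond = np.nonzero(y >= target_distance_cm)[0]
    end = beyond[0] + 1 if beyond.size else len(y)
    return float(z[:end].max())

# Hypothetical single trial: the hand rises, clears a point mid-reach, then lowers onto the target.
y = np.linspace(0, 42, 50)
z = 9 + 6 * np.sin(np.linspace(0, np.pi, 50))   # peak height of ~15 cm mid-reach
print(round(max_outward_height(y, z), 1))        # -> 15.0
```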
For the analysed trials where no obstacle was present in trial n there were four conditions, defined by the n-2 and n-1 trial types (No = no obstacle, O = obstacle): No-No-No, No-O-No, O-No-No, and O-O-No. We excluded a number of trials where the participant knocked over one of the blocks, and the two trials that followed such errors were also removed (2.08% of trials). We also excluded a number of trials where part of the trajectory was not properly tracked by the equipment. Figure 2 represents the results. The heights were analysed using a two-way within-subjects analysis of variance (ANOVA), with two factors: n-1 trial type (with or without obstacle) and n-2 trial type. This analysis revealed a main effect of both the n-1 trial [F(1, 23) = 14.57, p < .001] and the n-2 trial [F(1, 23) = 10.36, p = .004]. That is, when participants had just avoided an obstacle, either on the immediately preceding reach (n-1 effect) or on the one before that (n-2 effect), their current reach was higher than if the previous trials had not contained an obstacle. There was no significant interaction between n-1 and n-2 [F(1, 23) = 2.39, ns].
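For readers wishing to run this style of analysis on their own data, the following hedged sketch shows one way to perform the two-way within-subjects ANOVA in Python. It is not the original analysis script; it assumes a hypothetical long-format table with columns participant, n1 ('O'/'No'), n2 ('O'/'No') and max_height, and it aggregates to one mean per participant per cell before calling AnovaRM from statsmodels.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rm_anova(trials: pd.DataFrame) -> pd.DataFrame:
    """Two-way repeated-measures ANOVA on maximum reach height.

    trials: long-format data with columns
        participant, n1 ('O' or 'No'), n2 ('O' or 'No'), max_height.
    """
    # One mean per participant per (n-2, n-1) cell, as required by AnovaRM.
    cell_means = (trials
                  .groupby(["participant", "n1", "n2"], as_index=False)["max_height"]
                  .mean())
    result = AnovaRM(cell_means, depvar="max_height",
                     subject="participant", within=["n1", "n2"]).fit()
    return result.anova_table

# Hypothetical usage:
# df = pd.read_csv("experiment1_trials.csv")
# print(rm_anova(df))
```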
Discussion
The results confirm the previous findings of Jax and Rosenbaum (2007) and van der Wel, Fleckenstein, Jax, and Rosenbaum (2007). The trial after obstacle avoidance is more curved, as measured by the highest point of the reach trajectory. Although for our aims we did not undertake the more thorough analysis of Jax and Rosenbaum, we have shown that prior obstacle avoidance influences actions two reaches in the future (n-2 effects), as reflected by the additive effects in the O-O-No condition. Therefore this procedure enables us to examine the effects when two people undertake alternate trials. We are able to examine the effects of one person on another (n-1) as well as examine the effects of an individual's own behaviour on their subsequent reach trajectories (n-2; see Schuch & Tipper, 2007, for similar approaches).
Figure 2. Means of the maximum height reached for non-obstacle reach trials (trial n), with their standard errors. The bars marked * show a significant t test result. These are one-tailed t tests, as are those in the following experiments, as they examine a priori predictions. The bars are labelled with the trial type (n-2, n-1, n), where "No" represents non-obstacle trials and "O" where an obstacle was present.
EXPERIMENT 2: SHARED TARGET, SEATED OPPOSITE, SAME HANDS
This experiment employed a very similar procedure to that of Experiment 1, except that two people, seated opposite each other, participated in the reaching-to-grasp task. The participants alternated their reaching actions, such that the possible priming effects produced by observing another person's reach could be observed on the next trial.
Method
Participants A total of 24 right-handed students (17 female), with a mean age of 18.9 years, participated in this study in return for course credits. All participants had normal or corrected-to-normal vision.
Procedure and design
Having established that the Jax and Rosenbaum (2007) method elicited within-person obstacle priming effects, we modified the procedure for use with two persons. Our participants alternated reaching and observing while seated opposite each other. The shared target block was the same distance away from each person's reaching hand, as in Experiment 1 (40 cm). The experimenter sat at right angles to the participants. The obstacle block was placed at the same distance from each participant's reaching hand, as in Experiment 1. That is, the obstacle was 20 cm from each person's hand, and each participant had their own obstacle and never reached over the other person's obstacle. See Figure 3, Panel A for details.
In order for each person to carry out each type of trial 10 times there were 164 trials per experiment, with a 5-minute break half way through. Participants alternated reaching and were called by name at the beginning of each trial. They were instructed that between trials their eyes would be closed. On hearing the name of the person who would act they were told to both open their eyes. The participant whose turn it was executed the reach and lift, returned her hand to the starting position, and then closed her eyes. The other participant was instructed to observe the scene, passively watching the other person's reach. Each participant carried out 10 practice reaches at the start of the experiment.
Results
As previously described, error trials (e.g., obstacle collision) were removed from analysis. Furthermore, where participants failed to open their eyes on an observation trial their following action trial was discounted. We removed 0.99% of trials. The heights were again analysed using a two-way within-subjects ANOVA for each person's results, with two factors: n-1 trial, the effect of the other person's reach; and n-2 trial, the effect of her own movement (each factor being with and without obstacle). This analysis revealed no main effect of the within-subject n-2 factor [F(1, 23) = 2.33, ns], nor of the between-subject n-1 factor.
Discussion
Somewhat surprisingly, we did not detect any evidence for the idea that the reach trajectory of an observed person is simulated. That is, there was no evidence for a participant's reach to be higher just after they had observed another person reach over an obstacle. This lack of obstacle priming effect between people may be taken as evidence that indeed action simulation processes are somewhat abstract. Motor systems represent the goal of the action, in this case grasping the wooden block, but the specific manner of the action is not encoded. Such a result would be in line with previous work where mirror systems can encode actions even when they are not directly observed (e.g., Umiltà et al., 2001).
A second finding in this study is that the person's own prior reach has no significant effect on their subsequent reach path. That is, n-2 effects are not observed, which suggests that observation of another person undertaking the reach-to-grasp task interferes with the representation of one's own previous action.
To confirm the lack of reach path priming effects it was necessary to replicate and extend these results. Therefore in a follow-on study the same between-person priming study was undertaken, except that one participant reached with the right hand, whilst the other reached with the left (see Figure 3, Panel B). Previous work by Koski, Iacoboni, Dubeau, Woods, and Mazziotta (2003) has shown that inferior frontal mirror cells respond more when participants imitate as in a mirror (right hand-left hand) than when there is anatomical matching (right hand-right hand), and developmental studies show that early in life children tend to imitate as in a mirror, copying another person's right-hand actions with their own left hand (e.g., Bekkering, Wohlschlager, & Gattis, 2000; Wapner & Cirillo, 1968). Therefore this "mirror" condition may result in greater compatibility between observed and produced actions in that they were both presented in the same visual field, as compared to the opposite visual fields in Experiment 2 (compare Panels A and B in Figure 3).
EXPERIMENT 3: SHARED TARGET, SEATED OPPOSITE, MIRRORED HANDS
Method
Participants
A total of 24 right-handed students (18 female), with a mean age of 20.2 years, participated in this study in return for course credits. All participants had normal or corrected-to-normal vision.
Procedure and design
This experiment is the same as Experiment 2 except that one of the participants in each pair used their left hand, while the other used their right hand, so that the participants mirrored each other (see Figure 3, Panel B).
Discussion
This experiment replicated the results of Experiment 2 and seems to provide evidence that, as might be the case with the monkey mirror neuron system, the reach trajectory by which a target is approached is not encoded. However, it is necessary to be cautious in generalizing the results from any given set of experiments, and concluding that observed reach trajectories are never encoded might be premature. Certainly under the conditions described in these two experiments trajectory is not encoded, and the fact that in both experiments the person's own carry-over effect (n-2) also disappeared gave concern. This lack of own reach priming is discussed further in the General Discussion.
We thus sought to design an experiment in which the participants might engage more in the experiment and feel that the actions of the other person are more relevant to them. Note that in Experiments 2 and 3 during action observation the other person's obstacle was 54 cm (near edge) from the reaching hand and approximately 74 cm from the trunk of the observer. This resulted in the observed obstacle being just outside comfortable action (peripersonal) space for our participants. That is, without further actions such as leaning forward and raising the upper body, a reach action could not be achieved. Certainly, a reach over the other person's 18-cm-tall obstacle was not possible.
Therefore we hypothesized that perhaps the observation of avoidance of an obstacle outside peripersonal space, on which the participant could not act, might have made the action less relevant. This lack of relevance could have prevented the activation of simulation processes. Previous research has indeed shown that during selective reaching tasks, the distance of an irrelevant to-be-ignored distractor object from the reaching hand was critical for the obtained interference and priming effects. For example, Tipper et al. (1992, 1997) revealed action-centred selection processes in a selective reaching task. That is, when reaching for a target, distractors closer to the hand produced significantly greater interference and negative priming.
Clear evidence for the existence of separate coding of peripersonal space comes from studies of neuronal activity in monkeys. Two areas involved in processing information in peripersonal space are the ventral intraparietal sulcus (VIP; Colby, Duhamel, & Goldberg, 1993; Duhamel, Colby, & Goldberg, 1998) and the ventral premotor cortex, F4 (Fogassi et al., 1996; Graziano, Yap, & Gross, 1994), with the VIP having projections to F4 (Luppino, Murata, Govoni, & Matelli, 1999). F4 itself has projections to the primary motor cortex (Barbas & Pandya, 1987; Graziano & Gross, 1998). Therefore there is clearly a network in place in monkeys that specifically processes peripersonal space and allows this information to influence action.
That the brain makes a distinction between peripersonal and extrapersonal space in humans has been part of a number of theories. Previc (1998), for example, has proposed distinct cortical networks dealing with near and far space. He proposes that the dorsal visual pathway is involved with peripersonal space and actions carried out within it, whereas the ventral processing stream is concerned with extraperipersonal or far space. Similarly, the results of imaging studies by Weiss and colleagues (Weiss et al., 2000;Weiss, Marshall, Zilles, & Fink, 2003) support the differential involvement of these two streams. That near and far space might be dissociable is further indicated by the patients of Vuilleumier, Valenza, Mayer, Reverdin, and Landis (1998) and Halligan and Marshall (1991), the former having a patient who suffered from lateral neglect in far but not near peripersonal space, and the latter showing neglect for peripersonal near but not far space.
The fundamental importance of object distance for action is clear. That specific neural systems are dedicated to encoding peripersonal space, which enable immediate reach-to-grasp actions, while other systems encode objects in far space that require other motor processes (e.g., walking) before action can be produced, makes sense in terms of computational efficiency. It therefore remains a reasonable hypothesis that simulation processes of observed action may also be influenced by the distinction between peripersonal and far space. Therefore in Experiment 4 we replicated Experiments 2 and 3, but crucially, when observing obstacle avoidance the obstacle was within the peripersonal space of the observer; that is, it was 20 cm from the observer's hand.
EXPERIMENT 4: SHARED OBSTACLE, SEATED OPPOSITE, SAME HANDS
Method
Participants
A total of 24 right-handed students (18 female), with a mean age of 23.2 years, participated in this study in return for course credits. All participants had normal or corrected-to-normal vision.
Procedure and design
Both participants used their right hands and sat opposite each other. Unlike the previous two experiments, participants shared the obstacle, which remained 20 cm from their reaching hand. Therefore the obstacle was now in the peripersonal space of both the participants. In this new task the participants now reached for separate target objects, which were 40 cm from their reaching hand (see Figure 3, Panel C), with the other's target now 5 cm from their hand.
Results
We removed 1.20% of trials due to collision with the obstacle or failed recording.
Unlike the previous two experiments, this experiment showed a significant main effect for the influence of the other participant's action on the current reach [n-1: F(1, 23) = 14.983, p < .01], showing that the priming effect, previously only observed within a participant's own previous actions, had transferred between people. Interestingly, also in contrast to Experiments 2 and 3, observing an action does not remove the effect of a person's own previous reach [n-2: F(1, 23) = 13.259, p < .01] (see Figure 4, Panel C). These two effects, within and between people, appear to be independent, and no interaction was found [F(1, 23) = 1.21, ns]. Further planned contrast t tests revealed that reaches in each of the obstacle conditions (n-2 and n-1) were significantly higher (p < .01) than the baseline condition (No-No-No). In addition to the analysis mentioned above we compared the results from Experiment 4 with those of the previous two experiments. Comparing Experiment 4 with Experiment 2 in a mixed two-way ANOVA revealed a significant interaction between experiment and n-1 obstacle priming [F(1, 46) = 7.193, p < .05], and between experiment and n-2 priming [F(1, 46) = 6.423, p < .05]. Similarly, contrasts between Experiment 4 and Experiment 3 revealed significant interactions between experiment and n-1 obstacle priming [F(1, 46) = 15.809, p < .01], and between experiment and n-2 obstacle priming [F(1, 46) = 4.147, p < .05]. These results further confirm the distinction between the results of Experiment 4, where we find the obstacle priming effect, and Experiments 2 and 3, where no priming effect occurs.
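The cross-experiment comparison above used a mixed two-way ANOVA. A simpler check on the key interaction, offered here only as a hedged Python sketch (it is not the authors' analysis and assumes the same hypothetical long-format tables as the earlier sketches), is to compute each participant's n-1 priming effect and compare the two experiments' difference scores with an independent-samples t test.

```python
import pandas as pd
from scipy import stats

def n1_priming_effect(trials: pd.DataFrame) -> pd.Series:
    """Per-participant n-1 priming effect: mean max height after an observed
    obstacle reach minus mean max height after a no-obstacle reach."""
    means = trials.groupby(["participant", "n1"])["max_height"].mean().unstack("n1")
    return means["O"] - means["No"]

def compare_experiments(exp_a: pd.DataFrame, exp_b: pd.DataFrame):
    """Independent-samples t test on the per-participant priming effects of
    two experiments (a proxy for the experiment x n-1 interaction)."""
    return stats.ttest_ind(n1_priming_effect(exp_a), n1_priming_effect(exp_b))

# Hypothetical usage:
# exp2 = pd.read_csv("experiment2_trials.csv")
# exp4 = pd.read_csv("experiment4_trials.csv")
# print(compare_experiments(exp4, exp2))
```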
Because Experiment 4 provides the first demonstration of reach path priming between people, and to further investigate the nature of the trajectory differences between Experiments 2 and 4, we carried out further analysis on several points along the trajectories, in addition to the comparison of maximum heights previously described. The panels in Figure 5 illustrate the n-2 effect, that is, the comparison between trials in which the participant's own n-2 reach did or did not involve an obstacle.
Discussion
For the first time we have shown that observing another person's actions appears to evoke simulations of the reach path of the hand. Thus, after observing another person reach over an obstacle, the observer's subsequent reach is higher. Such an effect implies that prior activation of a reach simulation remains active, or can be retrieved from memory, to affect a subsequent reach. Importantly, we have identified a critical boundary condition to such a reach trajectory simulation effect. That is, the contrasting data between Experiments 2 and 3 and Experiment 4 support our proposal that reach path simulation only takes place when objects are within peripersonal action space. We also ran a control experiment to confirm that the obstacle avoidance priming effects were indeed produced by observing another person's reach and not just due to the presence of the objects. In a new version of Experiment 4 a single participant was required to reach for targets only on alternate trials and to merely look at the display on the other trials. That is, participants were asked to merely look at the n-1 display and then undertake the next reach trial (n). In this situation the same objects were viewed on n-1 trials, but there was no other person making a reach. We found no hint of an obstacle avoidance priming effect, confirming that it is the observation of another person's action that mediates the effect.
However, note that there are other contrasts between Experiments 2 and 3 and Experiment 4 that could influence when reach simulation processes are activated. For example, in Experiments 2 and 3, where no simulation/priming effects were observed, participants reached towards and grasped the same target, while they avoided completely different obstacles. In contrast, in Experiment 4, where reach simulation effects were observed, participants shared an obstacle but reached to completely different targets. Although we have no a priori reasons for predicting that these circumstances could mediate our contrasting effects, we needed to investigate more formally the roles of sharing a target as compared to sharing an obstacle. Therefore in Experiments 5a and 5b we developed a new task to examine three issues: First, in these experiments the obstacle was always within the action/peripersonal space of both participants, and hence we hoped to replicate reach simulation/priming effects. Second, in Experiment 5a both participants reach over the same obstacle while responding to different targets. This is similar to the procedure of Experiment 4 and hence should replicate those reach simulation/priming effects. In contrast, Experiment 5b required participants to reach to the same target while avoiding different obstacles. This procedure is similar to that of Experiments 2 and 3. If we observe reach simulation/priming in this latter condition, the prior results cannot be explained by whether or not obstacles are shared. Third, we examined a new interpersonal spatial layout in this experiment. Rather than two people sitting opposite each other, they were oriented at 90° to one another (see Figure 6, Panels A and B). We predicted that reach simulation processes would not be constrained by specific viewpoints of other people, and that effects would be detected in this new design.
Figure 6. Seated at 90°. Panel A: Experiment 5a: shared obstacle, seated at 90°. Panel B: Experiment 5b: shared target, seated at 90°. In Experiment 5a participants avoided the same obstacle. In Experiment 5b participants grasped the same target. Black square is the target; white square is the obstacle.
EXPERIMENT 5A: SHARED OBSTACLE, SEATED AT 90°
Method
Participants
A total of 24 right-handed students (17 female), with a mean age of 21.63 years, participated in this study in return for course credits. All participants had normal or corrected-to-normal vision.
Procedure and design
Both participants used their right hands. They were seated at 90° to each other on two sides of a table. They both shared the obstacle, which was 20 cm from each of them. They responded to separate targets, which were 40 cm from the reaching hand (see Figure 6, Panel A).
Results
We removed 0.68% of obstacle collision and failed recording trials.
Results are shown in Figure 7, Panel A. As with the previous experiment, significant main effects were found for the influence of the other participant on the reach [n-1: F(1, 23) = 9.086, p < .01] and of an individual's previous reach [n-2: F(1, 23) = 7.402, p < .05], again with no significant interaction [F(1, 23) = 4.05, ns]. Further planned contrast t tests revealed that reaches in each of the obstacle conditions were significantly higher (p < .01) than those in the baseline condition (No-No-No).
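The planned contrasts against the No-No-No baseline can be sketched in the same spirit. Again, this is a hedged illustration rather than the original script; it assumes a hypothetical table of per-participant cell means with one column per condition label and uses a one-tailed paired t test, matching the a priori directional predictions described in the Figure 2 caption.

```python
import pandas as pd
from scipy import stats

def contrast_vs_baseline(cell_means: pd.DataFrame, condition: str,
                         baseline: str = "No-No-No"):
    """One-tailed paired t test of an obstacle condition against the baseline.

    cell_means: one row per participant, one column per condition label
    (e.g. 'No-No-No', 'O-No-No', 'No-O-No', 'O-O-No') holding mean max height.
    """
    t, p_two_sided = stats.ttest_rel(cell_means[condition], cell_means[baseline])
    # One-tailed p value for the a priori prediction that the condition is higher.
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p_one_sided

# Hypothetical usage:
# means = pd.read_csv("exp5a_cell_means.csv")   # columns: condition labels
# print(contrast_vs_baseline(means, "O-No-No"))
```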
Discussion
This experiment replicated Experiment 4, showing again that reaches within peripersonal space, with a shared obstacle, are encoded. It additionally shows that our results are not view dependent and generalize to other perspectives. It is an important companion experiment to the following Experiment 5b, which investigated the effect of shared targets.
EXPERIMENT 5B: SHARED TARGET, SEATED AT 90°
Method
Participants
A total of 24 right-handed students (16 female), with a mean age of 21.00 years, participated in this study in return for course credits. All participants had normal or corrected-to-normal vision.
Procedure and design
Both participants used their right hands. They were seated at 90° to each other on two sides of a table. They both shared the target object, which was 40 cm from each of their hands. Their separate obstacles were 20 cm from their reaching hands (see Figure 6, Panel B). The other person's obstacle was 42 cm from their hand and was within peripersonal reach space.
Figure 7. Means of the maximum height reached for non-obstacle reach trials (n), with their standard errors, for Experiments 5a and 5b. The bars indicated by * show a significant t test result at p < .01; + represents a p value < .05. The bars are labelled with the trial type (n-2, n-1, n), where "No" represents non-obstacle trials and "O" where an obstacle was present.
Results
We removed 0.57% of obstacle collision and failed recording trials.
This experiment showed significant main effects for the influence of the other participant's reach [n-1: F(1, 23) = 8.549, p < .01] and of the individual's previous reach [n-2: F(1, 23) = 46.026, p < .001] (see Figure 7, Panel B). There was no interaction between these two factors [F(1, 23) = 0.13, ns]. Further planned contrast t tests revealed that reaches in each of the key obstacle conditions were significantly higher than those in the baseline condition (No-No-No).
Discussion
This result further extends and clarifies the circumstances under which observation of action primes the trajectory of a person's future actions, showing that in addition to occurring when participants avoid the same shared obstacle it can also occur when the participants are sharing a target object, and an obstacle is not shared. This demonstrates that the lack of significant results from Experiments 2 and 3 did not occur because the participants shared the target rather than obstacle, but it lends weight to our argument that the factor influencing whether or not priming occurs is in fact whether or not observed action takes place in the peripersonal space of the viewer. In Experiment 5a the observed obstacle was 20 cm, and in Experiment 5b it was 42 cm, from the observer's reaching hand, both within action space.
EXPERIMENT 6: SEATED ADJACENT, NO OBJECTS SHARED
So far our experiments have all involved the sharing of either the target or obstacle blocks. Humans possess the capability of joint action to achieve goals; this has been demonstrated in various settings (see Sebanz, Bekkering, & Knoblich, 2006, for a review). It is a plausible assumption that simulation of another's action may be tempered by how involved the participant feels in the other's action, and thus simulation might be limited to those scenarios in which objects as well as space are directly shared.
To investigate whether such a limitation was the case we designed Experiment 6. This task is represented in Figure 8, Panel A. Participants were adjacent to each other, facing in the same direction. Each participant reached for their own target presented directly in front of them and also reached over their own obstacle. In essence each person was now undertaking their own individual reaching task, while the other person's reaches were to objects that were irrelevant. Importantly, although the other person's obstacle was irrelevant and never near the observer's reach path, it was within 40 cm of the observer's hand, so within peripersonal reaching space.
Figure 8. Seated adjacent. Panel A: Experiment 6: seated adjacent, no objects shared. Panel B: Experiment 7: seated adjacent, transparent Perspex barrier. In Experiments 6 and 7 neither the target nor the obstacle is shared by the participants, and they reach to different locations. The black square is the target; the white square is the obstacle. Importantly, in Experiment 6 the other person's obstacle was within reaching peripersonal space of the observer. In Experiment 7 the obstacle was visually within peripersonal space; however, the Perspex barrier (the dark grey vertical line) prevented the participants from being able to actually reach to the other's obstacle. The barrier was 50 cm high and 55 cm long. It extended over the edge of the table between the participants by 5 cm.
Method
Participants A total of 24 right-handed students (17 female), with a mean age of 18.9 years, participated in this study in return for course credits. All participants had normal or corrected-to-normal vision.
Procedure and design
In this experiment participants sat adjacent to each other. They both used their right hands and did not share each other's blocks. As in the previous experiments the obstacle when present was 20 cm from their hand, and the target block was 40 cm. The other participant's obstacle was 40 cm away from their reaching hand.
Results
We removed 1.62% of obstacle collision and failed recording trials.
Discussion
Here we show that it is not necessary for there to be joint action on shared objects in order for priming to occur, merely that the actions of the other person take place in peripersonal space. Furthermore, the viewpoint of observed actions has been changed again, revealing that these reach path priming effects are robust in a range of settings, as long as the obstacles are within peripersonal space.
EXPERIMENT 7: SEATED ADJACENT, TRANSPARENT PERSPEX BARRIER
In the final experiment we examined further properties of the peripersonal action priming effects: first, whether they were determined by the geometrical distance of the obstacle to the participant's hand and, second, whether they were determined by the object's potential for action. The approach is based on our previous studies of selective reaching (Meegan & Tipper, 1999; Tipper, Meegan, & Howard, 2002). We had previously demonstrated that when reaching for target keys to be depressed, irrelevant to-be-ignored distractors interfered with (slowed) response to the target and were associated with inhibition, as measured via negative priming effects (e.g., Tipper et al., 1992).
Figure 9. Means of the maximum height reached for non-obstacle reach trials, with their standard errors, for Experiments 6 and 7. The bars indicated by * show a significant t test result at p < .01. The bars are labelled with the trial type (n-2, n-1, n), where "No" represents non-obstacle trials and "O" where an obstacle was present.
These distractor interference and negative priming effects were hand centred, in that they were larger when the distractors were closer to the participant's reaching hand. We argued that such hand-centred effects were due to the near distractor winning the race for the control of action. This race model predicted that if we could slow down response encoding of the distractor, while maintaining the same visual information, interference and negative priming effects would be reduced. To this end we presented transparent obstacles over the distractor object, which made the key depression response to them, when they were targets on other trials, more difficult. The results were very clear. Although the visual properties of the distractor were held constant, making the response more difficult with the transparent obstacle placed over the key greatly reduced how much the distractor interfered, and it abolished the inhibition associated with it.
Therefore we replicated Experiment 6, but now presented a transparent Perspex barrier between the two participants (see Figure 8, Panel B). If it is simply the metric distance of the obstacle to the participant's hand that determines whether simulation of reach path is evoked, then we should see similar action priming to that observed in Experiment 6. However, following the findings of Meegan and Tipper (1999) and Tipper et al. (2002), we predicted that even though the other person's reach over an obstacle could be clearly seen, because it was not a potential obstacle for the viewing participant, it would not be simulated and hence would have no effect on the participant's subsequent reach.
Method
Participants A total of 24 right-handed students (14 female), with a mean age of 24.4 years, took part in this study in return for course credits. All participants had normal or corrected-to-normal vision.
Procedure and design
The seating arrangement and block placement in this experiment were identical to those of Experiment 6, with participants seated adjacent to each other. However, in this experiment participants were separated from each other by a clear Perspex screen (see Figure 8, Panel B). The screen was 50 cm high and 55 cm long. The screen extended from the table, between the participants, by 5 cm. The screen was 5 cm from the blocks of the participant seated on the left side.
Results
We removed 1.24% of obstacle collision and failed recording trials.
The results from this experiment replicate and extend the results from Experiments 4, 5a, 5b, and 6. Again there was a significant effect of the participant's own previous movements [n-2: F(1, 23) = 8.09, p < .01] and of the other person's movements [n-1: F(1, 23) = 4.987, p < .05] on the current reach trajectory (see Figure 9, Panel B). Furthermore there was a significant interaction between n-2 and n-1 [F(1, 23) = 4.38, p < .05], revealing that obstacle priming effects were more potent for n-2. Further planned contrast t tests revealed that reaches in each of the key n-2 and n-1 obstacle conditions were significantly (p < .01) higher than those for the baseline condition (No-No-No) [t(23) = 4.59 and t(23) = 3.46, respectively].
Discussion
This experiment tested two alternative accounts of the between-person obstacle priming effect. In one, the metric distance of the obstacle from the participant's responding hand was computed, and if this was perceived to be within peripersonal action distance, simulation of the other person's obstacle avoidance processes was activated. The alternative hypothesis was that the potential for action was encoded. Thus, although the obstacle avoidance of the other person's reach could easily be seen through the transparent barrier, because the obstacle could not be directly acted upon by the viewer the simulation processes would not be activated.
As noted above, our expectation was that the latter account would be supported. When viewing the other person's reach over an obstacle through a barrier, simulation would not take place, and hence no n-1 reach path priming effects would be detected. Clearly this was not confirmed, as significant n-1 obstacle priming was detected. Thus, after observing through a transparent barrier a person reach over an obstacle, the participant's subsequent reach was higher. This result contrasts with our previous work (Meegan & Tipper, 1999; Tipper et al., 2002). However, we note that in the previous studies the obstacle was placed over the target, so it did not influence much of the reach path, but affected the final adjustment of the hand as it depressed the target key. In contrast, in the current study the obstacle was placed midway between the hand and target and influenced the reach aspect of the prehension system. These different findings perhaps reveal a further contrast between reach and the final stages of action such as grasp and key depression. Certainly the present results of Experiment 7 support the notion that the simulation of another person's obstacle avoidance reach path is determined by the metric distance of the obstacle from the observer's hand, and not higher-level factors such as the potential for action.
GENERAL DISCUSSION
Previous studies of mirror systems in monkeys and humans have found evidence for the simulation of another individual's actions. This work has shown that a particular object-effector interaction is encoded in the understanding of the goals that another individual is attempting to achieve. This form of action simulation does not appear to consider the specific form of the action, as simulation can take place even when the action cannot be directly seen, or is only heard. However, the current studies have provided evidence that in addition to encoding and simulating goal-orientated actions, the means by which the goal is achieved is also simulated. We have shown that viewing another person avoiding an obstacle primes the following actions of the observer, such that their own reach trajectory deviates more strongly as a consequence.
Our experiments have shown that the reach simulation/priming effects are quite general, occurring in a number of different scenarios: Thus they generalize across a range of different interpersonal viewpoints (e.g., allocentric in Experiment 4 and egocentric in Experiments 6 and 7); they appear to be independent of whether the participants jointly act on the same objects (Experiments 4, 5a, and 5b) or share no objects (Experiments 6 and 7); and, perhaps most surprisingly, simulation of another's obstacle avoidance takes place when it is viewed through a transparent barrier (Experiment 7). However, the key determining factor in our experiments appears to be the distance between the obstacle and the viewer's acting hand. When the obstacle was beyond the comfortable reach space of the observer (Experiments 2 and 3), no simulation of the reach path appeared to take place.
At the time of submitting this article for publication we were not aware of any investigations that had examined the possibility that monkey mirror neuron activity may be mediated by whether or not certain actions take place in peripersonal space. However, since submission we have become aware of the work of Caggiano, Fogassi, Rizzolatti, Thier, and Casile (2009). They demonstrated that mirror cells in monkey F5 can be sensitive to the location of grasped objects. That is, when viewing another individual grasp an object, some cells only respond when the object is within peripersonal reach space, while others respond only when it is in far extrapersonal space.
However, two contrasts between Caggiano et al. (2009) and our current data are of note. First, although some mirror cells encode the distance of the object, the overall population of mirror cells is not restricted to peripersonal space: many respond to far space, and indeed the majority of cells are unaffected by object distance. In contrast, our study of reach trajectory suggests that the encoding of distance plays a more fundamental role, where reach path is only simulated when observed obstacles are in peripersonal space. Second, in a study similar to our Experiment 7, F5 mirror cells appear to compute the potential for action, in that peripersonal cells no longer respond when viewing an object in peripersonal space through a transparent surface; whereas we found that viewing obstacle avoidance reaches through a transparent surface had no effect on reach path simulation.
To account for apparently discrepant results, we hypothesize that whether or not observed actions are within peripersonal space is critical for the specific kinematic properties of the reach, but it may not always be so important for more general action goals. That is, priming effects examining achievement of general goals, such as whether a hand or foot response was observed, whether a peanut or apple was grasped, whether the object is visible or occluded, or whether the action is viewed or only heard, are not necessarily determined by the distance of the object from the viewer. Thus the simulations are of the general behavioural goals, where specific microdetails of the action are of less relevance. In contrast, specific effects of how a hand negotiates its way around an obstacle are far more detailed, and these are only simulated when relevant objects are within peripersonal action space.
As discussed, if observing a particular reach path primes the participant's own reach trajectory, then the details of the actual observed reach trajectories must be encoded. Whilst our study can give no indication as to where in the brain these trajectories are simulated, one can speculate that these observed actions are most likely being encoded in the same areas that encode one's own reach trajectory. Such areas implicated in one's own reaches, and which would most plausibly respond to observation of that action, include the inferior frontal gyrus, which Hamilton and Grafton (2007) have identified as being involved in the encoding of kinematics (in humans at least). Single unit recording studies in the macaque have identified a region in the intraparietal sulcus (IPS), the parietal reach region (PRR), which includes the medial intraparietal sulcus (MIP; e.g., Batista & Andersen, 2001). There is still speculation concerning where the human homologue of this region might be, and candidates include the medial occipito-parietal junction and the medial intraparietal sulcus (Culham & Kanwisher, 2001; Culham & Valyear, 2006). Of particular note, the superior parieto-occipital cortex (SPOC) selectively computes whether objects are within peripersonal action space (e.g., Culham, Gallivan, Cavina-Pratesi, & Quinlan, 2008), and encoding within this region is likely to be key to evoking the simulation of obstacle avoidance reach path.
It is also noteworthy that our obstacle avoidance priming effects are implicit. That is, participants were not required to make decisions about the other person's actions. Rather they were required to simply undertake their own reach to target task, while not making any judgements about the other person's actions. Such automatic encoding would seem to be necessary for coherent interpersonal behaviour. When interacting with others during everyday encounters a person rarely has to explicitly focus on the specific detailed forms of another person's individual reach-to-grasp actions. Rather, such processes are undertaken automatically to facilitate interpersonal interactions, enabling the limited capacity of conscious awareness to be focused on other task demands.
In contrast, many of the fMRI studies mentioned above often require more explicit encoding of actions, as when judging the weight of a box requires attention to be focused on how the box is lifted (e.g., Hamilton, Joyce, Flanagan, Frith, & Wolpert, 2007) or participants are asked to imagine themselves imitating the viewed action (e.g., Lestou, Pollick, & Kourtzi, 2008). In sharp contrast, our behavioural priming effects are implicit, in that the action of the other person is irrelevant to a participant's task. Whether the same neural systems mediate both implicit and explicit action simulation processes is an open issue.
Finally, a further interesting result from our study was the observation that in Experiments 2 and 3, in addition to the lack of priming by the observation of the other's action (n-1), participants' within-person (n-2) priming effect no longer occurred. That is, a person's previous actions no longer affected their current action. This is surprising in the light of Jax and Rosenbaum's (2007) study, which showed that the reach trajectory priming effect carried over a number of trials, and in light of the robustness of this within-person (n-2) effect in all of our other experiments.
We speculate that the maintenance and retrieval of prior reach trajectories is undertaken while relevant to the ongoing task. However, when a participant observes a reach over an obstacle that is outside their peripersonal space, and hence is never an object they have to reach over, retrieval of prior reach programmes is vetoed. That is, as Jax and Rosenbaum (2007) note, producing a reach that is more curved than is necessary is costly, but this is usually outweighed by speeded processing when prior reach programmes can be reaccessed, rather than computing new reaches on each trial. However, when those prior (observed) reaches are irrelevant to a participant's actions, because they are outside action space it is more efficient to recompute a reach on each trial rather than retrieve irrelevant actions from memory. Admittedly these are speculations to account for this unexpected finding, and future work is necessary to provide a stronger explanation.
In sum, our visuomotor systems have evolved to cope with complex environments. A central problem is directing action to relevant objects in the presence of irrelevant objects that compete for the control of action. In particular, the avoidance of objects is of fundamental importance, and constant collisions would become extremely costly (Graziano & Cooke, 2006). Thus the retrieval of previous motor programmes containing obstacle avoidance routines would be an efficient means of selectively guiding behaviour. Our current data suggest that even the mere observation of another person avoiding an obstacle can activate corresponding motor representations and facilitate a person's own obstacle avoidance behaviour by evoking more curved reaches. That these reach path effects are only evoked when the obstacle in the observed action is within the peripersonal space of the viewer suggests that such reach priming is only engaged when of relevance.
"year": 2009,
"sha1": "89c855d89e2b1acd097d718f482c634a94e61fec",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc2903771?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "89c855d89e2b1acd097d718f482c634a94e61fec",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Pectobacterium carotovorum subsp. carotovorum – the causal agent of broccoli soft rot in Serbia
Soft rot symptoms were observed on broccoli plants in several commercial fields in the western part of Serbia. Six strains of bacteria were isolated from diseased tissues and identified as Pectobacterium carotovorum subsp. carotovorum using conventional bacteriological and molecular methods. All strains were non-fluorescent, gram-negative, facultatively anaerobic, oxidase-negative and catalase-positive; they caused soft rot on potato and carrot slices and did not induce a hypersensitive reaction on tobacco leaves. They grew in 5% NaCl and at 37°C, did not produce acid from α-methyl glucoside, sorbitol or maltose, nor reducing substances from sucrose, but utilized lactose and trehalose, and did not produce indole or lecithinase. The investigated strains showed characteristic growth on Logan's medium and did not produce blue-pigmented indigoidine on GYCA medium nor "fried egg" colonies on PDA. The identity of the strains was confirmed by ITS-PCR and ITS-RFLP analyses and by sequence analysis of the 16S rRNA gene. In a pathogenicity assay, all strains caused tissue discoloration and soft rot development on inoculated broccoli head tissue fragments.
INTRODUCTION
Broccoli (Brassica oleracea var. italica Plenck) is one of the most important herbaceous biennial crops of the Brassicaceae family that originates in the Mediterranean. High nutritive value and many health benefits make broccoli one of the most valuable vegetable crops worldwide. Broccoli has been grown in Europe for centuries. In Serbia, it is grown in smaller fields, compared to other vegetables, but its production and consumption have increased significantly in recent years. However, the increase in broccoli production, both in fields and greenhouses, has contributed to frequent occurrence of various diseases of this crop.
Bacterial diseases frequently affect broccoli production, especially when environmental conditions are favorable for disease development. Soft rot caused by the pectolytic bacteria Pectobacterium carotovorum subsp. carotovorum and Pectobacterium carotovorum subsp. atrosepticum, and by Pseudomonas spp. (Pseudomonas marginalis and Pseudomonas fluorescens), is a widespread and economically important disease of broccoli and other Brassicaceae (Canaday et al., 1991; Koike et al., 2007). Similarly, black rot of broccoli, caused by Xanthomonas campestris pv. campestris, occurs frequently and causes V-shaped lesions on leaf margins and blackened veins associated with systemic movement of the pathogen in the plant (Williams, 1980). During 2012, soft rot symptoms on broccoli plants were observed in the Šabac and Bogatić localities, where the vegetable is grown intensively. After the first harvest, water-soaked areas were observed on broccoli stem tissue, followed by maceration, which progressed into soft rot decay of entire plants. Disease incidence was approximately 30%. The type of symptoms observed on broccoli tissue indicated a possible infection with phytopathogenic bacteria. As symptoms of soft rot can be caused by bacteria belonging to two genera, Pectobacterium and Pseudomonas, our research focused on studying the etiology of the disease and identifying the causal agent of broccoli soft rot in Serbia.
Isolation of bacteria
Samples of broccoli plants showing symptoms of soft rot were collected after the first harvest of broccoli heads in September and October of 2012. Isolation of the pathogen was carried out using fragments cut out with a sterile scalpel from the border area between apparently healthy tissue and diseased, dark brown tissue. The samples were homogenized in a few drops of sterile distilled water using a pestle and mortar. After a few minutes, a loopful of the resulting homogenate was streaked on nutrient agar medium (NA, Torlak, Belgrade) and incubated for 2-3 days at 27°C.
Pathogenicity test
Pathogenicity of the isolated strains was tested by inoculation of broccoli head tissue fragments. Inoculum was prepared from a 24-h-old culture of the tested strains grown on NA and suspended in sterile distilled water. Concentration was adjusted to approx. 10^8 CFU/ml (OD600 = 0.3). Three florets per strain were inoculated by pricking petals with a syringe and hypodermic needle and depositing a droplet of bacterial suspension at the point of inoculation. Sterile distilled water was used as a negative control, while P. carotovorum subsp. carotovorum, strain KFB 85, was used as a positive control. The inoculated florets were placed in a sealed plastic container and incubated under high humidity conditions at 28°C. Tissue discoloration and development of soft rot around inoculation points were monitored over the next 48-72 h.
The strains were also tested for hypersensitivity on tobacco cv. White Burley (Klement et al., 1990). The suspension of bacteria (approx. 10^8 CFU/ml) was injected with a syringe and hypodermic needle into tobacco leaves. Pseudomonas syringae pv. syringae, strain KFB 0103, was used as a positive control, while sterile distilled water was used as a negative control. Necrosis of the infiltrated area after 24 h was considered a positive reaction.
Pectolytic activity of the strains was checked by inoculating potato tuber and carrot slices. Plant material was washed, then sprayed with 95% ethanol and allowed to dry at room temperature. Potato tuber and carrot slices were cut with a sterile scalpel and inoculated by placing a 24-h-old culture of each strain in the centre of each slice, using a bacteriological loop. Inoculated plant material was incubated at room temperature for 24-48 h to allow soft rot symptoms to develop.
Molecular identification
In order to detect and differentiate Pectobacterium species, a PCR assay using universal primers (L1/G1) developed for the 16S-23S intergenic transcribed spacer (ITS) region, followed by restriction fragment length polymorphism (RFLP) analyses (Toth et al., 2001), was performed. DNA templates were prepared by heating bacterial suspensions (approx. 10^8 CFU/ml) at 95°C for 10 min; the lysates were then incubated on ice for 5 min and centrifuged for 5 min at 8000 rpm. The PCR reaction was performed in a total volume of 50 μl reaction mixture containing 1× PCR Master Mix (Fermentas, Thermo Fisher Scientific, Waltham, MA), 0.4 μM of each primer and 2 μl of template DNA. The temperature profile was as follows: initial denaturation at 94°C for 5 min; 35 cycles of denaturation at 94°C for 1 min, annealing at 55°C for 2 min, and extension at 72°C for 2 min; and final extension at 72°C for 2 min. The PCR reaction was conducted in a Thermo Cycler 2720 (Applied Biosystems, USA). PCR products were separated by 2% agarose gel electrophoresis in Tris-acetate-EDTA (TAE) buffer, stained in ethidium bromide (1 μg/ml) and visualized under UV light by a digital imaging camera (Vilber Lourmat, France).
PCR products were further digested with restriction endonucleases RsaI and HhaI (CfoI) (Fermentas, Thermo Fisher Scientific, Waltham, MA), as recommended by the manufacturer. The digests were resolved by electrophoresis in 2.2% agarose gel at 55 V for 2 h, followed by staining and visualization as previously described.
16S rDNA sequence analysis
PCR was performed using the primers fD1 and rP2, which have been designed for most eubacterial 16S ribosomal DNA and amplify a fragment of about 1500 bp in the 16S rRNA gene (Weisburg et al., 1991). The reaction mixture (50 μl) consisted of: 1× PCR buffer (50 mM KCl, 10 mM Tris-HCl (pH 8.8), 1.5 mM MgCl2), 20 µM of each primer, 0.2 mM of each deoxynucleoside triphosphate, 2.5 U Taq DNA polymerase and 2 µl of template DNA prepared as described previously. PCR amplification reactions were performed in a Thermo Cycler 2720 (Applied Biosystems, USA) using the following conditions: initial denaturation at 94°C for 4 min; 35 cycles consisting of 94°C for 1 min, 58°C for 1 min and 72°C for 3 min, and a final extension cycle at 72°C for 15 min before cooling at 4°C. Amplified PCR products were resolved by 1.5% agarose gel electrophoresis in TAE buffer, stained and visualized as previously described.
The PCR products of two strains were purified and directly sequenced in both directions (Macrogen Europe) using the same primers as for PCR amplification. The obtained sequences were edited and assembled using PREGAP 4 and GAP 4 of the Staden software package (Staden et al., 1998) and then compared with available sequences deposited in GenBank (http://www.ncbi.nlm. nih.gov/Genbank) using nucleotide-nucleotide Basic Local Alignment Search Tool (BLAST) for nucleotide (blastn) alignments.
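For readers who wish to script the GenBank comparison step, the sketch below shows how an assembled 16S rDNA consensus sequence could be submitted to blastn with Biopython; it is not the pipeline used in this study, and the input file name is a hypothetical placeholder.

```python
# Minimal sketch (not the original pipeline): submitting an assembled 16S rDNA
# sequence to NCBI blastn and listing the best GenBank matches.
# Requires Biopython; the FASTA file name is hypothetical.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = SeqIO.read("broccoli_strain_16S.fasta", "fasta")   # assembled consensus sequence
result_handle = NCBIWWW.qblast("blastn", "nt", record.seq)   # online BLAST against GenBank nt

blast_record = NCBIXML.read(result_handle)
for alignment in blast_record.alignments[:5]:                # report the top five hits
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  identity={identity:.1f}%  e-value={hsp.expect:.1e}")
```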
Isolation of bacteria and pathogenicity tests
Bacteria were isolated from soft-rotted broccoli tissue on nutrient agar, forming shiny, greyish-white, round colonies, 2 mm in diameter, after 48 h of incubation.
Although a large number of isolates were collected from two localities (Šabac and Bogatić), six strains were selected for further investigation based on their pectolytic activity.
Symptoms of tissue discoloration and soft rot of broccoli florets developed within 48 to 72 h after inoculation (Figure 1). No symptoms developed on control florets. Soft rot appeared on the inoculated carrot and potato tuber slices 24 h after inoculation, indicating a strong pectolytic activity of the studied strains. The strains did not induce hypersensitive reaction on tobacco leaves.
Physiological and biochemical characteristics
All investigated strains were non-fluorescent, gram-negative, facultative anaerobes, oxidase and lecithinase negative, and catalase positive. They grew at 37°C and in yeast salts broth medium containing 5% NaCl, did not produce acid from α-methyl glucoside, sorbitol and maltose, nor reducing substances from sucrose, but utilized lactose and trehalose, and did not produce indole (Table 1). The investigated strains formed light red, 1.5-mm-diameter colonies on Logan's medium, and did not produce blue pigmented indigoidine on GYCA medium nor "fried egg" colonies on PDA. These characteristics were typical for the bacterium P. carotovorum subsp. carotovorum.
Molecular identification
PCR amplification of the 16S-23S rRNA intergenic transcribed spacer region (ITS) using L1 and G1 primers resulted in two bands of 535 and 580 bp separated by electrophoresis in 2% agarose gel (data not shown). An analysis of ITS-PCR products confirmed that the strains isolated in this study belong to Pectobacterium spp. ITS-PCR products were digested using RsaI and HhaI restriction enzymes, and profiles characteristic for P. carotovorum subsp. carotovorum were obtained (Figure 2) (Arsenijević, 1997;de Boer & Kelman, 2001).
DIScUSSION
Broccoli production and consumption have increased significantly in recent years, which has also contributed to frequent occurrence of various diseases of that crop. Bacterial soft rot is a destructive disease of broccoli causing significant crop losses (Canaday et al., 1991). The disease affects both floret and stem tissue. It appears first as a water-soaked lesion followed by maceration and soft rotting of tissue (Ludy et al., 1997). Campbell et al. (1995) reported that soft rot of broccoli, caused by two genera of bacteria, Pectobacterium and Pseudomonas, had led to crop losses of between 30% and 100%, costing the UK industry £9.5 million annually (as cited in Darling et al. 2000).
P. c. ssp. carotovorum is one of the major soft rot causal agents of vegetable and ornamental plants worldwide (Bradbury, 1986;Arsenijević, 1997). In Serbia, this bacterium has already been confirmed as a pathogen of potato, pepper, carrot, celery, parsley, eggplant, lettuce, cabbage, cauliflower and fennel (Obradović, 1996;Arsenijević et al., 1997;Jovanović & Arsenijević, 1998;Jovanović, 1998;Obradović, 1999;Gavrilović et al., 2001;Gavrilović et al., 2009). Ivanović et al. (2009) reported P. c. ssp. carotovorum as an economically important pathogen of calla lily, causing significant losses in production of this ornamental plant. Based on earlier research in Serbia, cultivated Brassica spp. have proved to be very susceptible to that bacterium, confirming it as the causal agent of soft rot of seed cabbage plants (Arsenijević & Obradović, 1996). Warm and wet conditions favour the infection, although disease may develop over a wide range of temperatures. The bacterium enters plant tissue primarily through wounds, often created by insect feeding or during harvest. Severe damage should be expected after hailstorms since the bacteria can easily infect plants through wounds caused by hailstones (Arsenijević, 1997). Insects and irrigation water are also effective modes of bacterial spreading (Bhat et al., 2010).
The bacterial strains isolated from diseased broccoli tissue were identified as P. c. subsp. carotovorum based on their biochemical and physiological characteristics and confirmed by species and subspecies specific PCR and RFLP analyses of the 16S-23S ITS region, and 16S rRNA sequence analysis. Although P. c. subsp. carotovorum can be easily isolated on different nutrient media, nutrient agar has proved very suitable because characteristic colonies appear after 24-48 h of incubation. Other authors have also reported advantages of using this medium for P. c. ssp. carotovorum isolation (Arsenijević, 1997;Jovanović, 1998;Obradović, 1999). Using different biochemical tests, P. c. ssp. carotovorum can be differentiated from other Pectobacterium spp. (Arsenijević, 1997;de Boer & Kelman, 2001;Ivanović et al., 2009). Unlike P. c. ssp. atrosepticum or Dickeya spp. (syn. Pectobacterium chrysanthemi), the investigated strains grew in 5% NaCl and at 37°C, did not produce acid from α-methyl glucoside, sorbitol and maltose, nor reducing substances from sucrose, but utilized lactose and trehalose, and did not produce indole or lecithinase. In addition, the isolated strains showed characteristic growth on Logan's medium, and did not produce blue pigmented indigoidine on GYCA medium or "fried egg" colonies on PDA. The bacterium Pseudomonas marginalis has also been reported as the causal agent of soft rot of broccoli (Wimalajeewa et al., 1987). The bacterium has a wide host range too, but unlike the P. c. ssp. carotovorum, it produces levan and oxidase, fluorescent pigment on KB medium and utilizes glucose only in aerobic conditions (oxidatively) (Bradbury, 1986;Arsenijević, 1997).
All strains in our study showed pectolytic activity on potato tuber and carrot slices but failed to induce a hypersensitive reaction in tobacco. Previously, P. c. ssp. carotovorum strains had been shown to vary in their ability to cause HR in tobacco and in the presence of the type III secretion system (Yap et al., 2004;Kim et al., 2009;Baghaee-Ravari et al., 2011;Nabhan et al., 2012).
In addition to studying biochemical and physiological characteristics of the isolated strains, their identity was also confirmed by PCR and RFLP analyses of the ITS region, as well as an analysis of 16S rRNA sequences. ITS-PCR followed by ITS-RFLP proved to be reliable methods for classifying the strains of Pectobacterium spp. The obtained RFLP profiles of the studied strains were consistent with previously reported results (Toth et al., 2001;Golkhandan et al., 2013).
As the production of broccoli is currently increasing in Serbia, more attention should be paid to the presence of P. c. ssp. carotovorum, which could cause significant production losses. As the bacterium can successfully survive in plant debris and soil, infected field residues make an important source of inoculum facilitating bacterial dispersal. Therefore, one of the measures for suppressing and controlling the bacterium could be crop rotation with soft rot-resistant plants, primarily legumes. Chemical control has limited efficiency, and priority should be given to cultural practices.
ACKNOWLEDGMENT
This research was part of the projects III46008 "Development of integrated management of harmful organisms in plant production in order to overcome resistance and to improve food quality and safety" and TR31018 "Introduction of integrated managing and usage of modern principles in plant protection for the control of harmful organisms" funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia. | 2018-12-10T20:59:01.677Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "b03e7e92d16e0ffee25d982018a6b513c9903b91",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.2298/pif1404249g",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7e5ca8ddd225c25f98c4fdb22128803891ae3421",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
11737781 | pes2o/s2orc | v3-fos-license | Design of a Global Medical Database which is Searchable by Human Diagnostic Patterns
We describe a global medical database which is designed for efficient evaluation. It allows language-independent search for human diagnostic parameters. The core of the database is a fully automated electronic archive and distribution server for medical histories of real but anonymous patients which contain patterns of diagnosis, chosen treatment, and outcome. Every pattern is represented by a feature vector, which is usually a sequence of numbers, and labeled by an unambiguous "pattern name" which identifies its meaning. Similarity search is always done only over patterns with the same pattern name, because these are directly comparable. Similarities of patterns are mapped to spatial similarities (small distances) of their feature vectors using an appropriate metric. This makes them searchable. Pattern names can be "owned", much as Internet domain names are today. This facilitates unbureaucratic definition of patterns, e.g. by manufacturers of diagnostic devices. Application: If there is a new patient with certain diagnostic patterns, it is possible to combine a part or all of them and to search in the database for completed histories of patients with similar patterns to find the best treatment. Confinement of the result by conventional language-based search terms is possible, and immediate individual statistics or regression analyses can quantify probabilities of success in case of different treatment choices. Conclusions: Efficient searching with diagnostic patterns is technically feasible. Labeled feature vectors induce a systematic and expandable approach. The database also allows immediate calculation of individual up-to-date prediction models.
INTRODUCTION
Every conscientious doctor is aware of the limits of his or her own local experience and appreciates possibilities to expand it. Advanced training can help, but even for specialists it would simply take too much time to cover, without relevant simplifications, the increasing complexity of all possible measurements, diagnoses and therapies. So additional decision aids are necessary: for given diagnostic results the doctor needs to know the possible therapies, their long-term consequences and the probabilities of success in case of this or that therapy. For these purposes prediction tools have been developed which use the experience from thousands of patients. These are of particular importance in clinical decisions with serious consequences. For example, in case of cancer it is necessary to choose a therapy which avoids stoppable progression as well as unnecessary trauma. For prostate cancer alone, numerous tools have been developed to make adequate predictions, for example neural networks [1], probability tables [2] and nomograms [3][4][5][6][7][8]; such tools are also available for sarcoma [9], melanoma [10], gastric carcinoma [11], renal cell carcinoma [12], and cancer of the breast [13], bladder [14], lung [15] and pancreas [16]. Here we can mention only a few examples; today there are so many models that for some situations selection becomes difficult.
All these models are derived from collections of patient histories. Meanwhile the web allows a more efficient, direct way: to store all these and further documentations in standardized form in a global database which is searchable by numerical patterns. Here every pattern is represented by a feature vector, which is usually a sequence of numbers, and labeled by an unambiguous "pattern name" which identifies its meaning. From such a database one could continuously update and refine existing models and develop new ones. It could also be used directly by the doctor to find, for a given patient, clinical records of patients with similar diagnostic patterns. These could contain the completed story after some chosen treatment and detailed, specific and valuable advice from colleagues; immediate individual statistics over these "similar patients", or even a complete regression analysis using all records with patterns of this kind, can be calculated, which allows estimating probabilities of success in case of different treatment choices. So valid decision help, and implicitly free individual advanced training, would always be possible if similar cases have been documented and uploaded somewhere. The more specific the diagnostic measurements are, the better. Typical searchable diagnostic patterns may be: - Results of blood tests, e.g. concentrations of tumor-associated antigens like PSA, f-PSA etc.
-Anamnestic data like age, gender, BMI; also body height can be relevant, e.g. in orthopedics.
There would be additional motivation for the manufacturer to provide diagnostic means which produce highly selective and reproducible results. This shows that there is much commercial potential, too.
Concise Questions and Answers
Due to the variability of diagnostic measures and patterns, making them searchable is a comprehensive task and requires a clear reply to the following questions:
• How can the doctor provide a diagnostic pattern for which similar patterns should be found on the database?
• How can the database recognize the kind of diagnostic pattern provided by the doctor and so isolate the set of comparable patterns on the database?
• How can the database quantify the similarity between the provided pattern and the comparable patterns with attached articles in the archive to calculate their rank in the search result?
It turns out that there are satisfying answers to these questions in all cases of comparable patterns. In brief: In case of concise patterns the doctor can enter the pattern directly by keyboard, e.g. as a sequence of numbers, together with a unique "pattern name" which specifies the kind of the pattern. In other cases the doctor can upload the pattern as a file from diagnostic software which is designed for handling this kind of pattern. If appropriate, this software may be connected with some digitizing device, or it may use data from a laboratory.
The database recognizes the kind of the pattern by the pattern name. Then it searches within the archive for publications whose associated patterns have the same pattern name. Those with the most similar numerical representation will have the highest rank in the search result.
Quantification of similarity depends on the kind of the pattern, which is known together with the pattern name. The numerical representation of the pattern is designed in such a way that the database can calculate similarity by direct comparison using an efficient algorithm.
ORGANIZATION OF THE DATABASE IN DETAIL
The database supports the universal pattern search concept [18], which could also be applied to the entire web. It is arbitrarily expandable: every kind of pattern is represented by a feature vector, which is a sequence of numbers, and labeled by an unambiguous name, the pattern name. Because arbitrarily many such names are conceivable, it is also possible to define arbitrarily many different kinds of patterns. Only patterns with the same pattern name are comparable, using a metric with a simple nonnegative distance function, e.g. Manhattan distance, Euclidean distance etc. [19]. For every pattern name the feature vector and the associated distance function can be individually defined. This means that every kind of pattern has its individual similarity criterion. Two patterns with the same pattern name are the more similar, the smaller the distance between their feature vectors is. Identical patterns have zero distance.
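As a minimal illustration of this concept (our sketch, not part of the proposal itself), the following fragment compares two hypothetical feature vectors that share a pattern name, using the Manhattan and Euclidean metrics mentioned above; the pattern name and values are invented.

```python
import math

def manhattan(a, b):
    # sum of absolute coordinate differences
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    # straight-line distance between the two vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two hypothetical patterns with the same pattern name are directly comparable.
p1 = {"pattern_name": "dn.blood-concentration.psapair", "vector": [4.1, 0.8]}
p2 = {"pattern_name": "dn.blood-concentration.psapair", "vector": [4.6, 0.5]}

assert p1["pattern_name"] == p2["pattern_name"]   # only then is a distance defined
print(manhattan(p1["vector"], p2["vector"]))       # 0.8
print(euclidean(p1["vector"], p2["vector"]))       # about 0.583
```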
The organizational details suggested below represent one efficient possibility for realization; variants are conceivable. What is important is that responsibility and the necessary work are clearly shared in a way that makes participation attractive.
Names and Conventions
First of all it is appropriate to explain some frequently used terms. Pattern: some digital form of information. A searchable feature vector, which is usually a sequence of numbers, represents it (all findings can be converted into such a numerical form; to avoid misunderstandings, the ways of conversion are defined and described in one place, the pattern domain). The dimensionality of the feature vector (the length of the number sequence) is variable; it depends on the pattern definition. Similarities of the original data are mapped to spatial similarities of the feature vectors.
Pattern Files
Every pattern file represents a well-defined pattern. It contains a header with the pattern name which identifies the kind of a pattern, the date, a link to associated patient records or text, and some additional information, e.g. optionally a link to original data (progress will continue, so techniques for feature extraction will also improve; if the numerical representation of a pattern contains features which are calculated from an original, e.g. from a picture, then the original data should also be uploaded and referred to, so that later advanced algorithms could be applied to calculate new additional searchable patterns). The feature vector, which is the numerical representation of the pattern, follows the header. We recommend a special filename ending for quick recognition and XML format as shown in [18].
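The exact file format is specified in [18] and is not reproduced here; the sketch below merely illustrates the idea of a pattern file with a header and a feature vector. The element names, the record URL and the output file name are hypothetical.

```python
# Illustrative only: the exact XML schema is defined in [18]; the element names
# and the record link used here are hypothetical stand-ins for such a pattern file.
import xml.etree.ElementTree as ET

root = ET.Element("pattern")
header = ET.SubElement(root, "header")
ET.SubElement(header, "patternName").text = "dn.blood-concentration.psapair"
ET.SubElement(header, "date").text = "2008-03-13"
ET.SubElement(header, "recordLink").text = "https://example.org/records/12345"  # hypothetical URL
ET.SubElement(root, "featureVector").text = "4.1 0.8"

# write the pattern file with a recognizable ending
ET.ElementTree(root).write("example.pattern.xml", xml_declaration=True, encoding="utf-8")
```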
Comparison of Patterns, Distance d
If a pattern is given for a search, it is necessary to quantify the similarity to other patterns on the database with the same pattern name. The result of such a comparison is the deviation or distance d ≥ 0, where d = 0 if the two compared patterns are identical and d > 0 otherwise; d is the greater, the more they deviate. The smaller d is, the higher is the rank of the associated URLs in the search result.
The multidimensional feature vectors which represent the patterns should be designed to be quickly comparable by the software of the database, using a short distance function for calculation of d.
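One possible form of such a comparison routine is sketched below; the record structure and the Manhattan metric are illustrative assumptions, and each pattern name could plug in its own distance function.

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def rank_by_distance(query, stored_patterns, distance=manhattan):
    """Return (d, record_link) pairs for all stored patterns that share the
    query's pattern name, smallest distance first.  The dictionary fields
    used here are hypothetical."""
    hits = []
    for p in stored_patterns:
        if p["pattern_name"] != query["pattern_name"]:
            continue                                  # only patterns of the same kind are comparable
        d = distance(query["vector"], p["vector"])
        hits.append((d, p["record_link"]))
    return sorted(hits, key=lambda t: t[0])           # identical patterns (d = 0) come first
```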
Pattern Names and Pattern Domains
The pattern name is a string which, like an Internet domain name, can contain letters, numbers and points. It is the name which uniquely identifies the kind of a pattern; in the sense of the W3C it is a URN, a uniform resource name [20]. To guarantee uniqueness also in case of expansion of the search method over the web we recommend the following convention: Let dn denote the name of an Internet domain in which pattern names are defined. Then these pattern names have the structure dn.*, where the ending * is a string without spaces. All definitions of patterns are done in the pattern domain, which is a special subdirectory with name dn.pat (Fig. 2). In our case dn is identical to the domain name of this database, if all definitions are done within it. Examples of pattern names may be: "dn.blood-concentration.xyantigen", "dn.blood-concentration.psapair", "dn.ultrasonic.heart-results-1", "dn.ekg1.avr", "dn.vertebral-body-heights", "dn.dna-seq.12", "dn.features.fundus.oculi.1", "dn.features.melanoma.4", "dn.ICD.10", "dn.evaluation.1" etc.
Pattern names make it possible to develop optimized structures and associated comparison algorithms for any kind of pattern independently of other kinds. Due to the variety of diagnostic methods and associated patterns it is necessary to share the work and to give motivation to participate. Therefore we follow the policy for Internet domain names, which has been very successful. According to our suggestion the owner of the Internet domain name dn also owns the pattern domain dn.pat and with it the privilege to define all pattern names of the form dn.*. The patterns with these names form a pattern group.
Motivation for Pattern Domain Owners
If a pattern group and domain should be useful and not ignored, its owner should: -If necessary, give information about associated digitizing devices, he/she may also sell them.
Someone who invests much work in optimization of his or her own patterns can gain from this, because an efficient pattern is more frequently used. Some consequences:
- Communication in one's own special field is more efficient.
- One's own pattern domain "dn.pat" is more attractive.
- One's own software and/or digitizing devices which are necessary for generation of the dn.* patterns are more attractive.
So there are scientific and commercial reasons which make pattern domains attractive. The pattern domain owners play an important role; Fig. (1) illustrates the task sharing:
Motivation for Authors
Certainly there would be much motivation for a search request, if there are appropriate diagnostic digitizing devices and a good database.
But is there enough motivation for a doctor within today's framework to invest work and upload documentations or articles, to share own experiences with colleagues? Perhaps the feedback on this article will give first answers. At least the success of existing electronic archives indicates that, after some time of familiarization, there can also be much motivation for doctors of medicine to become authors in a worldwide read open archive. They can contribute a lot to science and progress by plain, reality-conform documentation. Health professionals who frequently upload will become better known. Those who upload from the beginning will later get particular attention, also from other groups, because initially there will not be a large number of authors, and all articles will be accessible chronologically.
There is another way of feeding the database which will become more and more important: for some years there have been increasing efforts [21] to collect all the scattered medical documentations of one patient in a standardized set of electronic medical records, which is accessible as a whole. After anonymization these records could be integrated into the database, if the patient explicitly wishes that. In this case he should also have the right to comment and to rate. Of course this would be an additional motivation for the doctors to achieve a good treatment result. If wished, the patient could also be contacted, e.g. for exchange of experiences in self-help groups.
Patient Histories
The archive will contain a directory with patient histories. These can be standardized electronic medical records, as mentioned above. Additionally, all articles with honest documentations of medical treatments are welcome, even if the result of some treatment has been disappointing. If this is documented and uploaded, all colleagues could learn from it, and repetition of ineffective treatment experiments could be avoided. We recommend liberal and user-friendly possibilities for upload, because there is no danger of information overflow if there are appropriate selective search options, e.g. individual restriction to certain groups of authors or qualities, according to the wishes of the user. Of course there must be minimal formal requirements which have been approved [22]. Additionally, there should be at least one searchable pattern in every treatment documentation, and the numerical representation of all patterns must be consistent. Under these circumstances short articles are also very welcome, e.g. records of patients who participate in clinical trials. So the results of clinical trials would be directly available and systematically searchable; furthermore, many misunderstandings could be avoided, because the detours necessary today, with interpretations (by the reader of medical publications) of interpretations (by the author of medical publications, who interprets the clinical records), can be avoided. The database could make objective measurement results, and of course also intuitive quantitative ratings of patients, broadly available and searchable for computers, without interpretation.
Documentations of Clinical Trials, Quality Descriptor
Additionally there will be a directory for conventional articles which describe clinical trials. The associated patient records should also be uploaded and referred. They will automatically get links backwards (Fig. 2).
The author can designate his article as "xy percent documented", if in a prospective study the records of at least xy percent of initially participating patients are uploaded and evaluated. Then all referred patient records will get a mark which shows this quality. If one restricts a search request on records which belong to "100 percent documented" clinical trials, one has greater assurance that also patient records with unwanted outcomes are evaluated.
Values like xy could be integrated in a more general multidimensional pattern with name "QualityDescriptor" which can be associated to documents which fulfill a certain quality standard. It could be used as additional filter for a search.
The Search
For a pattern search the user must provide at least the pattern name and the feature vector. In case of patterns with short representation this can be done directly by typing, else indirectly by sending a file with all information to the database. Generation of such a file can be done e.g. by software which is connected with diagnostic devices, or software of laboratories which make blood analysis, or software of radiologists etc. (see examples ). More complex search requests are possible using a regular expression, e.g. for AND combination of pattern search with conventional text search.
Combination of Patterns, Multidimensional Search
It would also be possible to combine different patterns for a search. Here the weight of every pattern for determination of the search result order could be predefined by an additional number, e.g. as a relative percentage.
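One conceivable implementation of such a weighted combination is sketched below; the pattern names, the weights and the record layout are hypothetical.

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def combined_distance(query_patterns, record_patterns, weights):
    """Weighted combination of several pattern distances, with weights given
    as relative percentages.  All names and numbers here are hypothetical,
    e.g. weights = {"dn.ekg1.avr": 70, "dn.blood-concentration.psapair": 30}."""
    total = 0.0
    for name, weight in weights.items():
        d = manhattan(query_patterns[name], record_patterns[name])
        total += (weight / 100.0) * d          # each pattern contributes according to its weight
    return total
```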
Immediate Local Statistics
Connected with search is the possibility of "local statistics" which is done "near" the current patient: well-structured articles with clinical records should include not only diagnostic data but also the treatment and the patient's rating as searchable patterns. These are fully machine-readable. So it would be possible to collect all articles whose diagnostic patterns are similar to those of an actual patient (i.e. whose distance d is smaller than a given maximum) and calculate at once the mean rating of former "similar" patients in case of this or that treatment. This can also help a general practitioner who wants to decide to which specialist he should send the patient. Today the selection of a specialist can already be a preliminary decision for therapy.
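A local statistics of this kind could, for instance, be computed as sketched below; the record fields (treatment, rating) and the threshold d_max are illustrative assumptions, not a prescribed schema.

```python
from collections import defaultdict

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def local_statistics(query, records, d_max, distance=manhattan):
    """Mean patient rating per treatment among records whose diagnostic pattern
    lies within distance d_max of the query, i.e. 'near' the current patient."""
    ratings = defaultdict(list)
    for r in records:
        if r["pattern_name"] != query["pattern_name"]:
            continue
        if distance(query["vector"], r["vector"]) <= d_max:
            ratings[r["treatment"]].append(r["rating"])
    return {t: sum(v) / len(v) for t, v in ratings.items()}
```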
Predictions from an Up to Date Model
A local statistics is not difficult to interpret; moreover, it has the advantage that it can be quickly calculated. But especially if there are only a few patient records "near" the actual patient, the result could be imprecise due to coincidental fluctuations, and it can be more accurate [9] to consider all patient records with the same kind of pattern as the current patient for an immediate regression analysis or another appropriate precalculation, and make predictions from this. This could even save computational time, because a new precalculation is only necessary after new patterns of this kind have been uploaded.
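As one possible illustration of such a precalculated model (not a prescription), the sketch below fits an ordinary least-squares regression with scikit-learn, encoding the treatment as a one-hot indicator next to the diagnostic feature vector; the record layout is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_prediction_model(records, treatments):
    """Fit a regression that predicts the patient rating from the diagnostic
    feature vector plus a one-hot treatment indicator.  Refitting is only
    needed when new patterns of this kind have been uploaded."""
    X, y = [], []
    for r in records:
        one_hot = [1.0 if r["treatment"] == t else 0.0 for t in treatments]
        X.append(list(r["vector"]) + one_hot)
        y.append(r["rating"])
    return LinearRegression().fit(np.array(X), np.array(y))

def predict_rating(model, vector, treatment, treatments):
    one_hot = [1.0 if treatment == t else 0.0 for t in treatments]
    return model.predict(np.array([list(vector) + one_hot]))[0]
```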
Investment Costs
In 1991 arXiv [22] was founded. It is an archive for e-prints of scientific papers in the fields of physics, mathematics, computer science and quantitative biology which can be accessed via the Internet. In many fields of mathematics and physics, almost all scientific papers are placed on the arXiv. As of June 2007, arXiv.org contained over 423,000 e-prints, with roughly four thousand new e-prints added every month. Its costs have been estimated in [23]: "In combination with system maintenance and upgrades and managerial and administrative support, costs of arXiv operation should average about $200,000 annually". This should only illustrate the possible efficiency of such archives. The authors do most of the work, and they do it without payment.
In comparison to arXiv, a database with a pattern search option needs additional manpower, and we expect that the complexity of requirements increases in medicine. In the course of time special needs can emerge, e.g. conversion and recalculation of patterns from original data, or construction of interfaces for web services. In parallel, traffic would increase. Then allowance of discreet advertisements and other activities could secure solid finances (for example the database could support websites which initiate new contacts between doctors and patients, for a small fee, like eBay between seller and buyer, or initiate contacts in self-help groups, hopefully without fee; we can only mention this here, because more would exceed the scope of this article). On this condition private funding is possible. Public funding, however, could guarantee free access to the database. This should be discussed.
Inappropriate Uploads
All reality-conform documentations of patient histories and of clinical trials are welcome (some authors may tend to sugarcoat the outcomes of their own therapies; of course this would be inappropriate, and the database cannot prevent it, but its open structure helps to detect this in the long run; every author is responsible for his/her contribution). We hope that, as in natural science, also in medicine there will be many authors who want to share their knowledge and experiences, and that the self-control of these authors works well, too.
The database's personnel could control formal requirements and rough content. Additionally, all registered users have the possibility to rate any upload. Later the user who downloads information may decide individually (these individual preferences could be stored as a file to be quickly retrievable) whether rating and other criteria, e.g. the author's specialization, should be relevant for a search, or whether certain groups of authors should not be taken into consideration.
Quality of Pattern Definitions
Language is necessary for the initial definition of new patterns. Suitable patterns can be defined without room for misinterpretation. Because the definition is done only once, in the pattern domain, the patterns remain unambiguous. So they represent well-defined, machine-readable medical information. This aspect could contribute to the Unified Medical Language System (UMLS) [24].
Reliability of Pattern Definitions
Redefinition of a pattern with a given name should be avoided; instead, the new definition should be associated with a new pattern name within the same group, e.g. by simply appending an increasing number to the initial pattern name. If original data have been uploaded, calculation of new feature vectors with new associated pattern names would be possible even retroactively.
Integrity
As shown in Fig. (2), all contents of the database are accessible via web URLs. The initial structure is as simple and robust as possible. If there is a dead link, the rest remains intact. The database is fully open only for read. Writing of patient records, comments and articles to the database can be done only additively via upload. Deletion is possible only by the administrator.
At least one remote mirror of the database is necessary.
Complexity
It is advisable to think about possible barriers concerning the computational complexity of the project.
Complexity of Data Storage and Update
The storage space complexity seems to be no great problem because even the repeated backup of the total web has been done [25].
Time Complexity
The search time mainly depends on the dimensionality of the pattern representation, on the number of comparable patterns on the database, which are only those with the same pattern name (these would be accessible at once using an alphabetically sorted pattern name index; each entry of this index can point to cached collections of patterns with the same pattern name, so that these can be compared as quickly as possible), and on further confinement of the search by a conventional search string. Due to this preselection the subset of concerned patterns is usually relatively small, so that a quick search is possible. A quick search is also possible in case of many concerned patterns, if their dimensionality is small enough for an appropriate tree structure [26][27][28][29][30]. Only if very many high-dimensional patterns with the same pattern name are stored on the database, and if the search is not confined enough, can the search time become critical. In this case we could use parallelization techniques [31]. If we accept small errors, this could be combined with dimension reduction [32] and approximation methods [33][34][35].
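For low-dimensional feature vectors, such a tree structure could look as follows; this sketch uses SciPy's k-d tree on random placeholder vectors and is only meant to illustrate the idea, not a production index.

```python
import numpy as np
from scipy.spatial import cKDTree

vectors = np.random.rand(100000, 5)           # e.g. 100,000 stored 5-dimensional patterns (placeholder data)
tree = cKDTree(vectors)                       # built once per pattern name, reused for many queries

query = np.random.rand(5)
distances, indices = tree.query(query, k=10)  # the ten most similar stored patterns without a full linear scan
```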
Privacy
The name of the author must be published. The names of patients are not published, and the complete birth date is invisible, only the birth year. The patient history can only be uploaded if the patient explicitly agrees to that. On that condition the patient gets a secret number which will be associated
to all future documentations which concern him. Using this number and a password, the patient can also look in the database for these documentations and rate them. Later the investigator can decide how to use this additional information. Communication protocols which provide both anonymity and personal feedback have been proposed [36].
INNOVATIVE ASPECTS
There are already numerous medical databases which contain collections of patient histories, usually private and only used for one or a few clinical trials. There are also larger medical databases like the Duke Databank for Cardiovascular Diseases [37] and the ARAMIS Project [38] which led to decades of valuable clinical publications and show the benefits of open data collections. But all these databases are confined to their special application and the number of diagnostic patterns is very limited, because these are defined centrally by a few persons, e.g. by some developers.
The database organization described here overcomes this difficulty: the diagnostic and all other numerical patterns are not defined by the database's personnel, but decentrally by the "pattern domain owners". Together they have much more working capacity and can develop and describe an increasing number of valuable concepts for reproducible conversion of medical reality into appropriate feature vectors which represent the patterns.
The database's personnel concentrate on development of well-defined and efficient ways for handling of these numerical forms, so that the usual problems like similarity search, comparison, statistics and modelling can be solved quickly. So the database can be used universally for handling of all appropriately defined patterns. Language independent pattern search is one important application with obvious advantages. It can be easily combined with conventional text search.
Hints for Definition
It is not difficult to define patterns according to current research -the feature vectors can contain all necessary data which are measured in clinical trials. The associated patient records can be uploaded and the results are directly accessible and comparable. For identification of diseases among others a pattern defined according to the International Classification of Diseases (ICD) can be used. Parallely to this it would be advantageous to search for pattern definitions which systematically map subjective and physical similarities of symptoms to similarities of the feature vectors. One of the first steps would be to define appropriate curved coordinate systems for the constituents of the human body.
EXAMPLES
First Example, Initial Considerations
The initial considerations which led to the development of this idea arose from the field of orthodontics, which is concerned with the study and treatment of malpositioned teeth and the control and modification of facial growth. Cephalometrics is done for treatment planning [39,40]. Here lateral skull radiographs are taken under standardized conditions and measured (Fig. 4). The results can be used for building feature vectors. Using these data improves the prognosis of skeletal growth.
Advanced orthodontic techniques use devices which directly digitize the three dimensional tooth positions [41] (Fig. 5). From this data coordinate system independent feature vectors can be calculated for treatment planning.
We noticed that the approach can be generalized. Feature extraction from diagnostic findings is also possible in other areas of medicine. Often such findings are the basis of serious decisions.
The following examples illustrate this.
Second Example, Nontrivial Application and Prototype
Sometimes complex original data can need complex precalculation. If simple self-evident considerations (Fig. 10) are not enough, an appropriate transformation of pictures, sounds or curves is often the first step for calculation of feature vectors. For example in case of heart sounds a wavelet transformation allows analysis of the signal at different scales and times. Initially it is necessary to select and accurately border a representative period of the sound (Fig. 6a-c). Fig. (4). Cephalometric analysis; the angles and distances measured on the lateral skull radiographs (cephalograms) can be used for building a feature vector of this profile. Fig. (5). Measurements on the digitized three-dimensional surface of the dentition. Digitizing device: OraScanner™ (Orametrix, Inc. in Dallas).
The resulting wavelet coefficients (Fig. 7) can be used for building the feature vector which represents the pattern. Fig. (6a). Heart sounds in case of aortic valve stenosis; vertical axis: relative amplitude, horizontal axis: time in seconds. The brown dashed lines represent a bordering of the first period which has been set approximately by the user on the screen. Fig. (6b). A catching algorithm is applied which reproducibly refines the bordering of Fig. (6a). Fig. (6c). The bordered part of Fig. (6b) is stretched so that exactly one period remains. Fig. (7). The sound of Fig. (6c) after Daubechies wavelet transform; smoothed absolute values of the transformation coefficients for five different scales. They can be directly used for building a searchable feature vector of this heart sound.
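A rough sketch of this kind of preprocessing is given below; it uses the PyWavelets library and a synthetic signal in place of a real heart-sound recording, so the choice of wavelet, number of levels and resulting numbers are purely illustrative.

```python
import numpy as np
import pywt

# Placeholder for one bordered heart-sound period; a real implementation would
# read the digitized, stretched period as in Fig. (6c).
period = np.sin(np.linspace(0, 20 * np.pi, 1024))

# Daubechies wavelet decomposition; coeffs[0] is the approximation band and
# coeffs[1:] are the five detail bands (five scales).
coeffs = pywt.wavedec(period, "db4", level=5)

# Absolute coefficient values of the five detail bands, concatenated into a
# searchable feature vector for this sound.
feature_vector = np.concatenate([np.abs(c) for c in coeffs[1:]])
```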
Development and improvement of such calculations requires research. Remembering the variety of useful pattern structures it becomes clear that sharing of the work is necessary. The database cannot generate searchable patterns (Fig. 7), but it can store, compare and rank them (Fig. 8).
In the next section we show that even complex original data like MRI scans can lead to very compact patterns.
Third Example, Case Report
A common question to the database will be: should we operate, and if yes, which operation has the best results? If the operation has severe consequences, there must be a good justification for it. For example, Fig. (9a) shows the MRI of a 2-week-old osteoporotic compression fracture in the area of maximal kyphosis of the thoracic spine. (The male patient was only 44 years old, and an endocrine reason for his osteoporosis was not found. But there was a particularity: since the age of 18 he had performed intensive daily ergometer training and kept underweight, with a BMI of 17-18. When looking at the literature at the age of 18 he got the impression that this lifestyle is good for health, and he kept it up because he felt fit. The osteoporotic fracture was very surprising for him and his environment. After noticing the danger for his bones, he corrected weight and sport; ergometer training was done less intensively and partially replaced by strength training. Within 16 months the DXA T-score at L2 increased from -4.6 to -3.4, i.e. the bone density increased by about 20% within this time.) The treating surgeon was a specialist in spinal fusions and recommended a dorsoventral spondylodesis. The patient trusted him and accepted. After the extensive operation the patient read about kyphoplasty [42] and heard from experts that this minimally invasive method would have been adequate in his case. So he got the impression that the operation trauma (lateral thoracotomy, muscle dissection), which leads to chronic pain, had not been necessary. The surgeon, however, dislikes kyphoplasty [43] and remains committed to his operation (Fig. 9b). Obviously medical treatment dramatically depends on the experience of the chosen doctor. (Of course global experience is too large to be captured by a single person. But we can make it better accessible by collecting it and providing an adequate interactive possibility to find the part of it which is relevant for the current situation.)
This is not a good situation. A comprehensive and uniform source of information like the proposed one could help and avoid dissent. If there had been a good searchable database, before making a recommendation the surgeon could have looked for similar fractures in the database (similar with respect to (v,c,d,n,t) as defined in Fig. (10)) and could have asked for the chances of success in case of conservative therapy, kyphoplasty, minimally invasive endoscopic surgery [44], other therapies, or dorsoventral spondylodesis with lateral thoracotomy. If the latter had been best, he could have recommended and justified a large operation, and there would not have been any problem. If another possibility had been better, he could have recommended another treatment early enough and avoided a large unnecessary operation trauma with all its consequences.
PERSPECTIVE
If accepted, the database would have significant influence both on research and on clinical practice. Using the database, future researchers could systematically look for certain features of the human body, of medical decisions and of their outcomes. If there are no adequate symptoms but relevant risks, preventive selection of diagnostics according to statistics can be adequate. An example:
Prophylactic MRI Scans
We have seen the great significance of MRI scans. Because they seem to be innocuous, we recommend more extensive usage of this possibility also for prophylaxis: nearly all of us have lost a close person due to cancer which would have been detectable in an early state by MRI. Cancer is so frequent and so painful that we suggest, as prophylaxis, periodically standardized MRI scans of all interested people. (Of course health insurance can only pay for a limited number of scans. But the patient should know about their high detection rate, so that he/she can knowingly decide where to invest money.) At least scans of imperiled tissues should be done at an age in which these frequently lead to detection of a serious disease. (Not only cancer; Fig. (6a), for example, shows old osteoporotic fractures. Of course we know that it is difficult for the radiologist to evaluate a great number of scans of a patient who has no symptoms. Therefore it should be discussed whether the radiologist can be freed from liability in case of underdiagnosis; otherwise there is, especially in the initial state without experience from an existing large database, the danger of many false positive diagnoses.) Three-dimensional imaging is possible. From the most significant scans feature extraction could be done. The resulting feature vectors could be stored as searchable patterns in the database. From this we could systematically learn about feature changes which later correlate with serious diseases. This would lead to a well-founded basis for efficient MRI prophylaxis.
Of course such scans should be combined with other measurements, e.g. blood tests like PSA, if statistically meaningful. After establishment of the database we expect competition between diagnostic methods - the most meaningful methods can be more easily recognized and selected.
Advanced Possibilities for Evaluation
Besides efficient search there are additional possibilities. Immediate individual statistics and regression analysis have already been mentioned. Because the patterns are machine-readable, they could be evaluated automatically by software, e.g. for conversion or modelling; it would exceed the scope of this article to go deeper into this here.
Fig. (8). Exemplary output of our database prototype. The uploaded pattern represents the heart sound in case of aortic valve stenosis after wavelet transformation as shown in Fig. (7). Links to articles with the most similar stored patterns are listed first. The links are accompanied by structured information for test purposes. The distance d quantifies the deviation from the uploaded pattern; the first link points to an article with the same sound as the uploaded one, therefore the distance is zero. Fig. (10). Drawn are the lines with lengths d1, d2, d3, c2, v1, v2, v3 which are used for classification of the vertebral compression fracture and for calculation of three important numbers (v,c,d) which provide decision-relevant information about the geometry of the fracture. They show the relative remainder of the vertebral body ventral (v), central (c) and dorsal (d). Let d1, d2, d3, c2, v1, v2, v3 denote the scalar lengths of the lines drawn in Fig. (10). Then we can calculate the numbers v, c, d as follows: v = 2*v2/(v1+v3), c = 4*c2/(v1+v3+d1+d3), d = 2*d2/(d1+d3). Additionally the number n of the vertebra and a representative measurement of bone density t, like the DXA T-score, may be important, so that we could define the 5-dimensional feature vector (v,c,d,n,t) for a vertebral compression fracture. For a search it is preferable, but not necessary, to know all 5 values. If, for example, someone has a compression fracture and only c and n are known, it is possible to search in the database for all vertebral fractures with similar c and n.
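The formulas from the caption of Fig. (10) translate directly into code; the sketch below also illustrates a search restricted to the known components (here c and n). The partial-distance helper is our own illustrative addition, not part of the original proposal.

```python
# Direct implementation of the formulas given in the caption of Fig. (10):
# relative remainder of the vertebral body ventral (v), central (c) and
# dorsal (d), plus vertebra number n and a bone-density value t (e.g. DXA T-score).
def fracture_feature_vector(d1, d2, d3, c2, v1, v2, v3, n, t):
    v = 2 * v2 / (v1 + v3)
    c = 4 * c2 / (v1 + v3 + d1 + d3)
    d = 2 * d2 / (d1 + d3)
    return (v, c, d, n, t)

# Hypothetical helper: compare only the components that are known for the
# current patient, e.g. c and n, using a Manhattan-style partial distance.
def partial_distance(query, stored, known=("c", "n")):
    idx = {"v": 0, "c": 1, "d": 2, "n": 3, "t": 4}
    return sum(abs(query[idx[k]] - stored[idx[k]]) for k in known)
```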
CONCLUSIONS
Up to now a great part of medical experience is lost. But it would be technically feasible to collect medical documentations anonymously, on a voluntary basis, in a growing global database and to search it efficiently using diagnostic patterns. It could also be used for immediate calculation of up-to-date prediction models. Associated pattern names and feature vectors induce a systematic and arbitrarily expandable approach. Funding is necessary for good software and hardware support of the database; the definition of appropriate pattern names and feature vectors can be shared among medical institutions and further interested parties, e.g. manufacturers of diagnostic devices. The interface can be designed to allow comfortable upload as well as comfortable download of publications and patient records. We hope to start a constructive discussion which ultimately leads to realization of the project. All creative suggestions are welcome.
What is Usual Up to Now:
-Databases with collections of patient histories are standard for documentation of clinical trials and medical studies. They are separated, specialized and often private.
-All these databases are confined to their special application and the number of diagnostic patterns is very limited, because these are defined centrally by a few persons, e.g. by some developers.
What the Proposed Database Offers:
-The proposed database has a universal, open and worldwide accessible interface. New anonymous patient records can be added interactively, if wished linked together with associated documentations of clinical trials.
-Here the number of allowed diagnostic and other well-defined numerical, machine-readable patterns is not limited. Their definition is done decentrally by "pattern domain owners", e.g. researchers or manufacturers of diagnostic devices, who have much more working capacity than the database's personnel. Any owner of an Internet domain name is automatically also the owner of a pattern domain which starts with the same name.
-The database allows numerical search for these patterns, if wished combined with conventional text search. Furthermore it provides standard algorithms for their numerical evaluation, like statistics and modelling. Periodic calculation of up-to-date prediction models is possible. | 2016-10-26T03:31:20.546Z | 2008-03-13T00:00:00.000 | {
"year": 2008,
"sha1": "fc12bbeff126111f27709a1e98546675f1b20302",
"oa_license": "CCBYNC",
"oa_url": "http://benthamopen.com/contents/pdf/TOMINFOJ/TOMINFOJ-2-21.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc12bbeff126111f27709a1e98546675f1b20302",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
6181087 | pes2o/s2orc | v3-fos-license | Prediction of novel target genes and pathways involved in bevacizumab-resistant colorectal cancer
Bevacizumab combined with cytotoxic chemotherapy is the backbone of metastatic colorectal cancer (mCRC) therapy; however, its treatment efficacy is hampered by therapeutic resistance. Therefore, understanding the mechanisms underlying bevacizumab resistance is crucial to increasing the therapeutic efficacy of bevacizumab. The Gene Expression Omnibus (GEO) database (dataset, GSE86525) was used to identify the key genes and pathways involved in bevacizumab-resistant mCRC. The GEO2R web tool was used to identify differentially expressed genes (DEGs). Functional and pathway enrichment analyses of the DEGs were performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID). Protein–protein interaction (PPI) networks were established using the Search Tool for the Retrieval of Interacting Genes/Proteins database (STRING) and visualized using Cytoscape software. A total of 124 DEGs were obtained, 57 of which were upregulated and 67 downregulated. PPI network analysis showed that seven upregulated genes and nine downregulated genes exhibited high PPI degrees. In the functional enrichment, the DEGs were mainly enriched in the negative regulation of phosphate metabolic process and positive regulation of cell cycle process gene ontologies (GOs); the enriched pathways were the phosphoinositide 3-kinase-serine/threonine kinase signaling pathway, bladder cancer, and microRNAs in cancer. Cyclin-dependent kinase inhibitor 1A (CDKN1A), toll-like receptor 4 (TLR4), CD19 molecule (CD19), breast cancer 1, early onset (BRCA1), platelet-derived growth factor subunit A (PDGFA), and matrix metallopeptidase 1 (MMP1) were the DEGs involved in the pathways and the PPIs. The clinical validation of the DEGs in mCRC (TNM clinical stages 3 and 4) revealed that high PDGFA expression levels were associated with poor overall survival, whereas high BRCA1 and MMP1 expression levels were associated with favorable progression-free survival (PFS). The identified genes and pathways can be potential targets and predictors of therapeutic resistance and prognosis in bevacizumab-treated patients with mCRC.
Introduction
Colorectal cancer (CRC) is the third most frequently diagnosed cancer and the second leading cause of cancer deaths worldwide, accounting for 10% of the worldwide cancer incidence and mortality [1]. Surgery is the treatment of choice for nonmetastatic CRC; however, approximately 20% of cases present with metastatic disease at the time of diagnosis and half of the patients experience recurrence and metastases even after complete resection of the primary tumor, leading to a poor prognosis and median overall survival (OS) of approximately 24 months [2,3]. The inclusion of cytotoxic agents (irinotecan and oxaliplatin) in fluoropyrimidine (intravenous 5-fluorouracil or oral capecitabine)-based systemic chemotherapy has been reported to improve the associated response rates (RR) from 15%-20% to 30%-40%, time to progression from 5-6 to 8 months, and OS from 10-12 to 20-24 months [3][4][5][6][7]. Furthermore, therapeutic benefits have been demonstrated to increase through the use of targeted drugs, such as angiogenesis inhibitors (bevacizumab, ziv-aflibercept, and ramucirumab) and anti-epidermal growth factor receptor antibodies (cetuximab and panitumumab), as the first and second lines of treatment in patients with K-RAS wild-type tumors [8][9][10][11][12].
Bevacizumab is the first agent to influence OS in patients with metastatic CRC (mCRC); when combined with irinotecan-based chemotherapy, the median OS improved from 15.6 to 20.3 months, median PFS from 6.2 to 10.6 months and RR from 34.8% to 44.8% [10]. The addition of bevacizumab to oxaliplatin-based chemotherapy improved median PFS from 8.0 to 9.4 months, though there was no significant difference in OS (19.9 to 21.3 months) [13], while in previously treated mCRC, oxaliplatin-based therapy improved both OS and PFS (10.8 to 12.9 months and 4.7 to 7.3 months, respectively) [14]. When compared for effectiveness, irinotecan-based chemotherapy has been shown to have an edge over oxaliplatin-based chemotherapy with the addition of bevacizumab (OS = 31.4 vs 30.1 months, PFS = 12.1 vs 10.7 months) [15]. These results have also been echoed in the MAVERICC trial (OS = 27.5 vs 23.9 months, PFS = 12.6 vs 10.1 months) [16]. Bevacizumab is a humanized monoclonal antibody that binds to vascular endothelial growth factor A (VEGF-A) and thus prevents interaction with its receptors, VEGFR-1 (Flt-1) and VEGFR-2 (Flk-1/KDR), leading to the regression of existing tumor blood vessels, normalization of the remaining blood vessels, and consequently tumor inhibition [17]. However, the therapeutic effects of bevacizumab are strongly affected by the lack of biomarkers that can facilitate selecting a population that might benefit from this medication and can predict therapeutic resistance [18][19][20].
In this study, we investigated the predictive biomarkers and pathways of bevacizumab resistance in mCRC by using microarray data from the Gene Expression Omnibus (GEO) database. The new biomarkers were assessed for their ability to predict OS and PFS. The identification of predictive and prognostic biomarkers can facilitate improving the therapeutic index of bevacizumab.
Microarray data
The gene expression profile of GSE86525 was obtained from the GEO (http://www.ncbi.nlm. nih.gov/geo/) database [21], which was sequenced on the GPL16699 platform of Agilent-039494 SurePrint G3 Human GE v2 8 × 60K Microarray 039381 (Agilent Technologies, Santa Clara, CA, USA). The GSE86525 dataset includes microarray gene expression data derived from three bevacizumab-resistant HT29 xenograft tumors and three untreated HT29 xenograft tumors as controls. In brief, HT29 cells (1 × 10 7 ) suspended in phosphate-buffered saline were subcutaneously injected into the flanks of BALB/c nude mice, and the tumor-bearing mice were treated with bevacizumab (5 mg/kg, twice a week) for 3 weeks to obtain bevacizumab-resistant tumors. MTT colorimetric assays were used to determine the 50% inhibitory concentration for bevacizumab-resistant and untreated xenograft tumors; the tumor sizes were compared between the two groups. The sample tissues were immediately frozen under liquid nitrogen after isolation. Total RNAs were extracted from the samples, evaluated, labeled and hybridized, using a SurePrint G3 Human GE 8 × 60K microarray (Agilent Technologies). Array images were captured using a DNA microarray scanner (Agilent Technologies), and the data were analyzed using Feature Extraction Software (Agilent Technologies) to obtain background-corrected signal intensities. The expression data were further analyzed using Gene-Spring GX software (version 11.0, Agilent Technologies), and the differentially expressed genes (DEGs) between the bevacizumab-resistant HT29 tumors vs untreated control were compared using the Fisher exact test, followed by multiple corrections using the Benjamini and Hochberg false discovery rate (FDR) method [22]. Gene sets with an FDR q-value of <0.05 were considered statistically significant, and all experiments were performed in triplicate.
Data preprocessing and DEGs screening
The data were recalculated using the GEO2R analytical tool to identify the DEGs associated with acquired bevacizumab-resistant CRC [23,24]. The t test and Benjamini and Hochberg method were used to calculate the P values and FDR, respectively [22]. The genes were considered to be differentially expressed for an FDR value of <0.05 and fold change (FC) of >2 or <-2 (log2FC > 1 or < -1). The DEG expression data were extracted, and a bidirectional hierarchical clustering plot was constructed using MultiExperiment Viewer (MeV; version 4.8) software [25].
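As a concrete illustration of this screening step, the following minimal Python sketch filters an exported GEO2R result table by FDR and log2 fold change; the file name and the column names ("adj.P.Val", "logFC") are assumptions for illustration and are not taken from the study.

import pandas as pd

FDR_CUTOFF = 0.05
LOG2FC_CUTOFF = 1.0  # |log2FC| > 1, i.e. fold change > 2 or < -2

def screen_degs(geo2r_table):
    # The table is assumed to be tab-delimited with "adj.P.Val" and "logFC" columns.
    df = pd.read_csv(geo2r_table, sep="\t")
    significant = df["adj.P.Val"] < FDR_CUTOFF
    changed = df["logFC"].abs() > LOG2FC_CUTOFF
    degs = df.loc[significant & changed].copy()
    degs["direction"] = degs["logFC"].apply(lambda fc: "up" if fc > 0 else "down")
    return degs.sort_values("logFC", ascending=False)

if __name__ == "__main__":
    degs = screen_degs("GSE86525_geo2r_results.tsv")   # assumed file name
    print(degs["direction"].value_counts())            # counts of up- and down-regulated DEGs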
Construction of PPI networks
Protein-protein interaction (PPI) networks were plotted using the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING; version 10.0; http://www.string-db.org/), an online database comprising comprehensive known and predicted interactions, to determine the interactive relationships among the DEG-encoded proteins. A combined score of >0.7 (high confidence) was used as the cutoff criterion [26]. PPI pairs were visualized using Cytoscape software (version 3.4.0; http://www.cytoscape.org/), and the CytoNCA tool was used to subcluster the plotted PPI networks [27][28][29][30]. Highly connected proteins with important biological functions were identified by calculating the degree (number of line connections between proteins) and the betweenness value (fraction of the number of shortest paths that pass through each node; a measure of how often nodes occur on the shortest paths between other nodes) of each node, with a degree cutoff criterion of ≥2.
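The degree and betweenness calculations described above can be sketched as follows. This is an illustrative Python example using networkx rather than the CytoNCA workflow itself, and the edge-list file name and its three-column format (two gene symbols and a combined score per line, no header) are assumptions.

import networkx as nx

def build_ppi(edge_file, min_score=0.7):
    # Each line is assumed to hold "geneA geneB combined_score" with no header.
    g = nx.Graph()
    with open(edge_file) as fh:
        for line in fh:
            a, b, score = line.split()
            if float(score) >= min_score:   # keep high-confidence interactions only
                g.add_edge(a, b)
    return g

if __name__ == "__main__":
    g = build_ppi("string_interactions.tsv")            # assumed file name
    degree = dict(g.degree())                           # number of direct interaction partners
    betweenness = nx.betweenness_centrality(g)          # how often a node lies on shortest paths
    hubs = [n for n, d in degree.items() if d >= 2]     # degree cutoff of >=2, as in the text
    for node in sorted(hubs, key=degree.get, reverse=True):
        print(node, degree[node], round(betweenness[node], 3))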
Enrichment analysis of DEGs
The Database for Annotation, Visualization, and Integrated Discovery (DAVID, http://david. abcc.ncifcrf.gov/) was used to classify the DEGs involved in the PPI networks according to their biological processes, molecular functions, or cellular components by using the Gene Ontology (GO) Consortium Reference (http://www.geneontology.org/) [31,32]. Gene sets with a P value of <0.05 and FDR value of <0.05 were considered statistically significant. In addition, the DAVID tool was used for pathway enrichment analysis, and the reference pathways were obtained from the Kyoto Encyclopedia of Genes and Genomes (KEGG; http:// www.genome.jp/kegg/) database website to perform KEGG pathway enrichment analysis for the DEGs involved in the PPI networks, with a P value of <0.05 and FDR value of <0.05 being considered statistically significant [33,34].
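The over-representation logic behind such enrichment tests can be illustrated with a hypergeometric test; note that DAVID applies its own modified Fisher exact (EASE) statistic, so the sketch below is only a simplified stand-in, and the gene sets and background size are invented for illustration.

from scipy.stats import hypergeom

def enrichment_p(deg_set, term_genes, background_size):
    # P(observing at least this many DEGs annotated to the term by chance alone)
    k = len(deg_set & term_genes)      # DEGs annotated to the term
    n = len(deg_set)                   # number of DEGs drawn
    term_size = len(term_genes)        # genes annotated to the term in the background
    return hypergeom.sf(k - 1, background_size, term_size, n)

if __name__ == "__main__":
    degs = {"CDKN1A", "TLR4", "CD19", "BRCA1", "PDGFA", "MMP1"}
    pathway = {"CDKN1A", "TLR4", "BRCA1", "PDGFA", "MMP1", "AKT1", "PIK3CA"}  # invented set
    print(enrichment_p(degs, pathway, background_size=20000))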
Clinical validation of the DEGs
The clinical assessment of the DEGs associated with bevacizumab resistance was performed using the SurvExpress tool; a dataset comprising a total of 808 cases was used in this study. Survival profiles were compared on the basis of a high or low mRNA expression level of a particular gene, and they were censored independently for OS and PFS in months and stratified further according to TNM clinical stages 3 and 4. A log-rank P value of <0.05 was considered statistically significant, and the data were analyzed using SPSS for Macintosh (version 21, IBM Corp., Armonk, NY, USA; www-01.ibm.com) for plotting Kaplan-Meier survival curves.
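A minimal sketch of this survival comparison, using the Python lifelines package rather than SPSS, is given below; the data frame column names ("PDGFA", "os_months", "os_event") and the input file are assumptions for illustration, not the study's actual data layout.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(df, gene, time_col, event_col):
    high = df[gene] >= df[gene].median()       # split patients by median expression
    result = logrank_test(df.loc[high, time_col], df.loc[~high, time_col],
                          event_observed_A=df.loc[high, event_col],
                          event_observed_B=df.loc[~high, event_col])
    km = KaplanMeierFitter()
    for label, mask in (("high", high), ("low", ~high)):
        km.fit(df.loc[mask, time_col], df.loc[mask, event_col], label=f"{gene} {label}")
        print(label, "median survival:", km.median_survival_time_)
    return result.p_value

if __name__ == "__main__":
    data = pd.read_csv("mcrc_stage3_4_expression_survival.csv")   # assumed file name
    print("log-rank P =", compare_survival(data, "PDGFA", "os_months", "os_event"))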
Gene co-expression in colorectal cancer data
The Cancer Genome Atlas (TCGA; https://cancergenome.nih.gov/) was used to obtain CRC data containing gene expression profiles. Level 3 RNASeq data containing gene expression profiles of 635 CRC cases (colon adenocarcinoma, N = 463; and rectal adenocarcinoma, N = 172) were obtained. The standard Pearson correlation coefficients (-1 to 1) and the coefficient of variation (the ratio of standard deviation to mean) of the desired gene pairs were calculated using SPSS for Macintosh (version 21, IBM Corp., Armonk, NY, USA; https://www-01.ibm.com). A P value of <0.05 was considered statistically significant and was used as the cutoff criterion.
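The correlation and coefficient-of-variation calculations can be sketched as follows; the input file name and the layout of the expression matrix (samples as rows, gene symbols as columns) are assumptions for illustration.

from itertools import combinations
import pandas as pd
from scipy.stats import pearsonr

GENES = ["BRCA1", "PDGFA", "CDKN1A", "TLR4", "CD19", "MMP1"]

def gene_correlations(expr):
    # expr is assumed to be a samples-by-genes expression matrix.
    for a, b in combinations(GENES, 2):
        r, p = pearsonr(expr[a], expr[b])
        print(f"{a}-{b}: r = {r:.2f}, P = {p:.3g}")

def coefficient_of_variation(expr):
    return expr[GENES].std() / expr[GENES].mean()   # ratio of standard deviation to mean

if __name__ == "__main__":
    expression = pd.read_csv("tcga_crc_rnaseq_level3.csv", index_col=0)   # assumed file name
    gene_correlations(expression)
    print(coefficient_of_variation(expression))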
DEGs screening and heat map clustering analysis
The GEO2R tool was used to identify DEGs from the data derived from the GPL16699 oligonucleotide microarray platform, comprising 62,976 probe sets. A total of 124 DEGs were determined to be associated with bevacizumab resistance, with 57 being upregulated and 67 being downregulated, as determined according to their log2FC and FDR values (S1 and S2 Tables). MeV software was used to construct a heat map to obtain the bidirectional hierarchical clustering of the DEGs and summarize the upregulated and downregulated DEGs (Fig 1).
Functional enrichment analysis
The DAVID tool was used to classify the DEGs involved in the PPI networks according to their common biological processes, molecular functions, or cellular components. Of the 1,454 GO gene sets included from the reference database, 111 were significantly enriched (P < 0.05; FDR < 0.05). Table 3 lists the top five gene sets; those involved in the negative regulation of phosphate metabolic process and positive regulation of cell cycle process were the most significant, and they include DUSP5, CDKN1A (p21/Cip1), KIRREL, PDGFA, TLR4, PSMB11, BRCA1, and PBX1.
Fig 2. Protein-protein interaction (PPI) network of differentially expressed genes: (A) up-regulated genes and (B) down-regulated genes. The PPI pairs were imported into Cytoscape software as described in the Methods and Materials. Pink nodes represent up-regulated genes, and green nodes represent down-regulated genes; the lines represent interaction relationships between nodes. The highlighted DEGs have a degree of ≥2.
Survival analysis of the enriched DEGs
The SurvExpress tool was used to assess the enriched DEGs for their ability to predict OS and PFS in mCRC. High PDGFA expression levels were associated with poor OS, whereas high BRCA1 and MMP1 expression levels were associated with favorable PFS. However, the expression levels of CD19, CDKN1A, and TLR4 were associated with neither OS nor PFS (Figs 4 and 5).
Mechanism of gene correlation in tumor tissues
To elucidate the mechanism underlying the gene-gene correlation of the DEGs, TCGA RNA-Seq level 3 CRC data were used. BRCA1 was negatively correlated with PDGFA, CDKN1A, CD19, and TLR4 and positively correlated with MMP1. Moreover, PDGFA was negatively correlated with CDKN1A, BRCA1, MMP1, and TLR4. TLR4 was positively correlated with CDKN1A and MMP1 and negatively correlated with CD19 and BRCA1. Furthermore, CD19 was positively correlated with CDKN1A and negatively correlated with BRCA1, MMP1, and TLR4. However, PDGFA and CD19 were not significantly correlated (Fig 6).
Discussion
The overall mortality of CRC has remained unchanged over the past decades, despite advances in surgical and medical therapy [40,41]. This is due to the difficulties associated with early detection of the disease and the development of acquired therapeutic resistance, leading to ineffective treatment in patients with metastatic disease [42][43][44]. Therefore, the etiological factors and mechanisms of acquired therapeutic resistance must be explored to improve treatment outcomes. In this study, the DEGs involved in the PPI networks were mainly enriched in the PI3K-AKT signaling pathway, bladder cancer, and microRNAs in cancer; however, only high PDGFA expression levels were associated with poor OS, whereas high BRCA1 and MMP1 expression levels were associated with favorable PFS. These discrepancies may be because the study cohort was not specifically on bevacizumab treatment, thus suggesting that biomarkers that predict OS do not specifically predict PFS. Therefore, to confidently interpret the study results, these biomarkers require further assessment in patients specifically treated with bevacizumab. The results of this study reveal PDGFA overexpression to be associated with bevacizumab resistance and the prognosis of patients with mCRC. These results are consistent with those of a previous study, which identified PDGFA as a potential predictor of therapeutic resistance and an individual prognostic marker for bevacizumab treatment, because PDGFA expression was observed to be decreased after single-dose bevacizumab treatment in responders but remained unchanged in nonresponders [47]. PDGFA targeting with the PDGF receptor has been reported to increase chemotherapeutic sensitivity in different cancers [47][48][49][50]. Therefore, our study supports the current understanding that PDGFA acts not only as a predictor of treatment response but also as a prognostic factor, because PDGFA upregulation not only limited the response to bevacizumab but also affected the prognosis of patients with mCRC in this study. Notably, PDGF overexpression has been implicated in bevacizumab resistance and poor prognosis in bevacizumab-treated patients because the PDGF pathway is considered an alternative pathway in the development of bevacizumab resistance [51, 52]. The expression levels of MMP1 and BRCA1 were associated with PFS in patients with mCRC. Although this study is the first to demonstrate the aforementioned relationship in mCRC, MMPs have received attention in terms of their role in the mechanism underlying resistance to antiangiogenic therapy, because increased MMP2 and MMP9 expression levels have been associated with resistance to the anti-VEGF and antiplacental growth factor drug aflibercept and with poor OS [53, 54]. Furthermore, MMP1 expression has been strongly associated with tumor metastasis and adverse outcomes in mCRC and has been suggested as a potential prognostic and therapeutic target [55-59]. A previous study reported that BRCA1 is associated with early-onset CRC and functions as a DNA repair gene in the response to cytotoxic drugs [60]. BRCA1 has been considered a predictor of treatment response and prognosis in breast, ovarian, and lung cancers [61][62][63][64][65][66]; however, its role in mCRC and bevacizumab resistance is yet to be explored.
The present results suggest that BRCA1 may exert protective effects in mCRC; therefore, BRCA1 should be thoroughly studied because BRCA1 targeting might not only increase the prognostic and therapeutic effects of bevacizumab but also affect the expression levels of its associated genes, namely PDGFA, CDKN1A, TLR4, and MMP1.
CD19, CDKN1A, and TLR4 have also been reported to influence therapeutic resistance or overall prognosis in cancer. CD19 has been associated with chemotherapy and multidrug resistance in many hematological tumors, and plays a central role in targeted therapeutics against B-cell malignancies (because of its expression patterns throughout the B-cell lineage), and against most B-cell malignancies with successful preclinical experiments and first-generation clinical trials [67][68][69][70][71][72]. CDKN1A has been implicated in cell cycle regulation, cell death, DNA repair, and cell motility [73]. Studies have demonstrated CDKN1A overexpression to be associated with poor prognosis in gastric and esophageal carcinomas [74,75]. Furthermore, studies have reported that TLR4 plays a role in CRC; polymorphisms increasing TLR4 signaling led to a highly aggressive CRC, whereas those reducing TLR4 signaling exerted protective effects [76,77]. In addition, high TLR4 expression levels have been associated with highly advanced grades of colonic neoplasia and with lower OS, a high probability of CRC relapse, and the presence of liver metastases in humans [78-81]. Studies have also suggested TLR4 to promote angiogenesis in different cancers by activating the PI3K-AKT signaling pathway to induce VEGF expression. In addition, TLR4 inhibition is associated with VEGF inhibition [82-84]. This finding can explain TLR4 downregulation in the bevacizumab-resistant tumors in this study; however, in vitro validation of this finding is required.
Notably, five of the six genes that were commonly enriched as well as associated with bevacizumab resistance belonged to the PI3K-AKT signaling pathway. Therefore, we suggest that the PI3K-AKT signaling pathway is responsible for restraining the therapeutic efficacy of bevacizumab in mCRC. This observation is in accordance with the results of previous studies, which have suggested that modifications in the PI3K-AKT signaling pathway increase bevacizumab resistance as an alternative pathway to VEGF inhibition [85-87]. Moreover, the occurrence of mutations in the PI3K-AKT signaling pathway remains the main challenge for mCRC treatment with new biological agents [86, 88, 89].
The present findings provide novel data that could predict bevacizumab treatment response and the emergence of resistance. Furthermore, this approach can predict patient prognosis; however, additional studies are required to validate the study findings and determine their clinical applicability.
Supporting information S1 | 2018-04-03T03:27:24.205Z | 2018-01-17T00:00:00.000 | {
"year": 2018,
"sha1": "741b559d05860d74c93a8d2b8057a0931fc5556c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0189582&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "741b559d05860d74c93a8d2b8057a0931fc5556c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55369781 | pes2o/s2orc | v3-fos-license | Improving Reading Comprehension Using Contextual Teaching and Learning ( CTL )
This study was aimed to find out whether teaching-learning reading comprehension using Contextual Teaching and Learning (CTL) would get better results than using Direct Instruction approach. The research used a quantitative method. The population of the research was students from the Islamic University of Kalimantan Muhammad Arsyad Al-Banjari. The researcher used cluster random sampling to select two sample groups, the control group and the experimental group. The control group with 25 students was the class that continued to use the Direct Instruction approach. The experimental group, with 20 students, was the class tested using the CTL approach. The results of the study showed that the students studying reading comprehension using CTL got better results than those who were taught using direct instruction. The test was a written test with five multiple choice questions plus an essay test. The test was used to obtain the data. The students’ scores were the data for the study of reading comprehension comparing CTL and Direct Instruction. Based on the results of the study, the researcher found that CTL was more effective than Direct Instruction because the null hypothesis was rejected and the alternative hypothesis was accepted so that CTL was found to be better for teaching reading skills. CTL encourages materials that are related to the students’ surrounding so that it helps them use English related to their daily activities.
INTRODUCTION
Most students still have problems learning English because, in Indonesia, English is a foreign language, which makes it difficult for them to learn. Nevertheless, it is a compulsory subject at high schools in Indonesia, which means that English must be studied at school and students must pass their final exams in English in order to graduate from high school. However, even though they had studied English for many years, many students still had trouble learning it and could not speak English fluently. These problems must be overcome by teachers so that students can learn the language well.
Most students have problems during English lessons, and so they pay less attention to the subject in the classroom. This makes their learning less than optimal; therefore, teachers must find effective methods to gain their attention and must also pay attention to their learning styles. In learning a foreign language, students should learn the parts of the language they can use and are expected to use, so that the language is used not only in the classroom but also in their daily activities. Furthermore, there are four skills in learning and using English or any other language, namely listening, speaking, reading, and writing. All four skills have to be learnt by students in order to be able to use the language in either spoken or written form.
The researcher had previously observed a teacher at a university who usually taught English by using direct learning in his classroom, in which his role was to model or demonstrate procedures to the students. Therefore, the teaching and learning process focused only on him as the centre of learning in the classroom. He also used only the textbook provided by the university as the learning material for English. In other words, the main approach in direct learning is modelling, in which the teacher demonstrates some procedure to the students. This made the students passive in the teaching and learning process in the classroom and shows that the teacher was not creative in developing English teaching-learning techniques and materials. Consequently, the researcher of this study was interested in applying the Contextual Teaching and Learning (CTL) approach in the classroom, especially for teaching English reading comprehension, since this was the skill in which the students had the lowest scores at the university.
Accordingly, in this research, the researcher focused on comparing the results from students studying reading comprehension using CTL with those of students using direct instruction in an extensive reading class.

Learning

Brown (2000, p. 7) put forward the following definitions of learning: 1. Learning is acquisition or "getting". 2. Learning is retention of information or skills. 3. Retention implies storage systems, memory, and cognitive organization. 4. Learning involves active, conscious focus on and acting upon events outside or beside the organism. 5. Learning is relatively permanent but subject to forgetting. 6. Learning involves some form of practice, perhaps reinforced practice. 7. Learning is a change in behaviour. Moreover, Emmit and Pollock (1991, p. 8) state that "learning is a process of making connections, identifying patterns, organising previously unrelated bits of knowledge, behaviour, activities etc., into new (for the learner) patterned wholes". Ellis (1986, p. 6) further says that the term "learning" is used to refer to the conscious study of a second language. Therefore, learning is a process of getting new information through activities and changes in behaviour in order to identify new patterns, and of using memory to store the new information.
According to Gagné (1985) in Brown (2000, p. 93), there are eight types of learning: 1. Signal learning: where individual learners make a diffuse response to a signal.2. Stimulus-response learning: the learner acquires a precise response to a discriminated stimulus.3. Chaining: what is acquired by the learner is a chain of two or more stimulusresponse connections.4. Verbal association: verbal association is the learning of chains that are verbal.
Basically, the conditions resemble those for other (motor) chains.However, the presence of language in the human being make this a special type of chaining because internal links may be selected from the individual"s previously learned repertoire of language 5. Multi discrimination: the individual learns to make a number of different identifying responses to many different stimuli, which might resemble each other in physical appearance to a greater or lesser degree.Although the learning of each stimulusresponse connection is a simple occurrence the connections tend to interfere with one another.6. Concept learning: the learner acquires the ability to make a response to a class of stimuli even though the individual members of that class might differ widely from each other.The learner learns to be able to make responses that identify an entire class of objects or events.7. Principle learning: in simplest terms, a principle is a chain of two or more concepts.
It functions to organize behaviour and experience.8. Problem solving: problem solving is a kind of learning that requires the use of internal processes usually referred to as "thinking".Gagné (1985) sees that the higher orders of learning in these types (levels 5-8) are developed upon the first few levels (levels 1-4) in which they need a great deal of progressive capacity of previous learning for their success.The first four levels apt to focus on the behavioural aspects of learning, while the next four levels focus more on the cognitive aspects.Sears (2003) affirms that Contextual Teaching and Learning (CTL) is a concept that helps teachers relate subject teaching matter to a real world situation.Sears has also written that CTL motivates learners to take charge of their own learning and to make connections between knowledge and its applications in the various contexts of their lives: as family members, as citizens, and as workers.Moreover, Suprijono (2009, p. 79) writes that: Contextual Teaching and Learning is a concept that helps teachers to connect between the instructional materials toward the real world and supports the learner to make the relationship between the learners" knowledge toward the implementation in their life as a member of family and society.CTL is an educational procedure that aims to help the learners understand the material of learning learnt by them by relating it to their own life context in a social and cultural environment.(Suprijono, 2009, p. 79) Based on the definitions above, CTL is an approach for helping teachers in the process of teaching learning using instructional material aimed to help students better understands the lesson.Johnson (2002, p. 25) has also said that CTL is an educational process that aims to help students see meaning in the academic material they are studying by connecting academic subjects with their context.Suprijono (2009, p. 63) asserts that based on Center of Occupational Research and Development (CORJD), the implementation of contextual teaching learning strategy is described as follows: 1. Relating: the process of study relates to the context in the real world or experience.2. Experiencing: studying is an activity of experiencing, students proceeds actively, tries to explore toward the things what learnt, tries to find out and create new things what learned by the students.3. Applying: studying is to emphasize in the process of demonstrating the knowledge and what possessed by the students and applying it.4. Cooperating: studying is a collaborative and cooperative process by studying in groups, communicating interpersonally.5. Transferring: studying is to emphasize in the ability of signifying the knowledge in a new context.The strategy above is well-known for its abbreviation, REACT (Satriani, Emilia & Gunawan, 2012).By providing this strategy in the learning environment, students can make connections between on what they are learning and how that knowledge will be used.Suprijono (2009, p. 46) says that the proponent theory of direct instruction is a social study theory.Based on this theory, direct instruction emphasizes that study is a behaviour change.He also informs that the main approach of direct instruction is modelling which means that the teacher has to demonstrate something, e.g.some procedure to the students.He further explains there are sequences to follow when doing modelling as below: 1.The teacher demonstrates the behaviour which he wants the students to follow and to achieve as a result of their study.2. 
The behaviour has to be related to other behaviour that the students have already learnt. 3. The teacher demonstrates the various steps in the behaviour, clearly structured and sequentially arranged, with an explanation of what was done and why after each step is completed. 4. Students must remember the steps that they have seen, and then they have to imitate them.

Direct Instruction

Furthermore, Suprijono (2009, p. 50) presents the phases of direct instruction as follows: 1. Establishing set: the teacher explains the objective(s) of the study and the study background, and prepares the student(s) to study. 2. Demonstrating: the teacher demonstrates the exact skill and presents it step by step. 3. Guided practice: the teacher gives out the first planned exercise. 4. Feedback: the teacher checks whether the students have succeeded in doing the task well and gives feedback. 5. Extended practice: the teacher gives the students a sequence of pre-prepared exercises to do, paying particular attention to covering a complete topic.
According to Muijs and Reynolds (2008, in Suprijono, 2009, p. 51), the steps of direct instruction could be developed as follows: 1. Directions: the teacher, after attracting her students' attention, explains the objective of the study work to her class and ensures that all her students know what has to be done. 2. Instructions: the teacher gives information and instructs well. 3. Demonstrations: the teacher makes a model using source materials and visual displays while giving an explanation, then asks the students to copy the demonstration. 4. Questions and discussions: the teacher asks the students questions and ensures that they are all involved in the activity. 5. Consolidating: the teacher maximizes efforts to strengthen and develop what has been taught through various classroom activities. 6. Evaluation: the teacher evaluates the efforts/results made by her students. 7. Summarizing: the teacher summarizes what has been taught and what has been learnt by her students during the study activity.
By following the steps above, it should support and reinforce the teacher in employing the direct instruction model throughout the day in her teaching.
Understanding Reading Comprehension
Reading is useful for language acquisition (Harmer, 2007, p. 99).Reading is a set of skills that involve making sense and deriving meaning from the printed word, and in order to read, the learner or user must be able to decode the printed words and also comprehend the meaning of what they have read (Linse & Nunan, 2005, p. 69).This means that reading is a set of skills which need comprehension in determining or finding out and making sense and meaning from the printed words and also the ability to decode (sound out) the printed word.Pang, et al. (2003, p. 6) proclaims that: Reading is about understanding written texts.It is a complex activity that involves both perception and thought.Reading consists of two related processes; word recognition and comprehension.Word recognition refers to the process of perceiving how written symbols correspond to one"s spoken language.Comprehension is the process of making sense of words, sentences, and connected text.Readers typically make use of their background knowledge, vocabulary, grammatical knowledge, experience with text and other strategies to help them understand written text.(Pang, et al., 2003, p. 6) Klingner, Vaughn and Boardman (2007, p. 2) also informs that "reading comprehension is the process of constructing meaning by coordinating a number of complex processes that include word reading, word and world knowledge, and fluency".Hence, reading is an activity in understanding written texts.For success in reading comprehension, the reader has to get the sense of knowledge and understanding from each passage in the text.Besides that, the reader has to be able to construct her thoughts and perceptions to find the meaning in each sentence, each paragraph and each passage.
A study by Peni (2011) found that CTL is more effective than the traditional Grammar Translation Method for teaching reading comprehension to students in a junior high school in Surakarta. There is an interaction between teaching method and students' intelligence in teaching reading comprehension, whereby students with high intelligence achieve better reading comprehension than those with low intelligence.
Moreover, CTL focuses on a process-oriented approach; hence, in the study by Peni (2011), students were seen to be actively involved in class activities such as sharing experiences, working together, and analysing and summarizing the reading texts. The teaching-learning process was more student-oriented, while the role of the teacher was merely that of a facilitator. The aim of the learning activity emphasized not only the substance and knowledge but also making them more meaningful to the students.
METHODOLOGY
The researcher used a quantitative experimental method, because this research compared the effects of two treatments, (1) CTL and (2) Direct Instruction, for teaching reading comprehension to students learning English at the Islamic University of Kalimantan Muhammad Arsyad Al-Banjari. The data collected were the students' scores, namely the results from the tests of reading comprehension after the teaching-learning process using either the Contextual Teaching and Learning (CTL) approach or the Direct Instruction approach. The data were collected by the researcher directly, acting as the teacher in this experiment. The treatment took one meeting each: CTL for one class and Direct Instruction for the other. At the end of the treatment, the same reading test was given to the students.
Research Variable
This research involved two variables. The independent variable (X) was the treatment the students received: CTL for the experimental class and Direct Instruction for the control class. The dependent variable (Y) was the scores from the reading test given to the students. Figure 1 shows the relationship between the independent and dependent variables. This research used a post-test-only control design, as displayed in Table 1. The model scheme used is as shown in Figure 2.
Population and Samples
The research population was selected from students in the Extensive Reading classes by cluster random sampling. They were students in their third semester. The researcher took two classes, one as the experimental class and the other as the control class. Thus, class A, the experimental group, had 25 students, and class B, the control group, had 20 students.
Instrument of Research
The researcher used the same test, consisting of 30 questions, to assess the students' reading comprehension abilities after the teaching and learning processes using CTL in one class and Direct Instruction in the other (see Appendix 1).
The Validity of the Test Instrument
According to Brown (2000, p. 387), by far the most complex criterion of a good test is validity, the degree to which the test actually measures what it is intended to measure. This means that validity is the degree to which the test measures how well the subjects obtain their results based on their own ability. Brown (2000) states that there are three kinds of validity: content, face, and construct validity. Content validity is the capacity of the test to measure the content of the lesson that will be tested. Face validity is the appearance of the test, which should make the test easy to take; it is necessary to ensure that the process of performing the test is straightforward. Construct validity is the capacity of the test to measure the knowledge or ability of the subject that the researcher has designed the particular test to measure. In this case, the researcher used content and face validity because both were appropriate for assessing the results from the students' reading comprehension tests. Furthermore, the researcher used the Pearson Product Moment correlation to test the validity of the instrument; the formula used is the standard Pearson product-moment correlation (a computational sketch is given below).
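A minimal computational sketch of this item-validity check is given below, assuming (as is standard for Pearson product-moment item analysis, though not stated explicitly here) that each item's scores are correlated with the students' total scores; the score matrix and the critical r value in the example are invented placeholders.

import numpy as np
from scipy.stats import pearsonr

def item_validity(scores, r_critical):
    # scores: students x items matrix of item scores (e.g. 0/1 for wrong/right)
    totals = scores.sum(axis=1)
    results = {}
    for i in range(scores.shape[1]):
        r, _ = pearsonr(scores[:, i], totals)
        results[i + 1] = (round(r, 3), "valid" if r > r_critical else "invalid")
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    demo = rng.integers(0, 2, size=(25, 30))        # 25 students, 30 items (invented data)
    print(item_validity(demo, r_critical=0.396))    # critical r value is illustrative only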
The Reliability of Instrument
Reliability is a criterion for judging whether a test is good or not (Brown, 2000, p. 385). A reliable test is consistent and dependable (Brown, 2004, p. 20). To test whether the instrument was good or not, the researcher used the Kuder-Richardson formula 21 (KR-21), as suggested by Tuckman (1999, p. 200); a computational sketch of the formula is given below.
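The KR-21 estimate can be computed from the number of items and the mean and variance of the students' total scores, as in the following sketch; the example score totals are invented placeholders, not the study's data.

import numpy as np

def kr21(total_scores, n_items):
    # KR-21: r = (k / (k - 1)) * (1 - M(k - M) / (k * s^2)), with k items,
    # M the mean total score and s^2 the variance of the total scores.
    scores = np.asarray(total_scores, dtype=float)
    mean = scores.mean()
    var = scores.var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - (mean * (n_items - mean)) / (n_items * var))

if __name__ == "__main__":
    demo_totals = [18, 20, 15, 22, 19, 17, 21, 16, 23, 14]   # invented totals out of 24 items
    print(round(kr21(demo_totals, n_items=24), 4))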
Technique Used for Data Collecting
The data sources for this quantitative research were the scores from a test. The researcher used CTL with the experimental class and Direct Instruction with the control class to obtain the reading comprehension results from students in the reading classes.
Techniques Used for Data Analysis
The data obtained in this research were quantitative. Descriptive statistics were used to describe the respondents' characteristics. The steps taken to analyse the data were as follows: 1. Collecting the data: the data were the scores from the tests of reading comprehension completed by the students. 2. Making the tabulations: obtaining the scores for each item and summing them up to get a score for each variable.
Test of Normality
This test was used by the researcher to test the data for normality. Before finding the Chi-Square values, there were some calculations that had to be done by the researcher, following the steps described by Riduwan (2011, p. 121), which are reproduced in the notes later in this article; a computational sketch of the procedure is given below.
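The following sketch illustrates the chi-square normality procedure described above (grouping scores into interval classes, deriving expected frequencies from the normal curve, and comparing the statistic with the table value); it follows the df = number of classes - 1 convention used in this article, and the score list is an invented placeholder.

import numpy as np
from scipy.stats import norm, chi2

def chi_square_normality(scores, n_classes):
    scores = np.asarray(scores, dtype=float)
    mean, sd = scores.mean(), scores.std(ddof=1)
    # class boundaries: lowest score - 0.5 up to highest score + 0.5
    edges = np.linspace(scores.min() - 0.5, scores.max() + 0.5, n_classes + 1)
    fo, _ = np.histogram(scores, bins=edges)                    # observed frequencies
    z = (edges - mean) / sd
    fe = (norm.cdf(z[1:]) - norm.cdf(z[:-1])) * len(scores)     # expected frequencies
    chi_sq = ((fo - fe) ** 2 / fe).sum()
    critical = chi2.ppf(0.95, df=n_classes - 1)                 # df = classes - 1, as in the text
    return chi_sq, critical

if __name__ == "__main__":
    demo = [65, 70, 72, 75, 78, 80, 80, 82, 85, 85, 88, 90, 92, 95, 75, 78, 83, 86, 68, 74]
    chi_sq, crit = chi_square_normality(demo, n_classes=6)
    print(f"chi-square = {chi_sq:.2f}, critical value = {crit:.3f}")   # normal if chi_sq < crit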
Test of Hypothesis
This research used a t-test (two-tailed test) to test the hypothesis concerning the differences in reading comprehension results between students studying with CTL and those studying with Direct Instruction in the Extensive Reading classes. The formula used to test the hypothesis was the independent-samples t-test; a computational sketch is given below.
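A minimal sketch of the two-tailed independent-samples t-test is given below; the two score lists are invented placeholders rather than the study's data.

from scipy import stats

ctl_scores = [80, 75, 85, 78, 90, 82, 77, 88, 84, 79]       # invented placeholder scores
direct_scores = [70, 72, 68, 75, 71, 69, 74, 73, 66, 70]    # invented placeholder scores

# equal_var=True corresponds to the pooled-variance t-test, which the
# homogeneity (F) test reported later is meant to justify.
t_obtained, p_value = stats.ttest_ind(ctl_scores, direct_scores, equal_var=True)
print(f"t = {t_obtained:.4f}, two-tailed P = {p_value:.4f}")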
Hypothesis
This research was designed to answer the question of whether there is a significant difference between the CTL and Direct Instruction approaches in the results for reading comprehension, so the writer proposed the following hypotheses: Alternative Hypothesis (Ha): "There will be a significant difference in the results for reading comprehension from students studying Extensive Reading using the Contextual Teaching and Learning (CTL) approach compared to the results from students taught using the Direct Instruction approach". Null Hypothesis (Ho): "There will be no significant difference in the results for reading comprehension from students studying Extensive Reading using the Contextual Teaching and Learning (CTL) approach compared to the results from students taught using the Direct Instruction approach".
FINDINGS AND DISCUSSION
To find out the validity of the instrument, the researcher used the Pearson Product Moment to determine whether each question was valid or not. Based on the results of calculating the Pearson Product Moment, only 24 of the 30 questions were valid, as can be seen in the table below. Valid items (24): 1, 2, 3, 4, 5, 6, 7, 9, 11, 13, 14, 15, 16, 17, 18, 20, 21, 24, 25, 26, 27, 28, 29, 30. Invalid items (6): 8, 10, 12, 19, 22, 23. Based on the table above, there were 24 valid questions and 6 invalid ones, so only the 24 valid questions were used in the test instrument for this research.
The Reliability Test of instrument
In this research, the criterion for the reliability of the instrument was that the K-R21 value must be bigger than the r-table value (K-R21 > r-table). To determine the reliability, the researcher first determined the degrees of freedom (df) at the 5% level of significance, df = n - 2 = 25 - 2 = 23, so the r-table value was 0.13. The K-R21 value was 0.8987, so the K-R21 was bigger than the r-table value, and thus the instrument was reliable.
The Normality Test
The researcher used the Chi-Square formula to find out whether the distribution was normal or not. Based on the results of the calculations, in the experimental group (Xe group) the interval classes consisted of 6 rows, so the degrees of freedom were df = N - 1 = 6 - 1 = 5. For this df, the Chi-Square table value was 11.070, while the calculated Chi-Square value was 5.18. The distribution was therefore normal, because the calculated value was lower than the table value (5.18 < 11.070). In the control group (Xc group) the interval classes also consisted of 6 rows, so the degrees of freedom were also 5. The Chi-Square value of the Xc group was 10.02, so its distribution was also normal, because the calculated value was again lower than the table value (10.02 < 11.070).
The Homogeneity
The homogeneity test examines whether the variances of the populations are equal or not. For the homogeneity test, the researcher used the Fisher test (F-test). Using this formula, the researcher found that the population variances were equal: the F-table value was 2.04 and the F-obtained value was 0.824, and because the F-obtained value was lower than the F-table value, the variances were homogeneous (a computational sketch of the F-test is given below).
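The variance-ratio (F) comparison can be sketched as follows; the score lists are invented placeholders, and the exact convention the study used for forming the ratio is not stated, so the sketch simply places the larger variance in the numerator.

import numpy as np
from scipy.stats import f

def f_test(group_a, group_b, alpha=0.05):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    var_a, var_b = a.var(ddof=1), b.var(ddof=1)
    f_obtained = max(var_a, var_b) / min(var_a, var_b)          # larger variance in the numerator
    df1 = (len(a) if var_a >= var_b else len(b)) - 1
    df2 = (len(b) if var_a >= var_b else len(a)) - 1
    f_table = f.ppf(1 - alpha, df1, df2)
    return f_obtained, f_table, f_obtained < f_table            # True -> variances treated as equal

if __name__ == "__main__":
    experimental = [80, 75, 85, 78, 90, 82, 77, 88, 84, 79]     # invented placeholder scores
    control = [70, 72, 68, 75, 71, 69, 74, 73, 66, 70]
    print(f_test(experimental, control))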
The Hypothesis Test
The t-table value was found by calculating the degrees of freedom (df = N1 + N2 - 1 = 25 + 20 - 1 = 44), giving a t-table value of 0.3551, while the t-obtained value was 0.7570. As the t-obtained value was bigger than the t-table value, the null hypothesis (Ho) was rejected and the alternative hypothesis (Ha) was accepted. This meant that there was a significant difference between the reading comprehension results of the students taught using CTL and those of the students taught using Direct Instruction in the Extensive Reading classes.
Discussion
Based on the findings above, the value of the normality test using the Chi-square test was 5.18, and this Chi-square (X2) value was lower than the X2-table value; the homogeneity of variance test showed that the variances were equal (homogeneous) across the two groups, as the F-obtained value was lower than the F-table value (0.824 < 2.04). A continuous range of equal intervals was also found. After testing the normality and the homogeneity, the writer tested the hypothesis. Because the assumptions for parametric tests were met, the researcher used the t-test (two-tailed test) to test the hypothesis. Based on the results of the t-test, the researcher found that the t-obtained value was 0.7570, which was bigger than the t-table value (0.7570 > 0.3551), and thus the null hypothesis was rejected. Answering the research question, this showed that there was a significant difference in the reading comprehension test results between students taught using Contextual Teaching and Learning and those taught using Direct Instruction in the Extensive Reading classes.
Because the researcher used the post-test-only control design, he conducted the treatment in the teaching and learning process for only one meeting in each class. In the CTL approach, the first thing he paid attention to was the material. First, he started the reading activity by asking and answering questions about the material in accordance with the context of the students' experiences in the real world, so that they could construct and inquire into new knowledge by relating it to their real-life context. Second, he had the students put effort into exploring the material given, trying to find the main idea and the new vocabulary gained during learning. Third, he demonstrated the pronunciation of words so that the students were able to read the text correctly. Fourth, he divided the students into groups of four so that they could cooperate with each other to solve the questions in the reading exercise. Finally, fifth, he ended the class by discussing the implications of the lesson for the students' real-life situations and experiences.
In the steps of using Direct Instruction, first, the researcher explained the goal of learning and the lesson information that the students were to learn. The class was teacher-oriented; therefore, it had less discussion time and each student worked individually. Interactions among students were limited, which made it difficult for them to achieve the learning goal because they focused only on the lecturer. Second, he demonstrated a model for the learning in the classroom by reading the texts while the students only paid attention to his reading aloud. Third, he assigned guided practice individually, and this also did not make the students more active. Fourth, he gave feedback by asking the students questions, giving them a few minutes to think before they answered. Fifth, he gave the students extended practice in the form of homework.
Based on the explanation above, it can be said that the teaching and learning process with CTL was more effective than with Direct Instruction because CTL made the learning process more active. CTL made students more dynamic in working out the answers together in groups, whereas Direct Instruction made students more passive because the learning focused on the lecturer, and the students acted only on what the lecturer instructed them to do individually.
The result of this research is similar to that of Peni (2011), because both studies found that CTL succeeded in activating the dynamics of the students in the classroom. Interactions became lively, and the teachers' role was that of facilitators. This shows that CTL is a useful approach because it relates the materials to the real world, so that the material taught can be applied in the students' real lives.
CONCLUSION AND SUGGESTIONS
The researcher used a t-test (two-tailed test) to test the hypothesis, that is, whether to accept the alternative or the null hypothesis. Based on the results of the t-test, the t-obtained value was bigger than the t-table value (0.7570 > 0.3551), which means that the alternative hypothesis was accepted and the null hypothesis was rejected. Based on the statistical comparison of the final scores of students taught using CTL and Direct Instruction, CTL produced significantly better results in reading comprehension. Thus, teachers are expected to renew their teaching approaches by applying effective ones, and CTL is among them. The materials used in teaching are related to the students' surroundings, so this helps them know how to use English in relation to their daily activities.
(4) _______ Some witnesses become confused when they see too many photographs of similar people. (5) _______ Police officers are better witnesses than ordinary people. (6) _______ American courts depend a lot on eyewitness testimony. (7) _______ The judge must decide if the witness's story is accurate.
Figure 1. The independent variable (X) and dependent variable (Y) relationship.
Steps of the normality test according to Riduwan (2011), as referred to in the Test of Normality section: a. Find the highest score and the lowest score. b. Determine the range (R): R = highest score - lowest score. c. Find the number of classes (BK): BK = 1 + 3.3 log n, where n = the total number of items. d. Find the length of the class interval. e. Make a tabulation as a frequency table. f. Find the average (mean) score. g. Calculate the standard deviation. h. Make the distribution of expected frequencies. The steps for finding the expected frequencies are: determine the class limits (the left score of the first interval class is decreased by 0.5 and the right score of each interval class is increased by 0.5); find the Z-scores; find the area under the normal curve by consulting the normal curve table; find the area under the normal curve for each interval class (the value of the first row minus the value of the second row, the second minus the third, and so on); and determine the expected frequency (fe) by multiplying the area of each interval class by the number of students. i. Find the value of Chi-square (X2). The formula is (Riduwan, 2011, p. 265): X2 = sum over classes of (fo - fe)^2 / fe, where X2 = the value of Chi-Square, fo = the observed frequency, and fe = the expected frequency.
Read the passage again. Write the main idea of each paragraph by completing the sentences below. (18) Bernard Jackson was ______________. (19) Eyewitnesses sometimes ______________. (20) Many factors ______________. (21) Police officers ______________. (22) Courts depend ______________. (23) It is the responsibility of ______________. (24) The testimony of eyewitness ______________. Consider the context of each underlined word. First, write its part of speech on the line. Then, choose the best meaning and write the letter of that meaning on the line. (25) The fire blazed for hours and nearly destroyed the old house. In this sentence, fire is a(n) ... . It means: a. to shoot at; b. red-hot flames; c. a match; d. to let someone go from a job. (26) The workers gathered cane in the fields. In this sentence, cane is a(n) ... . It means: a. a stick used as support in walking; b. a sugar stalk; c. to hit with a piece of wood; d. to weave wood pieces, as in a chair. (27) The supervisor asked everyone to make a concerted effort to maintain company production goals. In this sentence, concerted is a(n) ... . It means: a. to act as one; b. a musical performance; c. bodies; d. unified. (28) The chair recognized each speaker at the meeting. In this sentence, chair is a(n) ... . It means: a. a piece of furniture; b. sitting; c. a leader; d. to direct a session. (29) Rice is considered a staple in many countries. In this sentence, staple is a(n) ... . It means: a. a basic food substance; b. to fasten papers; c. a metal clip used to attach items; d. to supply with food. (30) The company of soldiers left their base to patrol the countryside. In the sentence, base is a(n) ... . It means: a. [remaining options truncated in the source].
About the author: [the author completed] his graduate study of English at the Teachers Training Faculty at the University of Lambung Mangkurat in August 2009. He finished his Master's study at the English Education Program at the University of Palangka Raya in August 2013. He had taught at the private junior high school SMP Johansyah Asan in Banjarmasin for 2 years. Now he teaches the courses of Extensive Reading, Curriculum and Material Development, and Measurement and Evaluation at the Islamic University of Kalimantan Muhammad Arsyad Al-Banjari.
Table 1. Research design. The experimental class received CTL and the control class received Direct Instruction; Ye = mean score from the final results of the experimental class, Yc = mean score from the final results of the control class.
Table 2. Material content in the reading test.
Table 3. Validity of the instrument test.
In line 27, "it is their responsibility to decide if…": who does "their" refer to? | 2018-12-07T13:09:01.974Z | 2016-09-10T00:00:00.000 | {
"year": 2016,
"sha1": "c360c5ef4982535c06566f9326187265d565e423",
"oa_license": "CCBY",
"oa_url": "http://jurnal.unsyiah.ac.id/SiELE/article/download/4964/4227",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c360c5ef4982535c06566f9326187265d565e423",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
16255623 | pes2o/s2orc | v3-fos-license | Decoration Increases the Conspicuousness of Raptor Nests
Avian nests are frequently concealed or camouflaged, but a number of species build noticeable nests or use conspicuous materials for nest decoration. In most cases, nest decoration has a role in mate choice or provides thermoregulatory or antiparasitic benefits. In territorial species, however, decorations may serve additional or complementary functions, such as extended phenotypic signaling of nest-site occupancy and social status to potential intruders. The latter may benefit both signaler and receiver by minimizing the risk of aggressive interactions, especially in organisms with dangerous weaponry. Support for this hypothesis was recently found in a population of black kites (Milvus migrans), a territorial raptor that decorates its nest with white artificial materials. However, the crucial assumption that nest decorations increased nest-site visibility to conspecifics was not assessed, a key aspect given that black kite nests may be well concealed within the canopy. Here, we used an unmanned aircraft system to take pictures of black kite nests, with and without an experimentally placed decoration, from different altitudes and distances simulating the perspective of a flying and approaching, prospecting intruder. The pictures were shown to human volunteers through a standardized routine to determine whether detection rates varied according to nest decoration status and distance. Decorated nests consistently showed a higher detection frequency and a lower detection latency compared to undecorated versions of the same nests. Our results confirm that nest decoration in this species may act as a signaling medium that enhances nest visibility for aerial receivers, even at large distances. This finding complements previous work on this communication system, which showed that nest decoration was a threat signal informing trespassing conspecifics on the social dominance, territory quality and fighting capabilities of the signaler.
Introduction
Among vertebrates, birds are renowned for their construction and architectural capabilities. They build nests as containers for their offspring and/or as roosting sites, and may invest a substantial amount of time and energy in building activities, even if many are ephemeral structures that last only a few weeks [1]. Avian nests are often concealed or camouflaged because individuals are frequently vulnerable to attacks by predators or aggressive conspecifics when incubating eggs or brooding nestlings [2,3]. Accordingly, behaviors and nest designs that minimize predation risk and/or brood failure are favored by natural selection. Thus, birds choose carefully the location and design of their nests in relation to the presence and density of predators [4,5] and any element that may increase conspicuousness, such as the whitish faecal sacs excreted by the nestlings of passerine species, tend to be removed readily by the attending parents [6].
Although crypsis seems to be sought by a large proportion of species, a number of taxa use conspicuous (e.g. artificial) materials for nest building or decoration, but the adaptive function of this behavior is still poorly understood (review in [7]). Typically, nest decorations have been suggested to act as extended phenotype signals in courtship displays. For instance, the addition of ornamental feathers outside the nest cup by female spotless starlings (Sturnus unicolor) or male blue tits (Cyanistes caeruleus) seems to have a role in sexual selection [8]. Conversely, in other species, marking the nest might be a way to advertise nest-site occupancy and social status (i.e. territoriality) to potential intruders, even in the absence of the owners [7]. Under this scenario, only high quality owners would expose their nest and assume the costs of aggressive encounters with conspecifics that may covet the territory, thus ensuring the reliability of the signal [9]. These costs were shown to hold true for a population of black kites (Milvus migrans), a raptorial bird that decorates the nest with white artificial materials [10]. In such study, the authors showed through observational data and multiple manipulative experiments that nest decoration: (1) was minimum for youngest individuals, peaked for individuals in prime age (7-11 years old) characterized by maximum reproduction and survival, and declined with senescence thereafter; (2) it acted as an honest signal informing conspecifics on the viability, territory quality, fighting ability and social dominance of the signaler; and (3) low quality individuals refrained from dishonest signaling due to the high costs associated with it (increased aggressive intrusions and risk of depredation), thus ensuring the reliability of the signal. However, given that black kite nests are often well hidden in the canopy, the crucial hypothesis that nest decorations increased nest-site visibility to other kites, which was assumed to be true, must be specifically tested to confirm that nest decoration could function as a reliable signal of threat against intruders.
Black kites build platform nests composed of tree sticks and branches. These open structures are lined up with finer materials such as twigs, soil residues or mud, and decorated just before egg laying by adding artificial materials, such as plastics or paper. Nest sites are often reused for several years in succession and new materials added every breeding season. In a previous study, decoration was rated as the percentage of the nest surface covered by non-natural fabrics, such as plastic or paper [10]. The rating was done by an observer who could not consider the actual visibility or conspicuousness of the nest-site as perceived by a flying intruder and that can be affected by factors such as nest size or tree architecture, among others. Thus, to fully test the hypothesis that decorated nests may function as signaling devices to trespassing conspecifics, it is essential to demonstrate that ornamented nests are indeed more visible from the air than they would be without decoration. The implications of such a demonstration are vast, because evidence for social status signaling through nest marking is very scarce in birds (e.g. [11]).
In this study, we used state-of-the-art UAS (Unmanned Aircraft Systems) technology to simulate the aerial perspective of trespassing, flying black kites, and assess whether decorated nests were more conspicuous than undecorated ones to a human observer. To this end, we flew at pre-determined distances from actual nests built by black kites a UAS, equipped with a digital high-resolution camera, and gathered images of the nests with and without an experimentally placed decoration. The images were later standardized using ad hoc prepared software and shown to volunteers through a standardized routine to determine whether detection rate varied according to nest decoration status and distance.
Ethics statement
The study was conducted in accordance with EC Directive 86/609/EEC for animal experiments, and with the current Spanish legislation involving aviation safety. Non-invasive methods were used during experiments. No animals were sacrificed, handled nor sampled and thus, animal ethic evaluation was not required. Field technicians had the required licenses to operate in the frequencies used for this work. All the field procedures were reviewed and accepted by the authority yielding the permit (Doñana National Park authorities, Junta de Andalucía; permit reference: 3236/FQH/mdcg).
Nest inspections
Field work was conducted in Doñana National Park (Spain: 37°6' N, 6°28' E) at the beginning of May 2014, when tree foliage is consistent with the average situation experienced by kites during the peak of territorial intrusions (trespassing) by non-breeding individuals attempting territory takeovers. By this date, most individuals were in the pre-laying or incubation stage of the breeding cycle (the mean laying date of the population is 15 April, but may range from March to mid-June), the periods with the highest decoration intensity and trespassing rates in the population [10]. A total of 15 nests were randomly selected for inspection within the Park, covering the most representative areas, tree types and range of nest-exposures (from very exposed to hidden in the canopy). Nests with eggs or chicks were excluded from the study to avoid disturbance, except for one nest that contained cold, already abandoned eggs, which were removed for contaminant-analysis before the flights and thus did not interfere with the experiment.
During each UAS flight-session, the target nest was climbed, cleaned of any decoration material if already present, and then assessed through UAS-flights twice, i.e. with and without being decorated by us with a white plastic bag. A white plastic bag was chosen because this is by far the most preferred (in relation to other colors and materials) and commonly used item (90% occurrence) by black kites in Doñana to decorate their nests [11]. The tree-climber, experienced with checking hundreds of kite nests in previous years, was instructed to place the white plastic bag so as to simulate a high level of nest decoration, as observed in real kite nests (e.g. see photo in Fig 3b of [11]). Thus, when decorated, the white bag covered 60-80 percent of the nest platform, corresponding to a high level of nest decoration [10]. This value was chosen based on the following reasons: (1) Financial and logistical constraints limited the sample size to a maximum of 15 nests and precluded the possibility of examining how visibility varied along a gradient of decoration intensity. Therefore, we decided to focus on a restricted range of decoration-levels corresponding to individuals that make the active decision to decorate their nest and benefit from it (see below). (2) In this sense, the basic question of the study was: "for a kite that decorates the nest, does this increase its potential visibility to other kites?". Based on previous experiments, the portion of the population that most clearly decorates the nest is composed of individuals in prime age [10]. These are the ones that most frequently collected experimental decorations when offered by the researchers [10], and, among those that collected these experimental items, more than half had decoration levels of 60-80%. (3) Individuals that decorate the nest profusely (i.e. at the end-tail of the decoration range) are those that will most benefit from the signal in terms of lowered intrusion rates and fights against challengers, as shown in [10]. Thus, they would be the ones most interested in "designing" a decoration configuration that is visible from as far as possible (otherwise, by the time a conspecific sees the decoration, it may be too late to avoid the aggressive encounters due to the intrusion). (4) Finally, by focusing on high decoration levels, the analysis of the basic question outlined above was conservative: if nest decoration, as simulated here, does not increase nest visibility and detection, then it would be even less likely to do so for lower levels of decoration.
In addition to the manipulation, each nest was characterized by its dimensions (length and width), measured in cm with a ruler, and "opening angles", defined as the three angles of unobstructed view of the sky, measured with a compass from the center of the nest-cup outwards: (1) to the right in horizontal, compared to the North; (2) to the left in horizontal, compared to the North; and (3) vertically, compared to the level of the platform. This characterized the potential breadth of visibility of the nest from the outside of the tree both vertically and horizontally.
We performed two flights per nest (one with and one without artificial decoration) using a hexacopter to photograph them (with a Sony Nex5 camera, 16 Megapixels resolution) at different distances and angles (Fig 1 and Figure A in S1 File). All nests were assessed following the same sequential procedure: (1) first, we took one zenithal image ("zenithal snapshot" hereafter), hovering at 50 m above ground level (AGL hereafter). (2) Then, the hexacopter flew at 10 m AGL around the tree, describing a full 360° circle of 15 m radius and taking a series of eight regularly-spaced photos of the treetop (hereafter "lateral snapshots"). (3) Finally, the hexacopter flew away from the nest and then approached it, taking a series of photos (hereafter "approaching snapshots") at distances of 115, 95, 75, 55, 35 and 15 m (at 30, 26, 22, 18, 14 and 10 m AGL, respectively; Fig 1). This sequence of declining distances and heights was chosen to simulate the progressive approach of a trespassing, prospecting black kite that intends to inspect the content of a nest. In Doñana, kites breed at high density in loose aggregations with small internest distances (e.g. [12]), so that, on average, a prospecting trespasser would start to enter the defended area and elicit the aggressive response of the owner between 30-120 m from a nest. Thus, the sequence of distances of the approaching snapshots covered the gradient ranging from approaching the border of a territory from its outside to a full intrusion, and mimicked the actual behavior observed during trespassing, although simplified into a straight-line flight.
We randomly alternated the sequence of decoration treatments within nests to minimize the potential effect of light conditions on the detectability of the nests. All flights were performed in daylight between 7:30 and 21:00 h (local time) by a professional team of operators and field coordinators from Flying-Cam (http://www.flying-cam.com). Each nest-assessment required around 1.5 hours of overall work (approximately 15-20 min to climb the tree and measure the nest characteristics, 45-50 min of active flight and 20-30 min of material download, assemblage, testing and reload on vehicles) to complete all recordings (of the decorated and non-decorated treatments), while moving from one nest to the next required ~20-60 min. Flights were performed in autonomous mode, guided by the on-board autopilot that followed a GPS waypoints-trajectory (pre-programmed in the previous days based on the GPS position of the nest location), but keeping the capability for the human pilot to take control of the aircraft in the eventuality of danger. The take-off and landing were performed in manual mode. A second operator using real-time video from the UAS adjusted the gimbal and remotely controlled the onboard camera to perform the snapshots of the nests. All the images were associated with their GPS coordinates and barometric altitude extracted from the flight tracks.
Preparation of nest images for visibility trials
We took 662 images from the 15 nests inspected with and without decoration (Figs 1 and 2). Preliminary trials by DC and MM showed that finding nests in the pictures taken at distances of 95 and 115 m (26 and 30 m AGL, respectively) during the approaching flights had a very low detection rate, and thus those images were removed from further analyses. Before performing the definitive trials, all images were revised and, if necessary, treated with Photoshop CS6 (Adobe, San Jose, CA, USA) to remove features potentially facilitating the detection of nest location (for example, a car in the nest proximity).
To estimate the detectability of black kite nests to trespassers, we conducted trials of nest detectability using 25 volunteers as "experimental conspecifics". The assumption behind such approach is that, if decoration makes a nest more visible to a human observer, it will make it even more visible to the keener vision of a raptorial bird [13]. To make the assessment by volunteers more rapid and standardized, we developed a Java program that projected the images on a laptop computer in a predetermined order (see below) and asked each observer to try to detect one nest per image and click on its presumed location. The program allowed 15 seconds to the observer for locating the nest, recorded the latency to locate it and assessed if the "clicked area" matched the real nest location. Each time an observer clicked on an image, the program switched to the next one, regardless of whether the volunteer succeeded or not in correctly detecting the nest and without feedback about their success in locating the nests. Also, if the observer did not click on the image within 15 seconds, the nest was recorded as non-detected and the program switched to the next photograph. Previous to the trials, we conducted a training session with 10 pictures that portrayed real kite nests and provided feedback to the participants, so as to accustom them to the general appearance of the target object of their search and standardize as much as possible the search image across participants. The images shown during the training session (15 seconds per image) were the same for all the participants and none of them were used in the subsequent trials. The observers had no previous knowledge of the aims of the study and none participated more than once in the experiment.
Each observer visualized 150 images of the 15 experimental nests, divided in two sets. In the first set, consisting of 60 images, the observer was exposed to: (i) a pair of zenithal images of each nest (with and without decoration) taken at 50 m AGL, and (ii) a pair of lateral images of each nest (with and without decoration) taken circularly around the nest at 15 m radius and 10 m AGL. In the second set of images, formed by a total of 90 photographs, the volunteers visualized three pairs of approaching snapshots per nest (for the 15 decorated and 15 non-decorated nests), ordered from farther to closer (75 m, 55 m and 35 m distance), thus simulating the progressive approach of a trespassing black kite that intends to inspect the content of the nest.
[Figure caption: Images taken by the hexacopter of black kite nests experimentally decorated by the researchers (left side) and of the same nest without decoration ("self-control nest", right side). The experimental decoration covered ca 80% of the nest platform, corresponding to a high level of decoration in real nests, typical of birds in prime age (see [10]). Black arrows indicate the position of the nests. The images have been cropped and enlarged for illustrative purposes.]
Cumulatively, each volunteer dedicated approximately 25 minutes to visualize all the images. The images projected and their relative order varied between trials as, for example, there was more than one lateral image available for the 10 m AGL snapshot. To minimize memory-biases (e.g. nest location and characteristics of the surrounding landscape), the pictures from the same nest (regardless of its decoration treatment) were separated from each other by at least four photographs. Similarly, in the case of the approaching flights, the series of approaching snapshots for the same nest (i.e. from the decorated and non-decorated treatment) were separated from each other by at least four series of photographs. Finally, we randomized the position of the nest within the images by cropping each photograph to the size of 4175 x 2774 pixels (a 15% reduction from the initial resolution of 4912 x 3264 pixels) in Photoshop CS6. In this way, we reduced the habitat similarity between images of the same nest. Based on pre-trials, we are confident that such procedures prevented any substantial effect of memory-biases, which would be homogeneous across treatments and observers anyway. All trials were conducted using the same screen (HP ZR22w, 22 inches, 1920 x 1080 resolution) positioned at a distance of 50 cm from the observer.
Statistical analyses
We used generalized linear mixed models (GLMM) to test whether (i) the probability of nest detection (0/1; binomial distribution, logit link function) and (ii) the latency to nest detection (time elapsed between initial image projection and nest detection, more details below; Gaussian distribution) were related to the decoration treatment (decorated vs non-decorated), the distance of the hexacopter to the nest and/or the interaction between nest decoration and distance. "Nest identity" and the "identity of the volunteers" were included in all the models as random factors. In the models on latency, we only included images in which the nests were detected by the participant in both treatments. Prior to running these models, latency to nest detection (max = 15 seconds) was log-transformed to satisfy the assumptions of normality. We built separate models to analyse zenithal, lateral and approaching snapshots. Additionally, in the case of the approaching snapshots, we ran models including only the first 'correct detection' of each nest (i.e. 2 images per nest, one per treatment) instead of the 3 consecutive images per treatment. For example, if Volunteer A detected nest X in the 75 m distance photo, then all subsequent detections of nest X by A (regardless of the decoration treatment) were not included in the analyses. Results remained unchanged (see Table A in S1 File) for both the probability of nest detection and latency to nest detection.
Selection of the minimum adequate models was done by sequentially dropping non-significant terms from fully saturated models (containing all main effects and interactions) in a hierarchical way, starting with the least significant terms. We performed likelihood ratio tests to confirm whether the inclusion of a predictor was significantly informative. In these tests, the full model, including the focal predictor, is compared with its restricted counterpart without the same predictor, and the significance of the predictor is obtained by a chi-square distribution.
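As a generic illustration of the likelihood-ratio comparison described above, the short Python sketch below refers twice the difference in log-likelihood between a full and a restricted model to a chi-square distribution. The log-likelihood values and the one-parameter difference are purely hypothetical and are not results from the GLMMs fitted in this study.

```python
# Minimal sketch of a likelihood-ratio test between nested models.
from scipy.stats import chi2

def likelihood_ratio_test(ll_full, ll_restricted, df_diff):
    """Compare a full model against its restricted counterpart."""
    lr_stat = 2.0 * (ll_full - ll_restricted)   # deviance difference
    p_value = chi2.sf(lr_stat, df=df_diff)      # chi-square tail probability
    return lr_stat, p_value

# Hypothetical log-likelihoods for dropping a single focal predictor
lr, p = likelihood_ratio_test(ll_full=-410.2, ll_restricted=-417.9, df_diff=1)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```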
We systematically performed model diagnostics statistics while modelling to avoid misleading conclusions based on statistical artifacts. To this end, we checked assumptions about the distribution of residuals through diagnostics plots and examined collinearity. These analyses did not show any obvious deviation from GLMM assumptions or any collinearity problems. Statistical analyses were implemented in R 3.1.2 (R Development Core Team 2015) with the package lmerTest [14].
Results
Overall, the group of decorated nests had a higher detection frequency and a lower latency to detection than the non-decorated nests (GLMM: number of detected nests: p < 0.001; Latency to detection: p = 0.027, Fig 3). The effect of nest dimensions and nest "opening angles" on the probability and time to detection was never significant (GLMM: all p > 0.18).
Once we experimentally decorated a nest, its detection probability increased significantly and its latency to detection declined (Tables 1, 2 and Fig 4). This held true for all sets of images, i.e. zenithal, lateral and approaching snapshots (Fig 4). In addition, for the set of approaching snapshots, the latency to nest detection depended on the interaction between treatment and distance. At shorter (35 m) and medium (55 m) distances, the latency to detection, although always lower in decorated nests, varied similarly in both decorated and non-decorated nests. With further increasing distance (75 m), however, the latency to detection showed a steeper increase in non-decorated nests (Fig 4).
Discussion
This is the first study that attempts to directly estimate nest detectability from a bird's perspective based on human vision using UAS technology. Using such tools, we experimentally show that the presence of ornamental white plastic in black kite nests increases their conspicuousness, even at large distances, compared to undecorated versions of the same nests. These findings, in combination with previous work on this system [10], support the hypothesis that the nest decoration behavior of black kites increases the potential nest visibility to aerial trespassers.
Although complex nest structures are common in nature, there are relatively few examples of the use of external objects to decorate breeding sites [7,15]. In most cases, ornamentation seems to act as an extended phenotype in mating contexts [7,15], as reported in some passerine species that place ornamental feathers on the nest exterior (e.g. [8,16,17]). In other cases, the use of (non-natural) elements has been suggested to have an antiparasitic function (e.g. cigarette butts in the nests of house sparrows, Passer domesticus, and house finches, Carpodacus mexicanus; [18]), or to provide thermal benefits (e.g. use of dung by white storks, Ciconia ciconia; [19]). However, evidence of birds using ornamental items as social signals is very scarce [10,11]. In Doñana, a large fraction of the black kite population decorates the nest with white materials (preferentially plastic, as shown experimentally by [10]), although the extent of decoration varies with age as only individuals in prime condition decorate their nests profusely. Furthermore, rates of conspecifics' trespassing and egg predation increased when decoration was experimentally augmented, suggesting that dishonest signalers may incur great fitness costs [10]. As we now demonstrate here, using UAS to simulate a bird's perspective when approaching a nest, the detectability of black kite nests increases with white decorations. Overall, these findings support the key assumption that such decorations increase nest conspicuousness from above and may thus serve as a reliable signal of threat against intruders, providing them with potential information on the social dominance, territory quality and fighting capabilities of the owners [10]. More crucially, the current results show that this information is available from far away, at distances from the nest that allow trespassers to gather knowledge before incurring the costs of a physical attack from the territory owners. Thus, signal design in this system seems consistent with the idea of a communication channel making a nest more visible from as far away as possible to an aerial intruder. Interestingly, neither nest dimensions (length and width) nor its exposure from above (opening angles) influenced the probability and time to detection by our human observers (used as proxies). Since photographs were taken at different angles and heights, a possibility is that the fraction of the nest visible in each image may have not captured adequately all the tridimensional characteristics of the nests. Alternatively, the limited availability of nest sites may have affected the detectability of nests. In this saturated high-density population, many pairs have only 1-2 trees as available substrates [12] and, once a tree is chosen, few positions within the tree may be available for adequate nest-location given the constraints of local micro-climate or physical feasibility of construction (e.g. [20]).
Whatever the interaction with tree configuration, our results underscore the use of white ornamentation as an efficient channel for transmission of information in space (i.e. towards the sky), since its effect may at least partially override the constraints imposed by available tree architecture (e.g. nests well concealed within the canopy had higher detection rates and lower latencies when decorated than when non-decorated). In fact, the effect of decoration was clearly noticeable to human eyes even in the pictures taken at the furthest distances analyzed, and probably from much farther to actually prospecting kites, given their more acute vision [13]. Furthermore, the addition of plastic is probably less energetically demanding and time consuming than construction of larger, more conspicuous nests. It is also less risky than placing (large) nests in the external, more visible, but also weaker and more unstable upper branches of trees. In this regard, summer temperatures in Doñana may reach beyond 50°C in areas exposed to direct sunlight, which is clearly above the thermo-neutral zone for eggs and nestlings [21]. Thus, by decorating with white materials, kites might place their nests in sites with lower sun exposure (i.e. well within canopies) while still improving visibility. Also, the fact that nest decoration is placed at the peak of territory-intrusions just before laying, and declines as the breeding season progresses and temperatures increase [10], seems to rule out thermoregulation as the main function of this behavior.
[Table 1. Nest detection probability by human observers in relation to decoration treatment (decorated and non-decorated) and distance to the nest. Three models were run to analyse independently the detectability of the nests from: zenithal snapshots (50 m AGL), lateral snapshots (15 m and 10 m AGL) and approaching snapshots (75, 55, 35 m at 22, 18, 14 m AGL), respectively.]
Potential limitations of the study
Although the use of human observers is a popular approach in evolutionary studies to measure crypticity or conspicuousness (e.g. [22][23][24]), we recognize that the use of humans to rate pictures is an oversimplification of field conditions. However, it offers significant advantages by permitting image manipulation and controlled conditions, in terms of time, distance, and perspective, to assess detection of both decorated and undecorated nests. Furthermore, some experiments on crypsis have reported similar detection rates between avian and human predators (e.g. [22,[24][25][26]). Humans and birds, including black kites, share a visually oriented sensory system and excellent color and detail discrimination. If anything, diurnal raptors possess higher visual acuity [27,28]. A main concern in using human observers as black kite surrogates is that humans, unlike birds, have no ultra-violet (UV) sensitivity [29,30]. In the context of the present study, the use of humans to assess nest visibility may have led to underestimating the detection capacity of actual kites due to the UV component of the white decorations. Thus, given such potential differences in visual acuity and UV-perception, our results are likely to represent a conservative estimate of the detection advantage offered by nest ornamentation. A second potential limitation of the study was that the experimental material used to decorate the nests in our experiments was a standardized, new plastic bag. In our experience, kites tend to place new, shiny materials in the nest when first decorating it. For example, many kites readily collected new pieces of white plastic when these were experimentally offered in a previous study [10]. However, these materials will unavoidably become dirtier and less bright with time through physical wear, exposure to the elements and contact with the nest contents. Thus, even if kites regularly refill their nests (F. Sergio, pers. obs.), our manipulation may have included brighter materials than experienced by an average intruder. In this sense, our measures may reflect the "ideal" decoration decision by a kite at the moment of placing the item, later altered by environmental constraints.
Finally, we imposed high levels of nest decoration during the flights, so as to ensure an overfocus on individuals that intend to signal and most benefit from it. Due to financial and logistical constraints, we were unable to examine the level of visibility-enhancement offered by intermediate to low levels of decoration, associated with individuals that may strike a balance between an attempt to threaten conspecifics and to pass sufficiently unnoticed to avoid excessive intruder pressure.
In contrast with the above limitations, note that, given the proverbially extreme latero-vertical maneuverability of kite flight, a real kite intruder would be much more adept at finding the distance, flight altitude and angle that maximizes the visibility of an observed nest than our simplified, pre-planned, straight trajectories. This likely made our assessment unavoidably highly conservative and could have amply counter-balanced our over-decoration of kite nests. Thus, if on one hand we could have over-decorated our nests, on the other hand, we used a simulated intruder with very limited capacity of opportunistic aerial relocation for nest visibility-maximization from a distance. Because of the above uncontrollable factors, difficult to incorporate in an assessment of this kind, this study should be best seen as a clear, unavoidable simplification of an ideal situation, focused on physical signal transmission and only considered in conjunction and through its congruence with previous analyses of the same communication system [10].
Conclusions
To our knowledge, this is the first study using UAS as a tool to simulate animals' perception.
Here we exploited the capability of UAS to act as an "eye in the sky" [31], but advances in the development of more types of miniaturized sensors may allow these systems to acquire further roles in ecology than simple aerial photography. Thus, it is now technologically possible to equip UAS with, among others, microphones, traps for microparticles and sensors of micrometeorological variables, which may allow even more refined assessments of animals' perceived and actual environments in the future [32].
The experimental simulation of nest inspection by a prospecting kite supported the idea of nest decoration by this species as a signal designed to enhance nest visibility and transmit information to flying territory trespassers. As a consequence, the latter could assess nest contents from a distance and within a shorter time span, thus minimizing the probability of an aggressive, physical reaction by the territory owners. Avoidance of physical fights for both the signaler and receiver may benefit both parties and contribute to the evolutionary maintenance of this communication system.
Supporting Information S1 File. Fig A. Example image of the hexacopter during the experimental flights around black kite nests. Table A. Nest detection probability (A) and latency to nest detection (B) by human observers in relation to decoration treatment (decorated and non-decorated) in the approaching snapshots. Aside of the results shown in Table 1 of the main text, the models below included only the first 'correct detection' of each nest (i.e. 2 images per nest, one per treatment) instead of the 3 consecutive images per treatment. For example, if the subject A detected nest X in the 75 m distance photo, then all subsequent detections of nest X by A (regardless the decoration treatment) were not included in the analyses. See method for further details. (DOCX) | 2018-04-03T00:45:02.255Z | 2016-07-25T00:00:00.000 | {
"year": 2016,
"sha1": "0e154afc4453f7d968159b85a2becc587530e872",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0157440&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c52c2fb3022201c4b3389df0db52fa3ef2260345",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
33036489 | pes2o/s2orc | v3-fos-license | Immersive Real-Acting Virtual Aquarium with Motion Tracking Sensors
We realized a real-space-based virtual aquarium equipped with a multiview function that provides images for users and audiences at the same time through motion tracking sensors. A virtual reality system needs natural and intuitive interfaces to enhance users' immersion. We attach markers to the user and to camera devices in a real space designed at one-to-one scale with the virtual space, so as to trace user and camera motions, which are reflected in real time to generate virtual-world images. These images are transmitted to the user's immersive image device. The system also allows audiences to share the experience: after filming the user in the real space with a camcorder to which motion tracking markers are attached, it provides them with composite virtual images from a third-person perspective that include the user. For this, the system provides marker-based motion tracking with sensors, recognition of the user's motions, real-time rendering of actual images, and a multiview function, realizing more intuitive and natural virtual-space interactions that can be used to construct the motion-based, realistic experience systems that increasingly attract interest.
Introduction
The computer graphics industry has provided images of amazing quality owing to the rapid development of graphics-exclusive hardware (dedicated graphics cards), especially since such hardware came into full use in the entertainment field. In particular, in the field of computer games, SEGA created a sensation with the release of the first 3D fighting action game, "Virtua Fighter," in 1993. It gained enormous popularity by offering free perspectives that made users feel actually engaged in the game and realistic hitting sensations, going beyond simple 2D images and scenes from a fixed viewpoint. Users are satisfied with their achievements in virtual spaces as their characters walk around freely to carry out their given missions, owing to the combination of virtual reality, a computer graphics technology, with games.
In addition, user interfaces for interaction in virtual reality have evolved. Early virtual reality-based games enabled interactions by assigning functions to combinations of the direction stick and buttons of a joystick, but natural immersion was difficult to expect as it was an interface hardly related to virtual reality. Though interfaces using hardware to realize car drive controlling environments including an accelerator, a brake pedal, and a driving wheel in virtual reality like a racing game were introduced, they were limited to particular virtual reality. Users expected more natural reality.
According to such a trend, virtual reality-based contents experienced simply with eyes and hands have evolved into interfaces with bodily interactions using various sensor technologies. A game, which adopted an interface that allows users to step on sensors installed on the bottom while dancing to music, caught enormous popularity rapidly, and then games using motion sensing sensors like cameras or magnetic sensors were out to the market one after another.
Nintendo gave the world the Wii, which adopted a remote-control-type interface called the Wii Remote. It allowed users to control virtual environments with motions captured from the remote control held in their hands, moving away from the simple joystick form. Microsoft's Kinect, different from existing interfaces, is equipped with a camera module that senses users' motions as a motion capture device, on the basis of which games run. However, as its sensors are in a fixed position, there are spatial limitations, and it falls short of realizing near-natural free motions.
Motion capture equipment began to be adopted to express precise and natural movements of characters in virtual reality. Motion capture equipment is classified into mechanical/gyro equipment and optical equipment. The former installs potentiometers on a performer's joints to extract the rotational motion values of the joints, while the latter attaches markers to a performer's joints and films them with 6∼8 cameras to analyze, track, and capture the 3D motions. Optical motion capture equipment is in turn classified into active-marker and passive-marker equipment. The former has luminous sensors in its markers, while the latter uses infrared-reflecting markers. Equipment prices have dropped greatly along with the development of the technology, and it has become easier to construct a system employing motion capture equipment.
We obtain information about the user and camera motions from marker-based optical motion tracking sensors. Based on that information, we provide images reflecting the user's interactions with the virtual space according to their eye movements and gestures. Also, in order to allow audiences to see the interactions between users and the virtual reality, we realized a real-space-based virtual reality system equipped with a multiview function that can provide images for users and audiences at the same time. The rest of the paper is organized as follows. Section 2 introduces virtual reality techniques and an experience system based on them. Section 3 describes the composition of the whole proposed system and the method used to realize it. Section 4 introduces the actual system as built. The study is concluded in Section 5.
Virtual Reality (VR).
Virtual reality (VR), meaning the technology that uses computers to provide a specific environment similar to the real one, stimulates users' five senses to provide them with spatial and temporal experiences similar to the real ones [1]. Users not only are immersed in virtual reality, but also can interact with virtual reality through various interfaces. VR can be classified into three types, that is, monitor-based VR, projection-based VR, and head-based VR [1]. The monitor-based VR, the simplest form VR system, uses common monitors. Monitors that can be purchased in market are economical and easy to use while providing relatively high resolutions.
However, it is less immersive than the other VR types. The projection-based VR provides large images that allow users to be more immersed by increasing the field of view. The increased field of view (FOV) can provide realistic experiences for users, but space for image projection is limited, because more ample space is needed if a user moves. In head-based VR, users wear equipment such as head-mounted displays. In contrast to monitor-based VR and projection-based VR, the screen is not fixed but moves in accordance with the user's line of sight. Head-based systems can provide highly immersive VR because they generate images in accordance with the user's sight. However, users are not greatly satisfied with them if they do not support real-time image generation based on detailed motion tracking for capturing information about the user's line of sight.
VR is used diversely in various fields of the entertainment industry including games, virtual museums, galleries, theatres, and theme parks, and, in particular, its utility is prominent in the edutainment industry, which combines education and entertainment. Customers who would visit real aquariums will have greater feelings when they are located by themselves underwater to see a great number of fish and water plants of tens of kinds that surround them and when they see scenes moving in accordance with their own actions. For this reason, spaces like aquariums and undersea vehicles are often realized in virtual spaces [2][3][4][5].
What is important in VR is to allow users to interact with virtual spaces and objects therein by controlling their avatars naturally and effectively. Interactions with virtual spaces are classified into three types: manipulation, navigation, and communication [1]. Navigation refers to users looking around virtual spaces, and most virtual space systems use navigation control panels, but some systems began to provide users' gestures. Takala et al. [6] proposed a virtual aquarium system adopting a gesture sensing system. Users take a swimming motion to move forward in the virtual space. Virtual space experiences with such real actions are simpler, more intuitive, and more natural [7].
Gesture Sensing.
A gesture is a kind of information about the expression of one's meaning and feeling by means of the movement of hands or body parts, and VR uses it as a method to interact with virtual spaces. Methods of analyzing and expressing motion information taken from motion tracking in gestures include directional feature vector, vector fields, and shape descriptor [8,9].
There are two problems, segmentation ambiguity and spatiotemporal variability, in sensing precise gestures from collected motion information [9]. The problem of segmentation ambiguity is the problem of precisely sensing the starting and ending points of a gesture within a continuous stream of human motion. In order to resolve this, some techniques add particular motions that mark the starting and ending points [10]. For example, when making a hand gesture, the user keeps the hands still at the starting and ending points for a certain period of time.
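A minimal sketch of this kind of dwell-based segmentation is given below. The speed threshold and dwell duration are illustrative values chosen for the example, not parameters reported by the authors.

```python
import numpy as np

def segment_gestures(positions, dt, speed_threshold=0.02, dwell_time=0.5):
    """Split a stream of 3D hand positions into candidate gesture segments.

    A gesture is assumed to start when the hand begins moving after a pause
    and to end at the next pause of at least `dwell_time` seconds.
    """
    positions = np.asarray(positions)                      # shape (N, 3), metres
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    still = speeds < speed_threshold                       # per-sample stillness flag
    min_dwell = int(dwell_time / dt)

    segments, start, run = [], None, 0
    for i, is_still in enumerate(still):
        run = run + 1 if is_still else 0
        if run >= min_dwell and start is not None:         # long pause ends a gesture
            segments.append((start, i))
            start = None
        elif not is_still and start is None:                # motion after a pause starts one
            start = i
    return segments
```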
Another problem is the spatiotemporal variability of gestures. If one is asked to make the same gesture repeatedly, the shape and duration of the gesture vary every time, even for the same person. Even in such a situation, gesture recognition should not be influenced by the shape, size, or direction of the input gesture if the gesture is to be sensed precisely [11].
The Proposed System
3.1. Operation of the System. We realized a virtual aquarium, a highly immersive virtual space system. The system we proposed has improved virtual space exploration, sharing of experiences with audiences, and natural interactions with the virtual space. The size of a real space and that of the virtual space are in one-to-one correspondence, and users' motions in experiencing the virtual aquarium are tracked by means of motion capture equipment.
Two cameras are installed in the actual studio, which is in one-to-one correspondence with the virtual aquarium. One is an immersive image device providing first-person scenes for the experiencing user, and the other is a camcorder for generating images from the audience's viewpoint. The experiencing user in the studio wears on the head the immersive image device, to which motion tracking markers are attached, to view virtual-space scenes, and wears bands with motion markers on the hands. By tracking the markers attached to the immersive image device, the camcorder, and the user's body parts, the motion capture equipment collects motion information on the movements and directions of the user and camcorder. The system sets up the user- and audience-viewpoint cameras that compose the virtual space based on this motion information, and then generates the images (Figure 1).
In other words, when the user makes motions while wearing the immersive image device, the system projects the motion information onto the virtual space in real time, so that the user experiences the immersive images as if moving in a real space. The system senses the user's gestures by tracking the markers attached to both hands (Figure 2). The immersive image device is equipped with a wireless image transmission device so that the user can move freely in the studio. Images are also provided for audiences who do not take part in the experience, so that they can experience the same virtual aquarium that the user does.
Gesture Tracking Sensors.
For the sake of interaction with the virtual reality, we used a motion sensing system to track the user's gestures. The user wore the immersive image device on the head and markers on the hands, from which we could extract 30 samples. The motions of the immersive image device were used to control the user's viewpoint in the virtual reality: the movements and sight rotations of the user were reflected in the system. The markers on the hands were used as interfaces for controlling virtual objects in the virtual reality space, and the motion information was expressed as a trajectory in 3D space so that it could be sensed as a gesture. First, in order to sense the gestures made by the user, reference gestures were generated by having the user perform predetermined gestures, which were then registered as references. The reference gestures were registered at three rates (slow, normal, and quick) so as to prevent a decline in recognition due to temporal variation in gesture speed.
For gesture sensing, the number of 3D samples taken successively from the tracking of the user's hands is compared with the length of the trajectory composing each reference gesture. When the number of samples corresponds to the length, the system examines whether the input corresponds to the relevant reference gesture. Here, Procrustes analysis is used to compare current input gestures with reference gestures. To compare them, we must clarify how similar the two gestures are, but it is difficult to compare them directly: their 3D positions and directions do not correspond at all, because the user does not face a single fixed direction but moves about freely in the gesture space. Procrustes analysis optimally overlaps the input gesture on the reference gesture by appropriately translating, rotating, and scaling the input. In this system, a total of 10 reference gestures have been defined for interactions with the fish in the virtual aquarium (Figure 3).
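The matching step described above can be sketched with the Procrustes routine available in SciPy, standing in for the system's own implementation. The gesture names, reference structure, and disparity threshold below are illustrative placeholders rather than values from the actual system.

```python
import numpy as np
from scipy.spatial import procrustes

def match_gesture(input_traj, reference_gestures, max_disparity=0.05):
    """Compare an input hand trajectory against registered reference gestures.

    `reference_gestures` maps a gesture name to its reference trajectories
    recorded at slow, normal, and quick rates (each an (N, 3) array).
    """
    input_traj = np.asarray(input_traj)            # shape (N, 3)
    best_name, best_disparity = None, np.inf
    for name, variants in reference_gestures.items():
        for ref in variants:
            ref = np.asarray(ref)
            if ref.shape != input_traj.shape:      # trajectory lengths must match first
                continue
            # Procrustes translates, rotates, and scales the input onto the
            # reference and returns a normalized residual (the disparity).
            _, _, disparity = procrustes(ref, input_traj)
            if disparity < best_disparity:
                best_name, best_disparity = name, disparity
    return best_name if best_disparity < max_disparity else None
```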
Multiviews for User and Audiences.
The system provides multiple view images in order to allow not only the user participating in the virtual reality but also audiences to share the experience of the virtual aquarium. For the user's images, the movement and rotation information taken from tracking the markers attached to the HMD worn on the user's head is transmitted to the virtual aquarium system in real time. The images are generated from this information and then returned to the user (Figure 4).
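As a rough illustration of how a tracked head pose can drive the user-view camera, the sketch below builds a view matrix from a marker position and orientation. The calibration offsets of the actual rig are omitted and the example pose is hypothetical.

```python
import numpy as np

def view_matrix_from_pose(position, rotation):
    """Build a 4x4 world-to-camera (view) matrix from a tracked HMD pose.

    `position` is the marker position in world coordinates and `rotation`
    is a 3x3 matrix giving the head orientation in world coordinates.
    """
    position = np.asarray(position, dtype=float)
    rotation = np.asarray(rotation, dtype=float)
    view = np.eye(4)
    view[:3, :3] = rotation.T                  # inverse of a pure rotation
    view[:3, 3] = -rotation.T @ position       # move the world opposite the head
    return view

# Hypothetical pose: user 2 m into the studio, eyes at 1.7 m, facing forward
view = view_matrix_from_pose([2.0, 1.7, 3.5], np.eye(3))
```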
Audiences who do not participate directly in the virtual aquarium system can see images of the virtual reality through the audience view. They do not simply see the same images the user receives, but are provided with a real-time composite of the virtual-space images and the actual footage of the user, in order to show how the user interacts with the virtual space. For the audience images, the movement and rotation information is taken from tracking the markers attached to the camcorder.
In order to apply the chroma-key technique for foreground extraction in real-time image composition, we painted the floor and walls of the studio green. Figure 5 shows the steps for generating the audience view. Figure 5(a) shows the filming of the user and Figure 5(b) shows the image taken by the camcorder. Figure 5(c) shows how images of the virtual reality are generated by reflecting the location information of the camcorder obtained from tracking the markers attached to it, and Figure 5(d) is the virtual image generated by reflecting that location information. Figure 5(e), the audience view, is finally generated by compositing the foreground extracted from the camcorder image with the virtual image.
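The chroma-key compositing step can be sketched with standard OpenCV operations, as below. The HSV bounds for the green studio are typical green-screen values that would need tuning to the actual lighting, and the function assumes the camcorder frame and the rendered virtual frame have the same resolution.

```python
import cv2
import numpy as np

def composite_audience_view(camcorder_frame, virtual_frame):
    """Key out the green studio and place the user over the virtual scene."""
    hsv = cv2.cvtColor(camcorder_frame, cv2.COLOR_BGR2HSV)
    lower_green = np.array([35, 60, 60])
    upper_green = np.array([85, 255, 255])
    background_mask = cv2.inRange(hsv, lower_green, upper_green)   # green pixels
    foreground_mask = cv2.bitwise_not(background_mask)             # the user

    foreground = cv2.bitwise_and(camcorder_frame, camcorder_frame,
                                 mask=foreground_mask)
    background = cv2.bitwise_and(virtual_frame, virtual_frame,
                                 mask=background_mask)
    return cv2.add(foreground, background)
```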
Studio Configuration with Motion Sensors.
In order to reflect user's actual motions, we set up a 7 m × 7 m space where motions can be tracked by motion tracking equipment and painted the floor and walls green for the sake of realtime image composition. Two computer servers for motion tracking and real-time image generation, respectively, are used for the virtual aquarium system. OptiTrack made by Natural Point was used as a motion sensor system for motion tracking. A total of 16 OptiTrack S250e cameras were connected through a network using two hubs and controlled with Tracking Tools produced by Natural Point Corporation [12]. The motion server analyzes images transmitted from the 16 sensor cameras and markers included in the relevant images to extract motion information. It changes relevant information into the information of the directions and positions of markers and transmits them to the render server through the network. The render server generates images of user and audience views based on the directions and positions of user and camcorder.
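A simplified sketch of how per-frame marker poses might be pushed from the motion server to the render server is shown below. The address, port, and message fields are illustrative and do not reflect the actual protocol used by the Tracking Tools setup.

```python
import json
import socket

RENDER_SERVER = ("192.168.0.20", 9000)   # hypothetical render-server address

def send_tracked_poses(sock, frame_id, bodies):
    """Send one frame of tracked rigid-body poses as a small JSON datagram."""
    packet = json.dumps({"frame": frame_id, "bodies": bodies}).encode("utf-8")
    sock.sendto(packet, RENDER_SERVER)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_tracked_poses(sock, 1024, {
    "hmd":       {"pos": [2.0, 1.7, 3.5], "quat": [0.0, 0.0, 0.0, 1.0]},
    "camcorder": {"pos": [5.5, 1.6, 0.8], "quat": [0.0, 0.38, 0.0, 0.92]},
})
```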
The user can walk about freely in the virtual aquarium space because the generated user view is transmitted to his HMD through a wireless HDMI transmission device. Also, as almost every HMD product supports stereo 3D images, the render server also provides more realistic images by generating stereo images in the side-by-side method. A Sony PMW-F3K camcorder is used to take actual 720 p images, and the taken images are transmitted to the capture board of the render server through BNC cables. Then, the foreground images extracted in real time are combined with the virtual reality images to generate audience view. Figure 7 shows some scenes as seen by the user in the virtual aquarium.
Generation of Scenes in the Virtual Aquarium.
Ogre3D, an open-source game engine, is used for real-time scene generation [13]. A total of 30 species of fish and 10 species of seaweed were produced, and normal and light maps were assigned to individual fish to enhance visual quality (Figure 6). Also, to give an undersea feeling, shader-based optical effects are applied to add god rays and wave effects, and, so as to show rich inner scenes of the aquarium, more than 300 objects are made to swim in the water. Because the objects composing the scene are made in one-to-one proportion with the actual space, the user can move to a desired position by walking, just as in the actual space. If, after seeing the scene in Figure 7(a), the user wants a more detailed look at the rock on the lower right side, the user can simply walk toward it. Then, as in Figure 7(b), the user can inspect the rock in detail from the position reached. The user can be more deeply immersed because the user's actual behavior is reflected in the virtual aquarium in this way without modification. Ten reference gestures were defined for interactions with the fish in the virtual aquarium. When the user makes a motion with the right or left hand corresponding to one of these gestures, the fish respond in accordance with that gesture (Figure 8). Each gesture lasts for a duration determined by the system, with a default value of 15 seconds.
Conclusion
We constructed an immersive virtual aquarium projecting motions in reality, which supports the following functions.
(i) Natural Real-Acting Navigation with Motion Sensors.
A user can move as if moving in a real aquarium space. The system uses the motion tracking sensors to trace the user's head and hand movements and projects the motion information onto the virtual space without modification.
(ii) Audience View Support. The system allows not only the user who participates in the virtual space but also audiences to experience the virtual space. This delivers the user's interaction with the objects in the virtual space to audiences in real time.
(iii) Interaction by Gestures. Rather than using special equipment for the user's interaction with the objects in the virtual aquarium, the system supports gesture sensing to help users experience the virtual space naturally. By using Procrustes analysis to improve the gesture recognition rate, the system is designed not to be influenced by the spatial positions, sizes, and directions of the motions users make.
The system, equipped with these merits, provides an experience in which users and audiences are more immersed and engaged. The current system uses motion sensor equipment to track the motions of the user and camcorder. However, while it performs excellently, this equipment is much more expensive than consumer motion sensors such as Kinect. In the future, we will construct a similar motion-sensor-based virtual reality system using multiple Kinect units. | 2018-04-03T02:21:03.430Z | 2014-05-01T00:00:00.000 | {
"year": 2014,
"sha1": "2c69ecfd2ea32ab8308e1f8d41a43f86513ebe98",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1155/2014/857189",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "31200af001e66b326c834f6aec9b2efd69821ab4",
"s2fieldsofstudy": [
"Computer Science",
"Art"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
211804908 | pes2o/s2orc | v3-fos-license | AN OVERVIEW OF ISLAMIC LAW AGAINST BUYING AND SELLING SAND-LAND WITH A LIBERATION SYSTEM
: Buying and selling is a means of obtaining goods easily: a buyer exchanges money for the goods he needs from the seller, at a value agreed upon by both parties. The law of buying and selling is basically permissible, meaning that every Muslim may earn a living by means of buying and selling. This study aims to describe and analyze the perspective of Islamic law on the buying and selling of sand-land with a slash system, through a case study of Mount Wurung viewed from Islamic law. The research used a qualitative method with descriptive analysis. According to the madzhab of Imam Hanafi, buying and selling that contains a little gharar is allowed, because the transaction has been estimated on the basis of experience close to the truth and it belongs to the type of sale called jizaf; whereas according to the madzhab of Imam Syafi'i it is not permitted, because there is potential gharar through which either party may suffer a loss or gain a profit.
I. Introduction
God created human beings to need and interact with one another, so that they can help each other and exchange in all matters aimed at the common good. In this way people's lives become orderly and prosperous, and relationships between people become solid. Buying and selling in fiqh terms is called al-bay', which means selling, replacing, and exchanging something for something else.
This verse teaches that rizki (sustenance) may not be obtained in a vain way that is contrary to Islamic law, and that buying and selling must be based on mutual willingness, without cheating, without lying, and without harming the public interest. Over time, various ways of buying and selling have developed in the community. One of them is buying and selling based on scales or measures that can be estimated and verified directly or indirectly by the buyer; there is also buying and selling by way of slash, which occurs on Mount Wurung, Mojokerto Regency.
Mount Wurung lies 20 km south of Mojokerto City, between the borders of the Bangsal, Dlanggu and Kutorejo sub-districts. This mountain has abundant sand, so many people and contractors take material from the mountain for the construction of houses or buildings. Farming communities who have orchards on the upper slopes of Mount Wurung can plant only one crop there, namely corn. Farmers cannot grow other types of crops because the land is unproductive, so the landowners want the land mined so that it becomes productive. However, the farmers cannot afford to mine the sand themselves because of licensing constraints and costs (RI Religion Departemen, 2010). Mining is very costly; therefore, farmers prefer to sell the land by slash to other people who mine the sand so that the land becomes productive.
In the Mojokerto area there are people who carry out buying and selling transactions by way of slash for the sand contained in the land. Prospective buyers estimate the sand content of the land to be mined. After a price agreement, both parties conclude a sale and purchase agreement. One case occurred in Kuto Porong Village, Bangsal District, Mojokerto Regency. The landowner was a man named Somo Joyo, who had a terraced plot of 1400 m2. He sold the sand content of the land to Supono for IDR 40,000,000.00. On July 4, 2011, Supono paid a down payment of Rp 5,000,000.00. On December 27, 2011, Supono paid an installment of Rp 7,000,000.00. On January 22, 2012, Supono paid an installment of Rp 8,000,000.00. Four months later, on April 13, 2012, Supono paid the remaining Rp 20,000,000.00 in full. The agreement was concluded at Kuto Porong Village, witnessed by community leaders and village officials.
In the agreement, the landowner set a depth limit for the buyer (penebas): mining was to go no deeper than the level of the river water. After mining, the landowner asked the penebas to level the mined land, and no time limit was set for completing the sand mining (Supono, 2014). The facts above show that, in the practice of buying sand-land on Gunung Wurung, Mojokerto Regency, it is possible that both the sellers and the buyers (penebas) do not know the sand content of the land. This means that in the sale and purchase of the sand there is potential gharar (lack of clarity on both sides), through which either party may suffer a loss or gain a profit. If this is the case, then the practice of buying and selling sand-land on Mount Wurung, Mojokerto Regency, may not be in harmony with the following hadith of the Prophet, which means: "Narrated by Abu Hurairah, he said: The Messenger of Allah forbade buying and selling by throwing pebbles and forbade buying and selling containing elements of fraud" (Narrated by Muslim) (al-Mundziri, 2012). This is what attracts the researchers' attention to analyze the perspective of Islamic law on buying and selling sand-land with a slash system, in a case study of Mount Wurung, Mojokerto Regency, so that what actually happens can be carefully described from the viewpoint of Islamic law.
II. Materials and Methods
The research method used in this study is qualitative-descriptive research. This method is used because the study aims to explain the perspective of Islamic law concerning the buying and selling of sand-land with a slash system, in a case study on Mount Wurung, Mojokerto Regency, through a literature study approach combined with direct interviews with the actors involved.
Creswell (2015: 471) explains that qualitative research is "interpretative" research, in which researchers make personal assessments in the form of descriptions appropriate to the situation or theme, raising the main categories of information. The interpretation a researcher makes of a transcript, for example, differs from the interpretation made by others. This does not mean that the researcher's interpretation is better or more accurate; it only means that the researcher brings his or her own perspective to the interpretation.
The first data collection technique is the interview, a conversation conducted with the aim of obtaining data between interviewer and interviewee (Moleong). This technique is used to collect data about the seller's knowledge of the level and quality of the sand contained in the plot of land he owns, data about the buyer's knowledge of the level and quality of the sand contained in the plot of land he bought, and data about the duration of the sand mining. The second technique is documentation study, a way of collecting data through written records, mainly archives, as well as books containing opinions and arguments related to the research problems. In this study, it was used to collect data on the receipts for the sale and purchase of land for excavation.
III. Results
From the results of the field studies, most landowners sell their land to buyers to be excavated. The data can be described through three cases:
1. The case between Somo Joyo and Supono. Somo Joyo, the landowner, did not know how to estimate the level and quality of the sand in his plot; he could not know the sand content of the land because the transaction was a slash sale of the sand contained in the land. He sold the sand content of the land to Supono for Rp 40,000,000.00. Supono is a contractor who trades in building materials. When purchasing Somo Joyo's land, he was able to estimate the volume of sand in it, on which basis the price was determined, drawing on 15 years of experience in buying and selling with a slash system, that is, buying and selling with a free system. On July 4, 2011, Supono paid an advance of Rp 5,000,000.00 and paid the remainder in installments: on December 27, 2011, he paid Rp 7,000,000.00, and on January 22, 2012, he paid Rp 8,000,000.00. Four months later, on April 13, 2012, Supono paid the final Rp 20,000,000.00 in full. In executing the sale, Supono estimated that around 1,200 trucks would be needed to haul the sand mined from the one plot of garden land he bought from Somo Joyo. Each truck carries a sand load of 7 to 8 tons, and each truckload of sand sells for Rp 550,000.00, whereas previously it cost only Rp 350,000.00. The sale and purchase between Somo Joyo and Supono took place at the Kuto Porong village office. Payments were recorded with receipts given to both parties as authentic evidence, and the sale and purchase agreement involved village officials as witnesses.
2. The case between Giman and Supono. Giman is a farmer who has worked in that profession since 1985. He has three blocks of land in the Gunung Wurung area of Kuto Porong village, which he previously worked to grow crops. In 2007 he sold sand-land to someone with a slash (rental) system: he sold one plot at a price of Rp 27,000,000.00, and in that sale the settlement of the mining lasted three years. Prices have increased over the years. Giman resumed selling his land, this time to a buyer named Supono, on November 5, 2013. He sold two plots measuring 2830 m2 to Supono at a price of Rp 95,000,000.00, again using a slash (rental) system. Before making the transaction, they negotiated the selling price of the land (Giman, 2015). In implementing the sale, Supono estimated that around 2,400 trucks would be needed to haul the sand mined from the two plots of sand-land he bought from Giman. The sale and purchase between Giman and Supono took place at the village office. Payments were made in cash, with receipts given to both parties as authentic evidence, and the sale and purchase agreement involved village officials as witnesses.
3. The case between Tisan and Supono. Tisan is a farmer who also runs a shop at his house in the Sidoarjo area. He has one and a half plots of Land measuring 2,115 m² on Mount Wurung, owned jointly with his brothers (three siblings in all), Tisan being the oldest. The plots were not made into plantations because the Land was very barren; they were simply left until alang-alang grass covered the surface of the Land.
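As a rough arithmetic check of the figures reported in the first case (an illustration added here, not part of the original field data), the installments can be summed against the agreed price, and the buyer's gross revenue can be bounded from his own truckload estimate; the short Python sketch below uses only the numbers quoted above.

# Illustrative check of the Case 1 figures quoted above (all values in rupiah).
price = 40_000_000                                   # agreed sale price
installments = [5_000_000, 7_000_000, 8_000_000, 20_000_000]
assert sum(installments) == price                    # the recorded payments add up to the price

trucks = 1_200                                       # buyer's estimate of truckloads in the plot
price_per_load = 550_000                             # current selling price per truckload
print(trucks * price_per_load)                       # ~Rp 660,000,000 gross, against Rp 40,000,000 paid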
Definition of Buying and Selling
Human civilization is continually progressing, so that today people can exchange goods for money and even exchange commercial paper for money or for other commercial paper, transactions usually managed by trade banks and the like, so that exchange flows ever more smoothly. Since its inception, Islam has regulated trade, called al-bay' wa asy-syira', which means buying and selling (Asqalany, 1992). The laws concerning muamalah have been explained by Allah in the Qur'an and by the Messenger of Allah in the holy as-Sunnah (Saleh, 2006). The word buying and selling comes from al-bay'u, which means sale, the opposite of istara, which means to exchange or to buy (Munawir, 1997). The two words then became a parallel pair whose basic meanings are opposite. Classical books use the word al-bay'u to denote buying and selling (Sabiq, 1983). Buying and selling can be defined as the transfer of ownership of an item from one party (the seller) to another party (the buyer) in return for another item or for money. In other words, buying and selling is ijab and qabul, the process of handing over and accepting in transactions of goods or services. Islam requires mutual consent between the two transacting parties. The narrations of Ibn Hibban and Ibn Majah explain this, meaning: "Indeed, buying and selling must be based on mutual consent." Because willingness is a hidden matter, the legal validity of a sale and purchase is judged from visible indications (dhahir) that show consent, such as handing over and accepting (Muhammad, 2014). In Islamic law, buying and selling is the exchange of certain assets for other assets on the basis of mutual pleasure, or, in other terms, the transfer of ownership rights in exchange for other property rights on the basis of an agreed consideration (Sabiq, 2006). The words "property and ownership" are emphasized here because there are also exchanges of property that need not be owned, such as leases (ijarah). According to the jumhur ulama, al-mal comprises both material things and benefits; therefore, in their view, the benefit of an object can be sold. According to the Hanafiyah ulama, al-mal means material that has value. Abu Sura'i Abdul Hadi, in his book "Bunga Bank Dalam Islam", suggests that buying and selling is basically halal: it is a form of transaction that is justified as long as it runs on correct principles according to the conditions set by religion (Abdul, 1993). From the definitions above it can be concluded that buying and selling is an event in which a seller hands over his goods to another person (the buyer) after an agreement between them regarding the goods and the price; the goods are then received by the buyer in exchange for consideration given to the seller permanently, all of this carried out on the basis of mutual liking and willingness. Thus a sale and purchase involves two parties: the buyer hands over money in exchange for the goods received, and the seller hands over the goods to the buyer in exchange for the money received.
Legal Basis of Buying and Selling
The ulama adopted the rule that all forms of buying and selling are in principle lawful, except those prohibited by Allah and His Messenger, namely every sale and purchase transaction that does not meet its legal requirements or that contains a prohibited element (Muhammad, 2014). Buying and selling is an action or transaction whose legal status is clearly stated in Islam and which relates to taklifi law. Its ruling is permissibility, and this can be found in the Qur'an, the Hadith of the Prophet, and the views of the Islamic scholars.
Surah al-Baqarah verse 282, which means: ". . . and take witnesses when you conclude a contract. Let no scribe be harmed or any witness. For if you do so, indeed, it is (grave) disobedience in you. And fear Allah. And Allah teaches you. And Allah is Knowing of all things".
Surah an-Nisa' verse 29, which means: "O you who have believed, do not consume one another's wealth unjustly but only (in lawful) business by mutual consent. . ." As-Sunnah, the hadith from Rifa'ah Ibn Rafi', which means: "Rifa'ah bin Rafi' recounts that the Prophet SAW was once asked, 'What is the best livelihood?' He replied: 'A person's work with his own hands, and every halal sale and purchase'" (Narrated by Bazzar and Al-Hakim).
The hadith narrated by Al-Baihaqi, Ibnu Majjah, and Ibnu Hibban means: "Reported to us by Abbas Ibn Al-Walidi Dimasyqiy, Marwan Ibn Muhammad, Abdul Aziz Ibn Muhammad from Dawud Ibn Salih al-Madini, from his father Said: I heard Abu Said al-Khudri say that the Prophet said that buying and selling is indeed based on mutual consent."
Based on the verses and hadiths mentioned above, it can be understood that buying and selling is a lawful and noble occupation. If its practitioner is honest, his position in the hereafter will be equivalent to that of the Prophets, the Martyrs, and the Siddiqin. The scholars and all Muslims agree on the permissibility of buying and selling, because it is greatly needed by humans in general: in everyday life not everyone has what they need, and by way of buying and selling people help each other to fulfil their needs. Thus the wheels of economic life will turn positively, because what the parties do benefits both of them and is rewarded (Muslich, 2010). The verses and hadith above also convey that property belongs to all humans together and that God divides it among them justly according to His wisdom and through legal and ethical stipulations, so that efforts to obtain and use it do not cause disputes and destruction; they also convey that rights and truth must be upheld between the parties, so that the whole must not be drawn to the first party alone, leaving nothing for the second party. Profit and loss must in principle be shared or borne together (Sunan Ampel, 2005). Based on the scholars' ijma', buying and selling is permissible and has been practiced since the time of Rasulallah SAW up to the present (Sabiq, 2006). The ulama have agreed that buying and selling is allowed on the grounds that humans cannot meet their needs without the help of others; however, the assistance or goods of others that they need must be exchanged for other appropriate items. From the content of the verses of Allah and the sayings of the Prophet above, the fiqh scholars say that the original ruling of buying and selling is mubah (permissible). However, in certain situations, according to Imam Ash-Syatibi, the Maliki fiqh expert, the ruling may become obligatory. Imam Ash-Syatibi gave the example of the practice of ihtikar (hoarding goods so that stock disappears from the market and prices soar). If someone practices ihtikar and causes the price of the hoarded goods to rise, then, according to him, the government may force the trader to sell the goods at the price prevailing before the surge (Haroen, 2000). The rules of buying and selling in Islam are stipulated as follows: 1. A sale and purchase is justified provided it does not take the form of usury. 2. In buying and selling there must be ijab and qabul, spoken aloud and accepted in the heart of each party. 3. It is prohibited to trade in blood, carcasses, stolen goods, endowments, public property, liquor, pigs, goods that have no value, and items that have no owner.
The principles that must be considered in buying and selling are as follows: 1. The principle of freedom of transaction; this principle is based on the reasoning of the jurists, who argue that the original ruling of buying and selling is permissibility, and it is strengthened by hadith. 5. The principle of justice, in accordance with the Qur'an, surah ar-Rahman verse 9, which means: "And establish weight in justice and do not make deficient the balance".
Position and Function of the Contract of Sale and Purchase
The contract is the most important instrument in determining whether a transaction is legitimate or not, and it embodies the ultimate purpose of muamalah. In Arabic, the word for contract comes from 'aqada-ya'qudu-'aqdan, plural 'uqud, which means bond (Mahmud, 1990). A contract is an agreement between ijab and qabul, made in a way justified by syara', which establishes legal consequences for its object. Ijab is the first party's statement of the contents of the desired agreement, while qabul is the second party's statement of acceptance (Ashar). In ijab and qabul there is no obligation to use particular words, because what matters legally in the contract is its purpose and meaning, not the words or the form of the words themselves. What is required in ijab and qabul is mutual willingness (ridha) between the two parties, realized in the form of giving and taking or by other means that express consent and are based on the meanings of ownership and transfer of ownership. From this understanding, the contract takes place between two parties who are both willing, and it gives rise to reciprocal obligations on each of them. A contract also binds anyone who enters into it with firm intention and desire in the heart. In order for the contract to be strong, it should be put in writing and witnessed, especially for long-term contracts and debt contracts, so that the rights of each party are guaranteed and mistakes are avoided, as explained in the word of Allah SWT, which means: "O you who believe, when you contract a debt for a specified term, write it down." Thus, the position of the contract is as a condition for the validity of the sale and purchase, while its function is to transfer property from one party (the seller) to the other party (the buyer). In a sale and purchase agreement, if the contract is cancelled after handover has taken place, each party must return what has been received: the buyer returns the goods and the seller returns the assets (money); but if return is no longer possible, for example because the money has been spent or the goods have been damaged, then cancellation of the sale and purchase contract cannot be carried out (Hasbi).
Types and Forms of Buying and Selling
Buying and selling can be reviewed from several points of view, both in terms of its legal status and in terms of its object. A sale is a form of contract containing elements that constitute its pillars, and it is these elements that determine the form of the sale and purchase. There are accordingly several kinds of sale and purchase, viewed from various aspects. In terms of whether or not they are legally valid, there are three forms: a. The sahih sale. A sale and purchase is said to be sahih when it is prescribed, fulfils the prescribed pillars and conditions, does not concern the property of others, and does not depend on a right of khiyar. Such a sale is called a sahih sale.
b. The void (batil) sale. A sale is said to be void if one or all of its pillars are not met, or if the sale by its basis and nature is not prescribed, such as a sale concluded by children or by insane persons, or a sale of prohibited goods such as carcasses, blood, pork, and khamr.
c. The fasid sale. The Hanafiyah ulama distinguish the fasid sale from the void sale. If the defect in the sale and purchase relates to the goods sold, the sale is void, as in trading prohibited items (khamr, pigs, and blood); if the defect relates to the price of the goods and can be remedied, the sale is called fasid (Haroen, 2000). Viewed in terms of the object of the sale and purchase, Imam Taqiyuddin suggests that sale is divided into three forms: 1) the sale of things that are visible, 2) the sale of things whose properties are specified in the agreement, and 3) the sale of things that do not exist. The sale of a visible item is one in which the goods sold are present before the seller and the buyer at the time of the transaction, such as buying rice in the market.
The sale and purchase of goods whose properties are specified in the agreement is the salam (order) purchase. The sale and purchase of objects that do not exist and cannot be seen is prohibited by Islam, because the goods are uncertain or still obscure, so that it is feared they may come from stolen goods or be goods whose sale could cause loss to one party. Viewed in terms of the actor (subject), sale is divided into three kinds: by speech, by intermediary, and by act. The sale and purchase contract by speech is the one carried out by most people; for a mute person, speech is replaced by a sign, because a sign naturally conveys the will. What matters in the contract is the intention or will and the meaning, not the spoken words and statements themselves.
Sale and purchase by messenger, intermediary, writing, or correspondence has the same standing as ijab and qabul by speech, for example transactions made by post and giro. Such a sale takes place between a seller and a buyer who do not meet face to face in one contract assembly but deal by post and giro, and this kind of sale is permissible according to syara'. In the understanding of some scholars, this form is almost identical to the salam purchase, except that in salam the seller and buyer are together, face to face, in one contract assembly, whereas in a purchase by post and giro the seller and buyer are not in one contract assembly.
Sale and purchase by act (mutual giving), known by the term mu'athah, is the taking and handing over of goods without spoken consent, as when someone takes a cigarette whose price tag has already been attached by the seller and then hands the payment to the seller. Such a sale, concluded without the spoken consent of the seller and the buyer, is forbidden according to some of the Syafi'iyah, because ijab and qabul are part of the pillars of sale; but other Syafi'iyah scholars, such as Imam Nawawi, allow it for the sale of daily necessities in that way, the act itself standing in place of ijab and qabul.
Imam Hanafi, meanwhile, divided buying and selling according to syariat law into three categories (Zuhaili, 2011): a. The valid sale: a sale that is prescribed both in its essence and in its attributes and does not involve the rights of others. Its legal effect is immediate, that is, ownership of the goods and of the price changes hands: the goods become the property of the buyer, and the price becomes the property of the seller, upon ijab and qabul. b. The void sale: a sale whose pillars and object are not fulfilled and which is not sanctioned either in its essence or in its attributes; in other words, the actor or the object of the transaction is not considered lawful for transacting. Its legal status is that religion does not consider it to have occurred, and it creates no ownership rights.
The types of such invalid sales include: 1. Selling something that does not exist (bay' al-ma'dum). Bay' ma'dum, the sale and purchase of goods that do not exist, contains an element of uncertainty and is therefore void. The fiqh scholars agree in declaring such a sale invalid, for example buying fruit whose buds have not yet appeared on the tree, or a calf that has not yet been born, even though it already exists in its mother's womb.
2. The sale of goods that cannot be handed over to the buyer (bay' ma'juz at-taslim). The four madzhabs agree that a sale of goods that cannot be delivered is not valid, even when the thing is the seller's own property, such as selling a bird that has flown away from its owner. Even if the thing might be brought back after the contract, the sale is still not considered valid, because it contains an element of falsehood. The contract may also be void when the price (consideration) has been provided but the merchandise can never actually become the buyer's property. The evidence is that the Prophet SAW prohibited the hashah sale (a sale in which the buyer uses the throw of a pebble to determine the purchase) and the gharar sale (the sale and purchase of goods whose existence and nature are unknown), both of which involve uncertainty. From Abi Sa'id al-Khudri ra: the Messenger of Allah (PBUH) prohibited the purchase of runaway slaves, the purchase of cattle still in the womb until birth, the sale of their milk still in the udder, and the sale of goats until they are divided. The Hanafiah scholars argued that even if immediate delivery in the assembly (the place of the contract) were possible, such a sale is still not allowed, because it contains an element of falsehood.
3. The sale of debt (bay' ad-dain). A debt may serve as the consideration (price) of goods sold, as in loans and dowries, as compensation for costs or gains obtained, in penalties and fines, and in khulu', and this is not disputed. The sale and purchase of debt may take place either at the time of the contract (for immediate settlement) or nasi'ah (deferred). The sale of debt nasi'ah, that is of credit for credit or of debt for debt, in which settlement of both sides is deferred, is forbidden and void according to ijma', because it contains an element of riba, as in the hadith reported by Daruqutni from Ibn 'Umar: "The Messenger of Allah (may peace be upon him) prohibited the sale of credit for credit." This ruling shows the invalidity of the prohibited sale, whether the transaction is made with the debtor himself or with someone else. Similar in nature are selling lost items or selling a bird flying in the air: the entire body of fiqh scholars agrees that such sales fall into the category of al-gharar (deceptive sale), on the basis of the hadith of Ahmad ibn Hambal, Muslim, Abu Daud, and at-Tirmizi: "Do not buy fish in the water, because this kind of sale is a deceptive sale." 4. A sale that contains a deceptive element, which outwardly appears good but turns out to contain elements of trickery, as in the words of the Prophet Muhammad SAW about trading fish in the water.
Sale and Purchase of Najis and Mutanajis Goods
5. The scholars agree that no valid sale and purchase contract exists for khamr, pigs, carcasses, and blood, because these are not lawful property. The Messenger of Allah (SAW) said: "Verily, Allah and His Messenger have forbidden the sale of khamr, carcasses, pork, and idols." The jumhur ulama (other than the Hanafiah) also include dogs in this prohibition, based on the hadith of Abu Mas'ud Al-Anshory that the Prophet SAW prohibited taking the price of a dog (making it a tsaman). The jumhur likewise prohibit the sale and purchase of unclean goods that cannot be purified, such as oil, honey, and samin into which excrement has fallen; goods that can be purified, like cloth, may be sold. According to the jumhur it is also not permissible to sell goods that are unclean in origin, such as herbivore manure (animal waste), although the Malikiyah allow it for use as fertilizer, while according to the Syafi'iyah and Hanabilah excrement and the bones and skin of carcasses may not be sold. The Malikiyah allow the sale of the manure of cattle, goats, camels, and the like for the purpose of cultivating Land or for other beneficial uses.
6. The sale with advance payment (bay' al-'urbun / down payment). This is where someone buys something and hands over to the seller part of the price, in dirhams or the like, on the understanding that if the sale and purchase proceeds, the down payment is counted as part of the overall price, whereas if it does not proceed, the advance remains with the seller; in other words, if the transaction continues, the down payment is part of the price of the goods, and if it does not, the advance becomes a gift from the buyer to the seller. On the ruling of sale and purchase with advance payment (bay' al-'urbun) there are two conflicting groups: those who declare it unlawful and those who declare it lawful.
The jumhur ulama argue that this system of sale is prohibited and invalid: the Hanafiah ulama place it in the fasid category, whereas the Syafi'iyah and Malikiyah regard it as void, based on the hadith of Rasulullah SAW: "The Messenger of Allah (PBUH) prohibits the 'urbun sale (the advance payment system)." A sale of this kind is also a gharar sale, which is forbidden and amounts to consuming other people's property wrongfully; in addition, this form of sale contains two conditions, the condition of the gift (forfeiting the advance payment) and the condition of returning the merchandise, with the prospect that one of the parties will be disadvantaged. The Hanabilah scholars, however, hold the sale and purchase with an advance to be permissible (mubah), based on the narration of Abd al-Razaq in his Mushannaf from the hadith of Zaid bin Aslam that the Prophet (peace be upon him) was asked about the down payment and allowed it, and also on the report that Nafi bin al-Harits bought a prison building for Umar from Shafwan bin Umayyah for four thousand dirhams, with the provision that if Umar liked the goods the sale stood for him, and if Umar did not like them, the four thousand dirhams were for Shafwan. 7. The sale of water (bay' al-maa'). 8. Water is sometimes mubah and sometimes ghoiru mubah (owned). Mubah water is water that belongs to all humans and from which all benefit, such as sea water and common rivers; the Prophet said: "Muslims share in three things: water, grass, and fire." 9. Ghoiru mubah, or owned, water is water under special ownership, individual or collective, such as water reserved for the residents of a particular village or water kept in vessels (packaged). 10. The ruling on selling such owned water is that it is permissible, except in a state of emergency (danger): for someone whose thirst could cause death it is obligatory to give water, and whoever withholds it from him is as if he had killed him. 11. The jumhur allow the sale and purchase of water that has been appropriated, such as well water, spring water, and packaged water, by analogy with the firewood that Rasulullah SAW allowed to be gathered and sold.
12. The Dhohiriyah madzhab does not allow the sale and purchase of water at all, because the Prophet SAW prohibited the sale and purchase of water.
13. The prohibition on selling water also applies in special circumstances, for example when the water of a well is needed to sustain the grass in its vicinity, because shepherds will need that water for their pasture.
Analysis of Islamic Law on the Buying and Selling of Sand-Land with the Slash System in Mount Wurung, Mojokerto Regency
From the implementation of the buying and selling of Sand-Land with the slash system in Mount Wurung, Mojokerto Regency, three cases have been presented as described above. The analysis of Islamic law in this chapter is made for each case in turn, namely:
Analysis of Islamic Law on the First Case
The first case is the implementation of buying and selling Sand-Land with the slash system between Somo Joyo and Supono. Somo Joyo has a plot of Sand-Land measuring 1,400 m². Before he sold it, he could not estimate the level and quality of the sand in his plot, because in a slash sale the sand content of the Sand-Land is not known; he then sold the Sand-Land to Supono for Rp 40,000,000.00. Supono is a contractor in the building materials trade. When purchasing Somo Joyo's plot (Land), he was able to estimate the sand content in the Sand-Land, and from that estimate the price in the sale and purchase was set, based on his 15 years of experience in trading with the slash system as well as with the free system. Viewed in this light, the problem is that the farmer (seller) does not know the content and quality of the sand contained in his own Sand-Land; only the area of the Sand-Land becomes the measure in determining the price. So in the first case there is gharar, because the seller does not know the content and quality of his Sand-Land and cannot estimate the price on the basis of the level and quality of the sand, whereas on the buyer's side there is no gharar, because he knows and can predict the sand content of the plot he is buying, and it is from this that the price in the sale and purchase is determined, based on 15 years of experience in trading with the slash system and the free system.
In the first sale and purchase case, one of the pillars of buying and selling is not adequately fulfilled, namely the object or goods purchased (the level and quality of the sand in the Sand-Land), whose quality and quantity, which affect the price, cannot be clearly seen: from his own knowledge, the owner of the plot cannot assess the sand content in the Sand-Land that he sells. The other pillars have been fulfilled, namely the presence of the contracting parties (seller and buyer), the existence of sighat, and the exchange value given in place of the goods.
Analysis of Islamic Law on the Second Case
The second case was the implementation of buying and selling Sand-Land with the slash system between Giman and Supono. Giman had three plots of Land in the Gunung Wurung area of Kuto Porong Village which he had previously worked for farming. In 2007 he sold one plot of Land to someone with the slash (rental) system at a price of Rp 27,000,000.00; in that sale and purchase of Sand-Land the mining settlement lasted 3 years. In 2013 Giman again sold his Land, this time to Supono, on November 5, 2013: two plots measuring 2,830 m² at a price of Rp 95,000,000.00, using the slash (rental) system (Giman, 2015). From this description it is clear that the farmer (seller) knows the level and quality of the sand and, together with the area of the Sand-Land, can determine its selling price. In the second case, therefore, there is no gharar, because the seller knows the level and quality of the sand in his plot and can estimate the price based on the level and quality of the sand and the area of his plot, so the buying and selling is in accordance with Islamic law. As for the pillars of buying and selling, the object or item purchased (the level and quality of the sand in the Land) likewise cannot be seen directly in the quality and quantity that affect the price; however, from their own knowledge the Land owners can estimate the sand content in the plot being sold, while the other pillars have been fulfilled, namely the presence of the contracting parties (seller and buyer), the existence of sighat, and the exchange value given in place of the goods.
Analysis of Islamic Law on the Third Case
The third case is the implementation of buying and selling Sand-Land with the slash system between Tisan and Supono. Tisan had one and a half plots of Land on Mount Wurung, owned jointly with his brothers (three siblings). He sold the Land with the slash (rental) system. When he sold the Land to Supono, there was a negotiation between him and Supono in determining the price, and they agreed to sell the Sand-Land to Supono at a price of Rp 80,000,000.00 in cash. From this account, the farmer (seller) could estimate the sand content and the type of material contained in his Land, based on his knowledge of the neighbouring Sand-Land that had already been mined.
In the third case, then, there is no gharar, because both parties, seller and buyer, can estimate the level and quality of the Sand-Land in the purchase, so the buying and selling is in accordance with Islamic law. As for the pillars of buying and selling, the object or item purchased (the level and quality of the sand in the Land) again cannot be seen directly in the quality and quantity that affect the price; however, from their own knowledge the Land owners can estimate the sand content in the plot being sold, while the other pillars have been fulfilled, namely the presence of the contracting parties (seller and buyer), the existence of sighat, and the exchange value given in place of the goods.
The first, second, and third cases all meet the general conditions of buying and selling, including the conditions for the contracting parties (having the capacity to choose, not insane, baligh, and not drunk) and the conditions for the object of the contract (pure, beneficial, belonging to the party carrying out the contract, capable of being delivered by the contracting party, of known status, quality, quantity, and type, and deliverable). Based on the explanation of the cases above, the first sale and purchase is gharar, because in the sale of Sand-Land with the slash system the seller (farmer) does not know the level and quality of the sand when determining the price and sets the price only from the area of the Land he sells; it is therefore not in accordance with Islamic law. In the second and third cases, the sellers and buyers conducting the sale of Sand-Land with the slash system comply with Islamic law, because both seller and buyer can determine the level and quality of the sand in the plots sold and bought, and can estimate the price based on the area, content, and quality of the Sand-Land; such a sale and purchase is therefore in accordance with Islamic law. The madzhab of Imam Hanafi allows a sale and purchase containing a little gharar, because the sale has been estimated on the basis of experience that comes close to the truth, and such buying and selling is called bay' jizaf (sale by estimation); whereas according to Imam Syafi'i it is not permitted, because of the potential gharar that could cause either party to suffer a loss or a gain.
V. Conclusions
Based on the discussion of the practice of buying and selling sand-bearing garden Land by slash at Mount Wurung, Mojokerto Regency, together with the analysis of Islamic law presented in the previous chapters, this journal concludes the following. The reason most people sell their sand gardens with the slash system is that the farmers cannot afford to mine the sand themselves, being constrained by licensing and by cost: mining is very expensive, so farmers prefer to sell the Land by slash to other people to mine the sand so that it becomes productive Land; the Land owners want the Land to become productive so that it can be used for plantations or animal husbandry. There are two types of systems used in buying and selling Sand-Land, namely free buying and selling and buying and selling by slash (rent). In buying and selling with this slash system, payment is made by down payment, installments, and cash given by the buyer to the seller (the Land owner); proof of payment takes the form of receipts held by both parties, the Land owner and the slasher, so that each knows the payment has taken place. The practice of buying and selling Sand-Land with the slash system takes place in the following way.
a. The sand sold by the owner is in a plot of Land of ± 1,400 m².
b. The buyer digs the sand in the Land within 2-3 years per plot. c. After the excavation is completed, the buyer levels the Land again and hands it back to the owner.
It was thus revealed how the buying and selling of Sand-Land with the slash system in Mount Wurung of Mojokerto Regency is carried out. Two of the three Sand-Land slash cases studied are in accordance with Islamic law, because both sellers and buyers were able to assess beforehand the level and quality of the sand contained in the Sand-Land, while one case is not in harmony with Islamic law. The madzhab of Imam Hanafi allows buying and selling that contains a little gharar, because the sale has been estimated on the basis of experience that comes close to the truth, and such buying and selling is called bay' jizaf (sale by estimation), whereas according to the madzhab of Imam Syafi'i it is not allowed, because of the potential gharar that could cause either party to suffer a loss or a gain. | 2020-03-04T03:02:37.206Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "9c8a8d9c5e68529a6158f9c327b53621aa8315f7",
"oa_license": "CCBY",
"oa_url": "https://e-journal.unair.ac.id/AIJIEF/article/download/15469/8390",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "939fa68032bfbbbc6baabb202c27bc7d0a8b8d8a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Business"
]
} |
237249964 | pes2o/s2orc | v3-fos-license | Researching of properties of expanded clay concrete on cement-ash binder. Investigación de las propiedades del hormigón de arcilla expandida sobre aglutinante cemento- ceniza
Ukraine is a state with a powerful industrial base, one waste product of which is ash. The use of ash as a binder for the production of expanded clay concrete solves the problem of its disposal. Studies of expanded clay concrete on carbonate sand and a cement-ash binder were carried out. The research results showed that it is possible to manufacture building structures, including load-bearing ones, from this material. Experiments have confirmed the effectiveness of using a cement-ash binder [Vilkov, 1984]. The article provides recommendations on the production technology of the expanded clay concrete mixture and the determination of its optimal composition. Samples of expanded clay concrete on a cement-ash binder with varying degrees of reinforcement and dimensions of 100x150x1200 mm were made and tested. The bearing capacity, crack resistance, and deformability of the expanded clay concrete were determined by testing on a hydraulic press. The compressive strength of the studied expanded clay concrete was tested by loading samples with dimensions of 10x10x10 cm and 10x10x40 cm. The prospects of using expanded clay concrete on a cement-ash binder are analyzed.
RELEVANCE
For a country in which a large number of industrial enterprises are concentrated, it is important to have an effective technology for the disposal, processing, or further use of industrial waste. The operation of metallurgical, energy, glass-smelting, and other enterprises whose technological processes include fuel combustion generates a large amount of ash. Ash is a good binding material, so it is advisable to use it in the manufacture of concrete, in particular expanded clay concrete on carbonate sand [Mikhailov et al., 1985; Stroyizdat, 1989]. A review of the literature established that a large number of scientific studies are devoted to this topic, including work at the Odessa State Academy of Civil Engineering and Architecture. The properties of expanded clay concrete on an earth-ash binder were studied by Prof. A. I. Kostiuk in his dissertation [1981]. This type of expanded clay concrete has not been fully studied and is only gaining popularity, which indicates the relevance of the research topic.
THE MAIN IDEA
To study the properties and characteristics of expanded clay concrete on carbonate sand and an ash binder, and to create a technological algorithm for its production, studies were carried out. For testing, prototypes of various sizes and configurations were made, depending on the type of test. The selection of rational compositions of expanded clay concrete on carbonate sand and cement binder with the addition of fly ash was carried out. Investigations of the strength and deformation properties of expanded clay concrete, and of their change over time, were carried out under short-term and long-term action of loads.
The critical stress intensity factor (SIF) was determined in order to assess the fracture toughness of the expanded clay concrete. The stress-strain state of bent reinforced concrete elements and structures, including prestressed ones, was investigated under short-term and long-term action of loads [Stroyizdat, 1981; Kostyuk, 1992; Sukhanov, 1987].
The selection of the composition of structural expanded clay concrete was reduced to determining the most rational ratio between expanded clay, carbonate sand, ash, cement, and water. The composition was selected to ensure the workability and compactability of the mixture. For the rational selection of the composition, the method of mathematical planning of the experiment was used. The analysis of the obtained dependencies made it possible to establish the boundary values of the variable factors used. This made it possible to prescribe compositions for concretes of classes C10/12.5-C16/20 [Stroyizdat, 1986; Kostyuk, 1992].
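The article does not reproduce the regression equations themselves, only the fact that a planned experiment yielded second-order (quadratic) dependencies of cube strength on binder consumption and on the volumetric content of expanded clay gravel. As a hedged illustration of how such a model can be fitted, the Python sketch below performs an ordinary least-squares fit of a full quadratic surface; the factor names follow the text, but every number in it is an invented placeholder, not data from the study.

# Hypothetical sketch: fitting a second-order regression surface
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# for cube strength y (MPa) versus binder consumption x1 (kg/m3)
# and expanded clay gravel content x2 (volume fraction).
import numpy as np

x1 = np.array([250, 300, 350, 250, 300, 350, 250, 300, 350], dtype=float)
x2 = np.array([0.30, 0.30, 0.30, 0.35, 0.35, 0.35, 0.40, 0.40, 0.40])
y = np.array([12.5, 15.1, 16.8, 13.0, 15.8, 17.5, 12.2, 14.9, 16.1])   # placeholder strengths

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])  # design matrix
coef, *_ = np.linalg.lstsq(X, y, rcond=None)                            # least-squares fit

def strength(b, c):
    # Predicted cube strength for binder consumption b and gravel content c.
    return coef @ np.array([1.0, b, c, b**2, c**2, b * c])

print(coef)                 # fitted regression coefficients
print(strength(320, 0.35))  # predicted strength at an interior point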
To test centrally compressed elements, test cubes with dimensions of 10x10x10 cm and prism samples with dimensions of 10x10x40 cm were made from expanded clay concrete on carbonate sand and a cement-ash binder. All samples were tested after heat and humidity treatment at the ages of 28, 180, and 360 days. From the strength data, quadratic regression equations were obtained. Their analysis showed that the consumption of the cement-ash binder and the volumetric content of expanded clay gravel in the concrete have a significant effect on the cube strength of steamed expanded clay concrete [Buzhevich, 1978; Stroyizdat, 1989; Kostyuk, 1992]. | 2021-08-19T19:56:44.597Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "999917e8b9d7f041b2eeda983466f89bfc2dbfe1",
"oa_license": "CCBYNCSA",
"oa_url": "https://portalrevistas.uct.cl/index.php/safer/article/download/2570/2131",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4f94fe918b1daed5b8a6fa3d52e55a811afb5d1",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
245007464 | pes2o/s2orc | v3-fos-license | Study of bathymetry in determining Nuclear Power Plant Site in Gosong Beach Waters, West Kalimantan
Gosong Beach waters are located in Bengkayang Regency, West Kalimantan, where the water conditions are influenced by the South China Sea. This study aimed to determine the bottom morphology (bathymetry) profile, which was used as initial data for determining the location of a nuclear power plant site in the study area. Bathymetry measurements were conducted using a single beam echosounder and corrected with tidal data to determine the chart datum value, which was then used to correct the measured sea depths. Tidal data were measured for 15 days in March 2021 at 1-hour intervals. The results showed that the tide is of the mixed, prevailing semidiurnal type, with a Formzahl (F) value of 0.77. The water depth at the study site ranges from 0.9 to 8.45 meters. The average coastal slope varies spatially, with percentage values ranging from 0.13 to 2.29%, which falls in the flat coastal slope classification.
Introduction
The Gosong coastal waters of Bengkayang Regency, West Kalimantan, are shallow sea waters whose environmental conditions are influenced by the South China Sea. Mapping shallow marine waters is important for studying bottom morphology, the marine environment, and coastal resource management [1]. The use of space in coastal areas must be supported by oceanographic data and information, one source of which is bathymetric surveys [2][3].
A bathymetric survey is the process of describing the bottom of a body of water, from field measurements and data processing through to visualization, to obtain information about the shape (topography) of the bottom surface [4]. One of the methods used in bathymetry measurement is the acoustic method. This method comprises several systems, one of which is the single beam echosounder, a water-depth measuring instrument that uses a single beam as the transmitter and receiver of sound wave signals [5]. The system measures water depth directly from the survey vessel, taking into account the sound propagation process, sound characteristics, environmental or medium factors, and target conditions [6][7]. An earlier study [8] examined bathymetry and seabed morphology for determining a subsea pipeline route, finding depths ranging from 1.09 to 13.78 meters with a very gentle slope. A similar bathymetric study conducted by Satriadi [9] found water depths in the study area of between 0 and 9 meters with slope values ranging from 0.18% to 0.20%. Pangestu et al. [10] investigated these parameters for a river estuary, revealing depths of 0.5 to 4.2 meters and a sloping coastal profile in which the cross-section along the river direction steepens toward the sea. Many aspects of ocean use, spatial planning, the marine environment, and aquaculture require bathymetric data [11]; one of them is the feasibility study for the planned construction of a Nuclear Power Plant (PLTN).
The plan to build a nuclear power plant on the Gosong coast, Bengkayang Regency, West Kalimantan, aims to meet national electricity needs, spur industrial growth, and improve community welfare. In connection with this development plan, it is necessary to study the bathymetry and bottom morphology of the waters in the area to provide baseline data in preparation for the development of nuclear power plants in Bengkayang Regency. This study aimed to determine the shape of the bottom profile of the waters, to be used as an initial survey for determining the location of the nuclear site in the waters of the study area.
Data Acquisition
This research was conducted in Gosong Beach, Bengkayang Regency, West Kalimantan (Figure 1). Depth data retrieval (sounding) was carried out using a Garmin GPSMap 585 single beam echosounder and then corrected with tidal data to determine the chart datum value. The chart datum is used as the reference level for the water depths, to which the sounding data are adjusted. The tidal data were measured for 15 days in March 2021 at 1-hour intervals.
Data analysis
Tidal data processing begins with analyzing the tide records using the Admiralty method to obtain the values of the tidal harmonic constituents. According to Rawi [13], the Admiralty method is one of the methods used to determine the harmonic constants and the tidal type of a site. The analysis yields the amplitudes of the main constituents (M2, S2, K1, O1). These constituents are used to calculate the Formzahl number (F), which determines the type of tide, and the chart datum (Z0), which is used to correct the sounding data to obtain the actual sea depth. The Formzahl number [14][15] is given by

F = (O1 + K1) / (M2 + S2),

while the chart datum Z0 is determined from the harmonic constants relative to mean sea level (MSL) [16]. Furthermore, the bathymetric measurement data must be corrected to the sea level position at the time of measurement, and a correction is applied for the transducer draft, so that the actual depth is obtained. The tide correction is formulated as

rt = TWLt - (MSL - Z0),

where rt is the amount of reduction (correction) applied to the depth measured at time t, TWLt is the actual sea level (true water level) at time t, MSL is mean sea level, and Z0 is the depth of the chart datum below MSL. After obtaining the reduction between the sounding depth and the tidal value, the actual depth is determined as

D = dT - rt,

where D and dT are the actual depth and the transducer-corrected depth, respectively. The corrected sea depth data are interpolated using the kriging method to obtain depth contours. Furthermore, the slope of the coast can be calculated from the ratio of depth to horizontal distance [17],

tan α = d / x (expressed as a percentage, 100 × d / x),

where α is the beach slope (°), d is the water depth (m), and x is the distance from the shoreline to depth d (m).
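The workflow above can be condensed into a short numerical sketch. This is an illustration only: the constituent amplitudes, chart datum, and sounding values below are placeholders (chosen so that F comes out near the reported 0.77), not the measured constants of Table 1, and the sign convention follows the definitions given in the text.

# Illustrative sketch of the tide-type classification and depth-reduction steps.
def formzahl(K1, O1, M2, S2):
    # Formzahl number F = (O1 + K1) / (M2 + S2)
    return (O1 + K1) / (M2 + S2)

def tide_type(F):
    # Standard Formzahl classification of tidal type
    if F <= 0.25:
        return "semidiurnal"
    if F <= 1.5:
        return "mixed, prevailing semidiurnal"
    if F <= 3.0:
        return "mixed, prevailing diurnal"
    return "diurnal"

def reduced_depth(d_transducer, TWL_t, MSL, Z0):
    # Depth referred to chart datum: D = dT - rt, with rt = TWL_t - (MSL - Z0)
    r_t = TWL_t - (MSL - Z0)
    return d_transducer - r_t

def slope_percent(d, x):
    # Coastal slope (%) from depth d reached at distance x from the shoreline
    return d / x * 100.0

F = formzahl(K1=0.30, O1=0.25, M2=0.45, S2=0.26)        # placeholder amplitudes (m)
print(round(F, 2), tide_type(F))                        # ~0.77, mixed prevailing semidiurnal
print(reduced_depth(5.0, TWL_t=2.5, MSL=1.91, Z0=1.0))  # tide-corrected depth (m)
print(slope_percent(d=8.0, x=900.0))                    # slope in percent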
Tides in the coastal waters of Gosong Beach, Bengkayang Regency
Based on the measurement results processed using the Admiralty method, the tidal harmonic constants in the waters of Gosong Beach, Bengkayang Regency, West Kalimantan, are as shown in Table 1. The harmonic constant data are used to determine the Formzahl number, the mean sea level (MSL), and the chart datum; the Formzahl number is then used to determine the type of tide at the location. From the tidal harmonic constants above, the average sea level height obtained is 1.91 meters and the Formzahl (F) value is 0.77. Based on this Formzahl value, the tide in the waters of Gosong Beach, Bengkayang Regency, is a mixed tide prevailing semidiurnal. According to Wyrtki [13], this type of tide has two high tides and two low tides in one day, and sometimes one high tide and one low tide, with different heights and times of occurrence. This tidal type is similar to the results of research conducted by Muhardi et al. [18] in the waters of Kura-Kura Beach, Bengkayang Regency, which is not far from Gosong Beach. The pattern of the tidal measurements obtained in the field is consistent with the tidal predictions obtained from the Geospatial Information Agency (BIG) (Figure 2). According to Boggs [19], tides are, besides currents, one of the factors that affect sediment distribution patterns, because at high and low tide sediments carried by seawater can be transported away from or toward the coast, so that tides also influence changes in the bottom morphology of the waters in the study area.
Bathymetry in the coastal waters of Gosong Beach, Bengkayang Regency
Based on the results of the field measurements, the depth varies from 0.9 to 8.45 meters. The map in Figure 3 shows colour gradations and contour lines representing the detected depths, with depths of less than 3.6 meters visible along the coastline. According to Triadmodjo [20], the coastal area is the area of land affected by tidal movements, the zone extending from the mean lowest low-water level to the mean highest high-water level. Depth changes occur from the river mouth toward the sea: the colour gradation changes to dark blue, indicating that the depth increases from the estuary seaward. This happens because the sedimentation process in the estuary area causes a buildup of sediment at the river mouth. Closely spaced contours indicate a rapid increase in depth, i.e. steep topography [21], as at several points in the west and southwest (3-8 meters).
Cross-section seabed morphology
The cross-section profile is a vertical view that describes the configuration of the earth's surface along a selected line on a topographic or bathymetric map [22]. The cross-sectional profile of the waters at this location is divided into four cross-sections (A-D), as shown in Figure 4. The average coastal slope varies spatially, with percentage values ranging from 0.13 to 2.29% (Figure 5). Based on these slope values, the coastal waters of Gosong fall into the flat coastal slope classification; according to Van Zuidam [23], the flat slope class covers the range 0-3%. The flat coastal slope is thought to be related to the wide intertidal areas of the coast, resulting from abrasion along the coastal area and from the supply of sediment entering the sea through rivers [25]. | 2021-12-10T20:06:58.834Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "0ff26a6a1828a0ee3843aea9fd270ed653ad4f9f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/944/1/012051",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0ff26a6a1828a0ee3843aea9fd270ed653ad4f9f",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
211472790 | pes2o/s2orc | v3-fos-license | Preparation of Scalable Silica‐Coated Iron Oxide Nanoparticles for Nanowarming
Abstract Cryopreservation technology allows long-term banking of biological systems. However, a major challenge to cryopreserving organs remains in the rewarming of large volumes (>3 mL), where mechanical stress and ice formation during convective warming cause severe damage. Nanowarming technology presents a promising solution to rewarm organs rapidly and uniformly via inductive heating of magnetic iron oxide nanoparticles (IONPs) preloaded by perfusion into the organ vasculature. This use requires the IONPs to be produced at scale, heat quickly, be nontoxic, remain stable in cryoprotective agents (CPAs), and be washed out easily after nanowarming. Nanowarming of cells and blood vessels using a mesoporous silica-coated iron oxide nanoparticle (msIONP) in VS55, a common CPA, has been previously demonstrated. However, production of msIONPs is a lengthy, multistep process and provides only mg Fe per batch. Here, a new microporous silica-coated iron oxide nanoparticle (sIONP) that can be produced in as little as 1 d while scaling up to 1.4 g Fe per batch is presented. The sIONPs' high heating, biocompatibility, and stability in VS55 are also verified, and the ability to perfusion-load and wash out sIONPs from a rat kidney, as evidenced by advanced imaging and ICP-OES, is demonstrated.
Organs can be preserved in this glassy or vitrified state indefinitely. [7] The availability of such cryogenically banked organs could increase organ transplant utilization, improve short- and long-term graft function, and increase overall patient survival. [4] Achieving the cryogenic state requires cooling organs fast enough to avoid ice crystal formation. This critical cooling rate (CCR) is a function of the type and amount of cryoprotective agents (CPAs) used. For instance, the CCR for VS55, a commonly used CPA, is 2.5 °C min−1, an achievable rate for most systems as shown by Fahy et al. with rabbit kidneys. [7,8] However, even if this is achieved, rewarming requires a much faster critical warming rate (CWR), such as 50 °C min−1 for VS55, [8c] to avoid devitrification (i.e., ice formation during the rewarming). [9] Convective heating by immersion in a warm water bath is adequate for samples with volumes below 3 mL and for cells in suspension or small tissues in a cryovial or cryobag. However, convective warming is often too slow to achieve the needed CWR in larger samples (>3 mL) due to the inability to quickly warm the center of the sample, thereby leading to devitrification. Furthermore, fractures and cracks within the tissue are also caused by thermal stress between the tissue edge and center. [10] Electromagnetic rewarming of cryopreserved systems at tens to thousands of MHz (including microwave) can be achieved by dielectric heating, [11] which has been applied with limited success to several systems (Table S1, Supporting Information). [12] Unfortunately, the strong temperature dependence of the dielectric properties can increase heating in any spot with an elevated temperature (i.e., center, edge, or other) while low thermal conductivity prevents heat from spreading, thereby leading to "thermal runaway" as a failure mode. [12d,13] Furthermore, nonuniformity in the field and its absorption (i.e., dissipation, distortion, and shape effects) can be a further hindrance to uniform rewarming. For instance, optimal rewarming is predicted only for "small" spheres whose circumference is less than the wavelength of the radiation, due to power dissipation and field distortion. [13b,14] Moreover, dielectric property variation due to the heterogeneous components in organs [15] also makes it impossible to rewarm vitrified organs solely by electromagnetic heating. [12b,c] To circumvent these problems, a method whose heat generation can be spread sufficiently uniformly in a large system is still needed.
Nanowarming is a new method of volumetric rewarming that theoretically has no size limit and has been demonstrated for volumes as high as 80 mL. [16] Briefly, nanowarming produces rapid and uniform heating in a sample through inductive heating of iron oxide nanoparticles (IONPs) within a radio-frequency coil (alternating magnetic field, with frequency of ≈100-400 kHz). With IONPs distributed throughout the sample, rapid and uniform warming occurs, which eliminates cracking and ice formation and results in improved recovery and tissue viability. [16] We note that hybrid heating methods, such as assisting electromagnetic heating with conduction heating [14,17] or with magnetic nanoparticles, [18] are also being investigated as potential solutions for large-volume rewarming (Table S1, Supporting Information). However, their success has so far been limited to large-volume (20 mL) cell suspensions, whereas nanowarming is theoretically fully scalable to liter-sized organ systems. [16b]
EMG308, which demonstrated the first proof of principle of nanowarming in solutions, [16a] is inexpensive and easily obtained; however, EMG308 is not sufficiently biocompatible, is taken up by cells, and is unstable in CPA. Previously, we coated EMG308 with mesoporous silica (msIONP) to provide stability in CPAs and achieved the first biological demonstration of nanowarming with cells and simple tissues. [16b] However, msIONP synthesis, aging, deoxygenation, hydrothermal treatment, and purification are laborious and take several days; the high reaction temperature and the need to remove a toxic surfactant, cetyltrimethylammonium bromide (CTAB), from the pores inhibited scaled-up production of msIONPs. [22] One batch of msIONP synthesis could produce 35 mg Fe of msIONPs, which is far below the demand for organ nanowarming (Table S3, Supporting Information). Herein, we report the synthesis and characterization of silica-coated EMG308 (sIONPs), which eliminated the need for CTAB in synthesis and shortened the overall time frame (Table S2, Supporting Information). Currently, we can produce >20 g (1.4 g Fe) of sIONPs per batch in a 4 L reaction vessel, allowing for the scale-up required for the current rodent organ nanowarming studies (Table S3, Supporting Information). This sIONP production can be further scaled up to larger quantities with a larger reaction vessel.
Our sIONPs were fully characterized using multiple analytical methods. For instance, cores and coating were assessed using transmission electron microscopy (TEM), dynamic light scattering (DLS), zeta potential, inductively coupled plasma-optical emission spectroscopy (ICP-OES), infrared spectroscopy (IR), X-ray photoelectron spectroscopy (XPS), nitrogen adsorption analysis, and thermal gravimetric analysis (TGA). Further, the physical properties that are important to nanowarming, especially the heating and magnetic properties, were studied as a function of shell thickness and colloidal stability in VS55, a commonly used CPA. Biological assessments included measurement of cytotoxicity and cellular uptake in human dermal fibroblasts (HDFs, ATCC) and nanowarming of HDFs. Finally, we also present, for the first time, a demonstration that an ex vivo rat kidney can be uniformly loaded with CPA and sIONPs and subsequently washed out. sIONP loading and washout were evaluated using microcomputed tomography (µCT) and magnetic resonance imaging (MRI). [23] In short, we demonstrate that sIONPs are an effective and scalable embodiment of IONPs for nanowarming use in organs.
Silica Coating of EMG308 (sIONP)
EMG308 is a commercially available IONP that heats well in water and is relatively inexpensive. [16a] However, due to aggregation, the heating ability of EMG308 is significantly lowered in complex media other than water (i.e., saline or protein solutions). [21] Previous successes in nanowarming of biological samples were achieved with msIONPs owing to the high stability endowed by their polyethylene glycol (PEG)/trimethyl silane (TMS) coating. [16b,22] However, the quantity of IONPs required for organ nanowarming is much greater than that needed in previous uses of nanowarming for arteries or in cancer therapeutics (see Table S3, Supporting Information).
sIONP Synthesis and Morphology
The silica shell was coated onto EMG308 using a modified Stöber method. [24] After the silica shell was formed, the surface was modified with PEG and a small hydrophobic ligand, TMS. PEG is well known as a biocompatible, antibiofouling polymer that increases nanoparticle circulation time in the body, while the TMS serves as a spacer between the PEG chains, helping them extend fully and thereby providing stability in solution. [22,25] Polyvinyl pyrrolidone (PVP) has been demonstrated to be a universal surface modifier for coating colloidal particles with silica, [22,26] so our synthesis used PVP as an intermediate layer for silica coating on EMG308. Although other groups have reported direct silica coating on EMG304, [27] attempts to produce a silica shell on EMG308 without PVP resulted in free silica and bare EMG308 cores, likely due to the different surfactants on EMG308 versus EMG304 (data not shown).
The scheme for sIONP synthesis is shown in Figure 1a. The silica shell thickness can be easily tuned by varying the amount of the silica precursor, tetraethoxysilane (TEOS), added to the reaction. Iron quantification by ICP-OES shows a linear correlation between the IONP core volume fraction and the iron weight percentage (Figure S1, Supporting Information). The thickest silica shell produced with single-step TEOS addition was 45 nm (Figure 1b). Excess TEOS addition resulted in free silica in the product (Figure S2a, Supporting Information). Although thicker silica shells could be achieved by multistep addition of TEOS (Figure S2b,c, Supporting Information), the following characterization and applications focused on sIONPs formed with single-step TEOS addition. EMG308 is naturally polydisperse and contains small IONP agglomerates in solution. When the silica shell was thin, the silica was homogeneously coated on the EMG308 cores regardless of their shape and aggregation (Figure 1b), and the resulting sIONPs were relatively polydisperse (polydispersity = 0.170 at a shell thickness of 11 nm). When the silica shell was thicker, the core polydispersity was hidden and the resulting sIONPs became more spherical and monodisperse. [27] However, thicker shells increased the volume occupied per particle and therefore reduced the total Fe that could be suspended in the solution. For instance, sIONPs with 45 nm silica shells could only be concentrated up to 5 mg Fe mL−1 in water, whereas those with 18 nm silica shells could be concentrated up to 40 mg Fe mL−1. Nanowarming applications require high heating rates, so solutions of higher Fe concentration are preferred. Thus, we chose sIONPs with an 18 nm silica shell thickness, which are monodisperse (polydispersity = 0.080) while still allowing a high Fe concentration (40 mg Fe mL−1) in water. Further surface characterization, scale-up, and biological experiments were pursued with this 18 nm shell thickness sIONP embodiment.
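To make the geometric trade-off between shell thickness and achievable iron content concrete, a rough back-of-the-envelope sketch is given below. The assumed core diameter (10 nm), bulk densities, and single-core spherical geometry are illustrative assumptions only and are not taken from this study; real sIONPs are polydisperse and often multicored, so only the trend is meaningful.

```python
# Hypothetical sketch (not from the paper): how the iron weight fraction of a single
# spherical core-shell particle falls as the silica shell thickens.
import math

RHO_FE3O4 = 5.2e-21        # g per nm^3 (~5.2 g cm^-3, magnetite)
RHO_SIO2 = 2.2e-21         # g per nm^3 (~2.2 g cm^-3, amorphous silica)
FE_FRACTION_IN_FE3O4 = 3 * 55.85 / 231.5   # ~0.72 mass fraction of Fe in Fe3O4

def fe_weight_percent(core_diameter_nm: float, shell_nm: float) -> float:
    """Fe weight percent of one spherical Fe3O4 core with a uniform silica shell."""
    r_core = core_diameter_nm / 2.0
    v_core = 4.0 / 3.0 * math.pi * r_core ** 3
    v_total = 4.0 / 3.0 * math.pi * (r_core + shell_nm) ** 3
    m_core = v_core * RHO_FE3O4
    m_shell = (v_total - v_core) * RHO_SIO2
    return 100.0 * m_core * FE_FRACTION_IN_FE3O4 / (m_core + m_shell)

for shell in (11, 18, 45):
    print(f"shell {shell:>2} nm: ~{fe_weight_percent(10.0, shell):.2f} wt% Fe per particle")
```

Under these assumptions the iron fraction per particle drops steeply with shell thickness, which is consistent with the observation that thick-shelled sIONPs cannot be concentrated as highly in water.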
Similar to the msIONPs, the sIONPs were not all single-core nanoparticles (as shown in Figure 1b); around 50% of the sIONPs (Figure S3, Supporting Information) were multicored due to core aggregation prior to or during PVP coating. Niculaes et al. reported that small iron oxide nanocube clusters (dimers and trimers) increased specific absorption rate (SAR) values, while centrosymmetric clusters with more than four cores led to lower SAR values. [28] Moreover, a 1D (chain) arrangement of IONPs also increases heating ability owing to the dipolar interaction effect. [28,29] Overall, multicoring did not significantly affect the heating ability of the sIONPs, as discussed later (see Section 2.2.3).
The porous structure of the silica shell was observed in TEM images (Figure 1b; and Figure S4, a high-resolution TEM image, Supporting Information). We hypothesized that the porosity in the silica was due to the addition of PVP. A direct comparison with silica coating on EMG308 without PVP was impossible due to our inability to coat EMG308 with silica in the absence of PVP. Instead, a comparison was made with pure silica nanoparticles (SiNPs) synthesized with and without PVP. Similar to the sIONPs, SiNPs synthesized with PVP were less electron dense and revealed a porous structure in TEM (Figure 1c), while SiNPs made without PVP were more electron dense and solid. Although SiNPs prepared by the Stöber method are sometimes claimed to be microporous, [30] incorporating molecules into the silica matrix ensures larger pore formation. [31] Macromolecules such as tannic acid have been reported as templates for large mesopores. [32] Small molecules such as glycerol have been reported as templates for microporous silica. [31] Fujita et al. reported the ability to create hollow silica nanoparticles using hydrophobic amines, with confined globular PVP composites acting as templates. [33] PVP has also been used as a co-template with mesopore-forming templates, such as CTAB and triblock copolymers, to assist the formation of hollow mesoporous silica spheres [34] and rod-shaped mesoporous silica [35] through association with micelle structures. Although PVP has been used to assist silica coating of a variety of nanoparticles, [22,26] it has not, to our knowledge, been previously reported as a porogen in silica.
sIONPs with and without surface modification were analyzed by nitrogen adsorption analysis. Both isotherms showed hysteresis at high relative pressure, indicating interparticle spaces due to the rough surface (Figure 1d). The unmodified sIONPs showed a steep increase in adsorption at very low relative pressure (0.0037), together with hysteresis at low relative pressure, which indicates microporosity. Accordingly, the pore size distribution calculated by the Barrett-Joyner-Halenda method showed micropores ≤2 nm within the silica shell of the unmodified sIONPs. Unfortunately, the pore size distribution below 2 nm could not be obtained due to instrumental limitations. Moreover, the PEG surface modification blocked the pores, so no pore structure could be detected on the modified sIONPs.
Scale-Up
One notable advantage of sIONPs is their simple synthesis method, which allows scaled-up production to gram quantities in the lab. This requires the use of a probe sonicator (Q500, Qsonica; rather than the bath sonicator reported previously [22] ) and an overhead mechanical stirrer (rather than a magnetic stirrer [22] ). Unlike a bath sonicator, which is mild and does not deliver uniform power across the bath, the probe sonicator is at least 100 times more powerful, and its amplitude is controllable and tunable. [36] By switching the probe diameter, we were able to sonicate solution volumes ranging from 1 mL to several liters. A magnetic stirrer was found to be inadequate for producing msIONPs in reaction solutions above 300 mL, as this resulted in polydisperse nanoparticles and IONP cores shifted from the center to the edge of the silica coating. With an overhead mechanical stirrer (OS20-S, Waverly), sIONPs can be produced in a 4 L reaction vessel. The scaled-up sIONP synthesis protocol currently yields above 20 g (or 1.4 g Fe) of sIONPs per batch, which is more than 80 times the original msIONP synthesis yield (0.017 g Fe). [22] We believe sIONPs could be further scaled up in a larger reaction vessel in the near future with our industry collaborator. [37]
Surface Characterization
Besides the direct observation of silica shells in TEM images, XPS confirmed the silica coating on the EMG308 through the presence of the Si 2p binding energy peak and the disappearance of the Fe binding peaks (Figure S5, Supporting Information). The N 1s peak observed is due to residual ammonium (catalyst) used in the silica coating reaction. The bare and modified sIONPs show no difference in morphology (data not shown), while IR revealed organic ligands on the surface of the modified sIONPs (Figure S6, Supporting Information). Due to the low ratio of organic material to the bulk inorganic nanoparticle, the IR peaks were weak yet still informative. The peaks arising in the modified sIONPs at 3450, 2880, 1460, and 1340 cm−1 were assigned to OH stretching, CH stretching, CH bending from PEG, and methyl groups on the surface, respectively. [38] The bands due to C-O, C-O-C, and C-O-H modes overlapped with the SiO2 peak in the region of 1300-900 cm−1 and could not be identified. [38,39] The small peaks in the region of 2200 cm−1 were due to CO2 in the air.
TGA showed similar weight percentage losses for both bare and modified sIONPs (Figure S7, Supporting Information). The major (≈12%) weight loss observed is caused by the imprinted PVP, which helps create the micropores in the silica shell. [40] PVP is considered a low-toxicity polymer [41] and has been added to CPA cocktails to alter the CCR; [7] therefore, the PVP in sIONPs was not a toxicity concern in our study. Moreover, the toxicity studies (see Section 2.3.1) indicate no impact on toxicity from the imprinted PVP in sIONPs. The first weight loss (between 30 and 200 °C) was due to water evaporation. The second weight loss (between 400 and 500 °C) was due to the decomposition of organics, mostly PVP. [40] Although the weight loss from PEG and TMS decomposition was negligible compared to that from the imprinted PVP, the first-derivative thermodecomposition temperature of the modified sIONPs shifted to a higher value (from 423 to 454 °C) relative to the bare sIONPs, indicating covalent bonding of the organics on the sIONPs, since the thermodecomposition temperature of nanoparticle-bonded polymers is higher than that of free polymers. [42] Moreover, the decrease in the magnitude of the sIONP zeta potential from −72 ± 4 to −40.29 ± 8 mV after modification also suggests attachment of PEG and methyl groups that partially neutralized the negatively charged silanol groups on the surface.
Colloidal Stability in CPAs
Aggregation of IONPs in CPAs leads to reduced heating, [21] heterogeneous distribution of IONPs and, as we will show, blocking of the vasculature. Colloidal stability of IONPs in CPAs is more challenging than in other biological media (i.e., saline or phosphate buffered saline (PBS)) due to the high viscosity and high concentrations of salts, sugars, and organics. For instance, IONPs that are stable in PBS, such as PBG300, a PEGylated IONP from Ferrotec, are not stable in CPAs (data not shown). We presume this is because the PEG is adsorbed rather than covalently bound to the IONPs.
sIONPs and EMG308 were dispersed in VS55 and monitored for colloidal stability. Although we previously observed EMG308 crashing out of VS55 within several hours, [16b] we now show EMG308 aggregation in VS55 by DLS measurements over time (Figure 2a). Using the measured viscosity (3.26 cP) and refractive index (1.379) of VS55 at 23 °C (method described in the Experimental Section), we were able to calculate the hydrodynamic diameters of sIONPs in VS55 and found them to be essentially identical to those measured in water (104 nm in H2O and 106 nm in VS55). Interestingly, the aggregation of EMG308 started immediately after dispersion in VS55. Within 30 min, the average hydrodynamic diameter of EMG308 aggregates grew to over 1 µm, and the polydispersity of the aggregates kept increasing. EMG308 was visually confirmed to crash out of VS55 by the fourth hour (Figure 2b). Moreover, the aggregated EMG308 remained aggregated even after intensive sonication (data not shown). On the other hand, sIONPs, owing to their surface modification, were stable in VS55 for at least 6 months at room temperature (Figure 2b).
[Figure 2 caption: Colloidal stability of EMG308 and sIONPs. a) Hydrodynamic diameters of EMG308 and sIONPs measured by DLS in H2O and in VS55 over the first 5 h; EMG308 aggregates in VS55, while sIONPs remain stable. b) Photos of EMG308 and sIONPs in VS55 at 0 h, 4 h, and 6 months; EMG308 completely crashes out of VS55 within 4 h, while sIONPs remain suspended for at least 6 months.]
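As a side note on the DLS sizing above, the sketch below illustrates why the solvent viscosity must be supplied to obtain a correct hydrodynamic diameter in VS55; it is a minimal Stokes-Einstein calculation, not the instrument software. The VS55 viscosity (3.26 cP at 23 °C) and the ~105 nm sIONP size come from the text; the water viscosity value is a textbook figure, and the calculation is purely illustrative.

```python
# Minimal Stokes-Einstein sketch: d_H = k_B * T / (3 * pi * eta * D)
import math

K_B = 1.380649e-23     # J/K
T = 296.15             # K (23 C)
ETA_VS55 = 3.26e-3     # Pa*s, measured viscosity of VS55
ETA_WATER = 0.93e-3    # Pa*s, approximate water viscosity at 23 C

d_true = 105e-9                                          # m, sIONP hydrodynamic diameter in VS55
D_meas = K_B * T / (3 * math.pi * ETA_VS55 * d_true)     # diffusion coefficient the DLS actually measures
d_wrong = K_B * T / (3 * math.pi * ETA_WATER * D_meas)   # size reported if water viscosity were assumed

print(f"measured diffusion coefficient: {D_meas:.2e} m^2/s")
print(f"apparent size with water viscosity: {d_wrong * 1e9:.0f} nm (true size ~105 nm)")
```

In other words, analyzing VS55 data with water parameters would overestimate the particle size roughly threefold, which is why the measured VS55 viscosity and refractive index were used.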
Heating Properties
Heating capability of IONPs is an essential factor for nanowarming applications. Figure 3a shows that the sIONP specific absorption rate per unit iron weight (SAR Fe) was constant across different Fe concentrations in water. Under the same conditions, however, EMG308 showed a significant decrease in heating ability as the Fe concentration increased and interparticle interference, which is known to influence locally induced magnetic fields, became more intense. [21] It is likely that the silica coating acts as a steric spacer between the IONP cores, making sIONPs less affected by interparticle interactions. [43] This silica steric buffer is therefore an important advantage for maintaining high heating with sIONPs at high (>10 mg Fe mL−1) concentrations during nanowarming.
Direct heating comparisons for IONPs described in the literature are usually carried out in water. However, for nanowarming applications, it is vital that heating capability be assessed within CPAs. For instance, when changing the carrier solution from water to VS55, the SAR of EMG308 was reduced by more than 50% due to aggregation (Figure 3b). Moreover, no heating was detected once EMG308 crashed out of the solution. Although the heating rates of the sIONPs were higher in VS55 than in water, owing to the lower specific heat capacity of VS55, [44] the actual SAR in VS55 is lower than in water. Nevertheless, the SARs of sIONPs with various silica shell thicknesses remained constant in both VS55 and water. The SARs of EMG308 and of sIONPs with different silica shell thicknesses in water and in VS55 were obtained by measuring 1 mg Fe mL−1 samples; at this low Fe concentration, interparticle interference is negligible.
[Figure 3 caption: Heating performance, measured as SAR per gram Fe (at 360 kHz, 20 kA m−1), of a) EMG308 and sIONP water suspensions at increasing concentrations (**p < 0.01) and b) EMG308 and sIONPs with different shell thicknesses in water and in VS55; data for EMG308 in VS55 were acquired while EMG308 remained in suspension after vortexing, and SAR was negligible once EMG308 crashed out of solution.]
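For readers unfamiliar with how SAR values like those in Figure 3 are obtained, the sketch below illustrates the general procedure, consistent with the calculation described later in the Experimental Section: a linear fit of the initial temperature rise, with the blank (container plus solvent) heating subtracted and the result normalized to the iron content. The temperature traces, specific heat, and concentration below are invented placeholders, not data from this study.

```python
# Hedged sketch of SAR extraction from an induction-heating temperature trace.
import numpy as np

def sar_w_per_g_fe(t_s, temp_sample_c, temp_blank_c, cp_j_per_g_k, fe_mg_per_ml, rho_g_per_ml=1.0):
    """SAR_Fe = c_p * (dT/dt_sample - dT/dt_blank) / (g Fe per g solution)."""
    slope_sample = np.polyfit(t_s, temp_sample_c, 1)[0]   # K/s from linear regression
    slope_blank = np.polyfit(t_s, temp_blank_c, 1)[0]     # K/s background heating
    heat_per_g_solution = cp_j_per_g_k * (slope_sample - slope_blank)   # W per g of solution
    fe_g_per_g_solution = fe_mg_per_ml * 1e-3 / rho_g_per_ml
    return heat_per_g_solution / fe_g_per_g_solution       # W per g Fe

t = np.arange(0.0, 30.0, 1.0)            # first 30 s of heating
sample_trace = 22.0 + 0.12 * t           # hypothetical sample trace (0.12 K/s)
blank_trace = 22.0 + 0.01 * t            # hypothetical blank trace (0.01 K/s)
print(sar_w_per_g_fe(t, sample_trace, blank_trace, cp_j_per_g_k=4.18, fe_mg_per_ml=1.0))
```

The same arithmetic also explains the VS55 observation above: a lower specific heat means a faster temperature rise for the same absorbed power, so a higher heating rate does not necessarily imply a higher SAR.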
One important question is whether the nanoparticles themselves change the thermal properties of, or otherwise influence nucleation and/or crystallization in, CPA solutions. Previous work on EMG308, the core IONP used in our sIONP formulation, showed a negligible impact on the thermal properties of VS55 during cooling and rewarming. [16a,45] Indeed, one study even showed that EMG308 suppresses nucleation and stabilizes the glassy state. [45] However, other studies suggest that certain surface formulations can promote devitrification in VS55. [46] While we have seen no evidence to suggest relevant changes to the thermal conductivity or to the vitrification and devitrification behavior of VS55 with sIONPs, ongoing work in our laboratory and those of our collaborators is expected to answer this question.
Magnetic Properties
Because heating arises from the magnetic properties of the sIONPs, we undertook further characterization by magnetometry. Figure 4a shows the hysteresis loops of powdered EMG308 and sIONPs with various shell thicknesses, measured at room temperature. All IONPs showed negligible DC hysteresis, indicating that they are all superparamagnetic at room temperature. The saturation magnetization remained essentially constant across samples, [47] and the negligible variations between samples were likely due to measurement error rather than to the silica coating.
[Figure 4 caption: a) Hysteresis loops of EMG308 and sIONPs with various shell thicknesses; the influence of the silica shell on the EMG308 saturation magnetization is negligible. b) The longitudinal relaxation rate constant (R1) at 9.4 T of EMG308, msIONPs, and sIONPs with various shell thicknesses at 1 mg Fe mL−1; for comparison, R1 at 1 mg Fe mL−1 is shown for EMG308 (shell thickness = 0 nm) and msIONP. [23b] Error bars (standard deviations) are smaller than the markers. The data show that the silica coating hinders water access to the core and reduces R1.]
MRI measurements of the relaxation time constants (i.e., the longitudinal and transverse relaxation times, T1 and T2, respectively) are valuable for evaluating sIONP distributions within an organ. [16b,23b] Low concentrations (<0.01 mg Fe mL−1) of IONPs, such as those observed after organ washout, can be measured using a T2-based MRI method. [23b] Higher concentrations of IONPs (0.01-3 mg Fe mL−1), such as those present in the microvasculature of an IONP-loaded organ, can be measured using a T1-based MRI method with ultrashort echo time, such as sweep imaging with Fourier transformation (SWIFT). [48] Initial measurements focused on the longitudinal relaxation rate constant R1 (= 1/T1) over a range of silica shell thicknesses at 1 mg Fe mL−1 in 1% agarose (Figure 4b). The R1 values for all of the sIONPs are lower than those for uncoated EMG308 and msIONPs. [23b] The decrease in R1 can be attributed to reduced water accessibility to the core. R1 is observed to change very little between 6 and 45 nm shell thicknesses. This is a deviation from Pinho et al., who reported a dramatic decrease in R1 when the silica shell thickness increased from 7.6 to 42.7 nm. [47] We hypothesize that the discrepancy is best attributed to further inhibition of water diffusion by the PEG coating. A similar impact of PEG on decreased pore accessibility was shown in the nitrogen adsorption data (see Section 2.1.1). Additionally, the relaxation rate as a function of concentration, that is, the relaxivity (r1 and r2), and the heating rate (volumetric specific absorption rate, SARv) were measured on sIONPs with a 16 nm silica shell suspended in 1% agarose and in VS55, both used to mimic the nanowarming environment. The measured r1 is lower than previously published r1 values for msIONPs and EMG308. [22,23b] A good correlation between R1 and heat production (SARv) was observed (Figure S8, Supporting Information). Furthermore, the lower R1 makes it possible to image the higher sIONP concentrations necessary for nanowarming.
Cytotoxicity
Although the expectation is that the majority of sIONPs will be washed out of the biological samples, complete removal of sIONPs is impossible, and the toxicity of residual sIONPs is a concern. Thus, the toxicity of EMG308 and sIONPs was tested over a wide range of Fe concentrations (0.1-10 mg Fe mL−1) on human dermal fibroblasts (HDFs). EMG308 reduced cell viability starting from 0.5 mg Fe mL−1, as shown in Figure 5. The toxicity of EMG308 might stem from a commercially added anionic surfactant and from the extensive uptake and association of EMG308 with the cells. In contrast, there was no statistically significant decline in cellular viability for sIONPs, even at the highest Fe concentration tested (Figure 5a). This is consistent with silica being biocompatible and with the PEG and the additional methyl groups leading to little or no cellular interaction, as reported for similar structures in the literature. [22] In summary, our biocompatibility tests demonstrated that sIONPs are nontoxic in HDF cells at exposure concentrations at least as high as 10 mg Fe mL−1 for 24 h.
Cellular Interaction
IONPs that show minimal cellular interactions are preferred in nanowarming to reduce the residual IONPs left within the organ after rewarming. Therefore, the associations between IONPs and cells were evaluated. sIONPs and EMG308 were incubated with HDFs for 24 h at 37 °C. After washing with Hank's balanced salt solution (HBSS) buffer five times to remove unassociated IONPs, the cells were collected by centrifugation and then embedded in resin for TEM imaging. The presence of EMG308 could be observed in the cell pellet by a dark brown coloration. The pellet from cells exposed to sIONPs had a slightly yellow color compared to the control, which was white. Ultrastructure images acquired with TEM (Figure 5c) show rare uptake and thus little sIONP association with cells. sIONPs were observed within the void spaces between the cells but did not attach to the cell membranes. High uptake and attachment to the cell membrane were observed in cells incubated with EMG308. This observation is consistent with cellular association comparisons made between EMG308 and msIONPs in LNCaP cells. [22] The cellular association of EMG308 and sIONPs was quantified by ICP-OES (Figure 5b). The small amount of iron detected in the sIONP-incubated cells (300 times less than with EMG308) is consistent with the TEM observations. The low cellular interaction of sIONPs shows promise that they will be easily washed out of organs after nanowarming.
[Figure 5 caption: a) Viability of HDFs after incubation with increasing concentrations of EMG308 and sIONPs; EMG308 was toxic to HDFs above 0.1 mg Fe mL−1, whereas sIONPs did not reduce HDF viability at any tested concentration. b) Fe quantification by ICP-OES of HDFs exposed to EMG308 and sIONPs for 24 h. c) TEM images of EMG308 association with HDFs, with red circles indicating EMG308; the zoom-in on the right shows EMG308 taken up by a cell. d) TEM image of a cell exposed to sIONPs; the zoom-in on the right shows a dark granule. ICP-OES and TEM both indicate that EMG308 associates extensively with HDFs, while sIONPs show minimal cellular association.]
Nanowarming of Cells
Nanowarming was first demonstrated with msIONPs in a cell system. [16b] We repeated the experiment with sIONPs to show equivalency. Cellular systems are advantageous for initial nanowarming tests because the toxicity of each nanowarming component (CPA, IONPs, cooling, and heating) can be evaluated separately. Therefore, initial measurements assessed the toxicity of the CPA with increasing concentrations of sIONP exposure (Figure 6a). [16b] The CPA was loaded by exposing HDFs to solutions of increasing CPA concentration and removed by exposing them to solutions of decreasing CPA concentration on ice (see the Experimental Section). The sIONPs were introduced to the cells together with the highest concentration of CPA and removed during the CPA unloading steps.
The viability of cells exposed to CPA and 10 mg Fe mL−1 sIONPs showed a slight decline relative to the negative control samples in culture medium (p < 0.01), while lower concentrations of sIONPs did not significantly affect cell viability. Next, the viability of cells recovered after vitrification (cooling at ≈7 °C min−1) and nanowarming (≈130 °C min−1) was tested. A recovery viability of 85.3% was achieved with sIONP nanowarming (Figure 6b), comparable to previously reported results using msIONPs (83.6%). [16b]
Loading and Washout of IONPs from Rat Kidneys
In order for nanowarming of organs to work, IONPs need to be loaded prior to cryopreservation and eventually washed out after rewarming. Here we provide a first demonstration of the ability to load and wash out IONPs in VS55 from an organ. First, rat kidneys were preloaded with VS55 in a stepwise manner (Euro-Collins, 18.7%, 25%, 50%, 75%, 100% VS55) [16b] at a constant flow rate of 3 mL min−1. Then, EMG308 or sIONP (10 mg Fe mL−1) suspensions in VS55 were perfused into the rat kidney through the infrarenal aorta and subsequently washed out with stepwise decreasing VS55 solutions.
The kidney, once fully loaded with sIONPs, was imaged by µCT, showing distribution in the major vessels and capillaries. For EMG308, agglomeration in the vasculature was indicated by the intense contrast throughout the kidney (Figure 7a). This agglomeration in the vasculature was also indicated by the high pressure (≈250 mm Hg) needed to overcome blockage of the vasculature, especially compared with the lower washout pressure (≈100 mm Hg) of sIONP-loaded kidneys (Figure S9, Supporting Information). Moreover, the EMG308-loaded kidneys showed higher iron content (0.0163 mg Fe mg−1 dry weight) than sIONP-loaded kidneys (0.0115 mg Fe mg−1 dry weight) by ICP-OES (Figure 7b). The washed-out kidneys were analyzed by MRI and ICP-OES to assess residual iron. In the case of EMG308, loaded kidneys were visibly darker than negative controls (i.e., unloaded control kidneys), and kidneys after washout were still visibly dark (Figure 7b). According to the ICP-OES results, 86% of the loaded EMG308 remained in the kidneys after washout and produced artifacts within the MRI due to the high iron concentration (Figure S10, Supporting Information). In contrast, the sIONP washed-out kidney was visually similar to the negative control (Figure 7a). However, T2-weighted MRI indicated the presence of sIONP residue post washout (Figure S10, Supporting Information). The ICP-OES results show 0.0020 mg Fe mg−1 kidney dry weight, which is within the detection limits of T2-weighted MRI based on our r2 measurements (Figure S8, Supporting Information). The iron concentration remaining in the sIONP-loaded kidney is an order of magnitude lower than that remaining in the EMG308-washout kidney (0.0140 mg Fe mg−1 dry weight). Although sIONPs cannot be completely removed from the kidney, in vivo studies in mice injected intravenously via the tail vein with msIONPs showed a slightly higher concentration 24 h postinjection (0.0024 mg Fe mg−1 kidney dry weight), which was tolerated without adverse reactions for over a month. [23a] From these observations, we hypothesize that the residual sIONPs within the kidney are unlikely to induce toxicity.
[Figure 7 caption: a) Micro-CT 3D projections of IONP-loaded kidneys corroborate aggregation for uncoated EMG308 (left) compared to sIONPs (right). b) ICP-OES results and corresponding photos of control, IONP-loaded, and IONP-washed-out kidneys, indicating that stable sIONPs can be washed out more effectively than uncoated IONPs, which aggregate during CPA perfusion; error bars are standard deviations.]
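As a quick back-of-the-envelope check on the ICP-OES numbers quoted above, the short calculation below is sketched under the assumption (not stated explicitly in the text) that the 86% retention figure is simply the ratio of post-washout to loaded iron content per milligram of dry kidney.

```python
# Worked arithmetic on the reported kidney iron contents (mg Fe per mg dry weight).
loaded_emg308, washed_emg308 = 0.0163, 0.0140
loaded_sionp, washed_sionp = 0.0115, 0.0020

print(f"EMG308 retained after washout: {washed_emg308 / loaded_emg308:.0%}")    # ~86%
print(f"sIONP residue vs EMG308 residue: {washed_sionp / washed_emg308:.2f}x")  # ~0.14x, i.e., roughly sevenfold lower
```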
Conclusion
With the aim of scaling nanowarming technology to organs, a new silica-coated iron oxide nanoparticle, the sIONP, was produced and tested against existing IONPs, particularly EMG308 and msIONPs, which were used for the first physical and biological demonstrations of nanowarming, respectively. Here we show that sIONP synthesis is simple, fast, and less labor-intensive than msIONP synthesis and eliminates the need for toxic surfactants. As a novel nanoparticle, the morphology and surface of the sIONPs were carefully characterized. We show that the sIONPs' heating and magnetic properties remain constant regardless of shell thickness. Further, the sIONPs are shown to be biocompatible, with minimal cellular interaction with HDFs. Finally, sIONPs and EMG308 in VS55 were perfused into rat kidneys and analyzed by µCT. The sIONPs distributed throughout the vasculature, while EMG308 showed severe agglomeration accompanied by perfusion pressure changes that indicate blockage. Further, more than 90% of the loaded sIONPs were removed during CPA washout, while the majority of EMG308 remained stuck in the vasculature. ICP-OES showed that the remaining amount of sIONPs was below the amount known to be tolerated in kidneys after in vivo administration in a separate longitudinal study.
Experimental Section
sIONP Synthesis: A prototype small-scale synthesis was conducted in a 150 mL Erlenmeyer flask using a magnetic stir bar for mixing. 0.6 g PVP10 was dissolved in water by sonication, then probe sonicated for 5 min. After adding 18 mg Fe of stock EMG308 to the PVP solution and ensuring that the total amount of water was 5.4 mL, the mixture was probe sonicated for 10 min. The EMG308-PVP solution was then added to a flask with 40 mL ethanol, and probe sonication was continued for another 10 min. 2 mL ammonia was added to the reaction while stirring. 0.5-8 mL TEOS was added to the solution while stirring at room temperature. 0.25 mL PEG silane was added after 1 h. 0.0375 mL TMS was added after another half hour. The reaction continued overnight to allow complete condensation. The resulting sIONPs were collected by ultracentrifugation at 30 000 rpm for 15 min and washed with ethanol, ethanol/water mixtures, and water for several cycles to remove unreacted reagents. The purified sIONPs were then redispersed in water and filtered to remove micrometer-sized impurities or aggregates.
sIONP Scale-Up: The scale-up synthesis was done in a 4 L reaction vessel with an overhead mechanical stirrer used for mixing. For sIONPs with an 18 nm silica shell, 48 g PVP10 was dissolved in water, and 1.440 g Fe of EMG308 was added to the previously probe-sonicated PVP10 solution (total water volume of 432 mL) and probe sonicated (Q500, Qsonica) for 45 min. The mixture was then added to 3.2 L ethanol and probe sonicated for another 45 min while stirring. 160 mL ammonia was added to the 4 L reaction vessel (LG-8082-104, Wilmad-LabGlass) while stirring with an overhead mechanical stirrer (OS20-S, Waverly). 80 mL of TEOS was added afterward while stirring. 20 mL PEG silane was added to the mixture after 1 h, and stirring continued. 3 mL TMS was added after another 30 min. After the reaction, the reaction solution was concentrated by rotary evaporation, and the sIONPs were collected and purified via repeated centrifugation.
sIONP Characterizations: sIONPs were characterized by DLS and zeta potential measurements on a Brookhaven ZetaPALS instrument (Brookhaven Instruments Corporation) with a 635 nm diode laser at 15 mW of power. TEM was performed on a Tecnai T12 transmission electron microscope (FEI, OR) operating at 120 kV. ICP-OES quantitation of iron was performed on a Thermo Scientific iCAP 6500 dual-view ICP-OES at 1150 W power. XPS was measured on a PHI 5000 VersaProbe III. IR was measured on a Thermo Scientific Nicolet iS50 FT-IR using the attenuated total reflection technique. Nitrogen adsorption-desorption measurements were performed on a Micromeritics ASAP 2020 surface area and porosity analyzer; the samples were degassed at 120 °C for 6 h prior to physisorption analyses. TGA was performed on a NETZSCH STA 409 PC Luxx system coupled with a NESLAB RTE-101 bath circulator. The sample chamber was purged with high-purity nitrogen (20 mL min−1) for 4 h prior to the analysis, and TGA was performed in nitrogen using a ramp rate of 10 °C min−1 from room temperature to 900 °C. A MicroMag vibrating sample magnetometer (Princeton Measurements Corporation) was used to measure the hysteresis loops of the IONPs in powder form at room temperature. The viscosity of VS55 was measured with an AR-G2 rheometer at 23 °C. The refractive index was measured following the procedure reported by An. [49]
sIONP Heating Experiment Setup: 1 mL of each sample (EMG308 or sIONPs in water or in VS55) was placed in a 1.75 mL Eppendorf tube and heated in a 1 kW Hotshot inductive heating system with a 2.75-turn, water-cooled copper coil (Ameritherm Inc., Scottsville, NY) at 360 kHz and 20 kA m−1. The SAR was calculated based on linear regression of the first 30 s of heating data (2-3 s lag time), with the heat generated by the container, water, or VS55 subtracted. [16a]
MRI Measurements: MRI measurements, including images and relaxation values, were performed with a 9.4 T, 31 cm bore MRI scanner (Agilent Technologies, Santa Clara, CA). All images were acquired with a volume transmit/receive coil having an inner diameter of 3 cm (Agilent Technologies, Santa Clara, CA). Relaxation rate measurement of IONPs was performed in 1% agarose, following previously established protocols. [23b] A multislice T2-weighted spin echo sequence was used to measure T2-weighted images and R2 maps. Each 2D image was acquired with a repetition time (TR) of 2.4 s, an echo time (TE) of 12 ms, an acquisition bandwidth of 50 kHz, an acquisition time of 2.56, a slice thickness of 5 mm, and a resolution of 417 × 417 µm. For R2 determination, six time points were acquired with TE spaced exponentially between 12 and 800 ms. All 2D images were reconstructed using VnmrJ version 3.2.
3D T 1 -weighted images and R 1 maps were acquired using a Look-Locker method together with a MultiBand (MB)-SWIFT sequence for readout. [23c,48] The MB-SWIFT flip angle was 1°, acquisition delay ≈2 µs, acquisition bandwidth = 384 kHz, TR = 1.2 ms, gaps = 2, N spiral = 32, and N v = 4096, voxel resolution = 195 × 195 × 781 µm, and total acquisition time ≈7 min. [24,29] The field-of-view was 50 × 50 × 200 mm 3 with image matrix size = 256 × 256 × 256 × 64 (x,y,z,t). 64 time points were spaced linearly from 39.8 to 4596 ms. MB-SWIFT images were reconstructed using an in-house program written in MATLAB (2012b). [50] For each voxel, the time points were fit to the exponential curve using a three-variable fit. [23c] The relaxation rate constant (R 1 or R 2 ) was determined with least-squares fitting. The region of interest assessed for each tube was approximated as a cuboid with dimensions 2.73 × 2.73 × 25 mm 3 . The relaxivity (r 1 or r 2 ) was determined by performing a linear least-squares fitting for the relaxation rates as a function of iron concentration.
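To make the final fitting step concrete, here is a minimal sketch, not the in-house MATLAB pipeline cited above, of how a relaxivity is obtained once per-sample relaxation rates are in hand: a linear least-squares fit of relaxation rate against iron concentration. The concentrations and rates below are fabricated placeholders that only illustrate the fitting step.

```python
# Relaxivity (r1 or r2) as the slope of relaxation rate versus iron concentration.
import numpy as np

fe_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # iron concentration (arbitrary units)
r1_rates = np.array([0.4, 0.9, 1.5, 2.6, 4.7])  # R1 = 1/T1 at each concentration (s^-1)

slope, intercept = np.polyfit(fe_conc, r1_rates, 1)   # linear least squares
print(f"r1 ~ {slope:.2f} s^-1 per concentration unit; R1 of the blank ~ {intercept:.2f} s^-1")
```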
Microcomputed Tomography (microCT) Measurements: microCT measurements were acquired on a Nikon XT H225 (Melville, NY). The images were reconstructed using CT Pro 3D (Nikon Metrology) and imported as unsigned 16-bit float images. The reconstruction was corrected for beam hardening and denoised (75% Hanning filter). 3D maximum intensity projections were created in ImageJ.
Cell Experiments: Human dermal fibroblasts (HDF, ATCC) were cultured in Dulbecco's modified Eagle medium (Gibco, Life Technologies) containing 10% fetal bovine serum (Gibco, Life Technologies) and 1% penicillin-streptomycin (Sigma) at 37 °C under 5% CO2. The nanoparticle toxicity test was conducted by incubating HDFs with various concentrations of EMG308 or sIONPs for 24 h, then evaluating the HDFs by Hoechst-PI assay. The cellular association experiments were done in T-75 flasks. HDFs were exposed to 1 mg Fe mL−1 EMG308 or sIONPs for 24 h at 37 °C in an incubator. The cells were rinsed with HBSS five times to remove free IONPs, then collected and analyzed by TEM and digested for ICP-OES. The CPA and sIONP exposure toxicity experiments were done by stepwise loading and unloading of VS55 and sIONPs (loading steps: Euro-Collins solution, 18.7% VS55, 25% VS55, 50% VS55, 75% VS55, 100% VS55 with 10 mg Fe mL−1 sIONPs; removal steps: 50% VS55, 18.7% VS55, Euro-Collins solution, cell culture medium) in 3 min steps. The cryopreserved HDFs were cultured in an individually cut well from a 96-well plate, which was placed in a 1 mL cryovial for cooling and rewarming. Cooling was conducted in a home-made multilayer cooler with liquid nitrogen vapor (cooling rate ≈7 °C min−1), and rewarming in a 1 kW RF coil at 360 kHz and 20 kA m−1 (rewarming rate ≈130 °C min−1). A Hoechst-PI assay was used to evaluate cell viability.
Kidney Experiments: All animal experiments were approved by the University of Minnesota Institutional Animal Care and Use Committee (IACUC). Male Lewis rats, 2-3 months old and weighing 200-250 g, had general anesthesia induced with 4% isoflurane in 1 L min−1 oxygen and maintained with 1.5% isoflurane in 0.9 L min−1 oxygen. The adequacy of anesthesia was confirmed by toe pinch reflex. The abdominal area was shaved and the abdominal skin disinfected with 70% ethanol solution. A long midline incision was made, and the abdominal organs were retracted to the left side of the abdominal cavity. The aorta and inferior vena cava (IVC) were mobilized from the distal bifurcation up to the level of the superior mesenteric artery (SMA). All branches of the IVC and aorta were ligated with 6-0 silk ties. Care was taken to protect the left renal artery and vein. Loose 3-0 silk ties were placed below the SMA (proximal aorta) and above the iliac bifurcation (distal aorta). 500 IU heparin was given via the dorsal penile vein. Two minutes later, the loose tie on the distal aorta was tightened. A 20G IV catheter was inserted into the distal aorta, secured with the 3-0 silk tie, and connected to a 30 mL syringe containing cold Euro-Collins solution. The proximal aorta was tied and the IVC was transected below the renal vein. The left kidney was perfused with 30 mL cold Euro-Collins solution. Once flushed, the left kidney was immediately excised, transferred into Euro-Collins solution on ice, and connected to a perfusion setup at a constant flow rate of 3 mL min−1. The perfusion pressure was monitored and recorded during the perfusion. VS55 was loaded in a stepwise manner (Euro-Collins, 18.7% VS55, 25% VS55, 50% VS55, 75% VS55, 100% VS55) at 4 °C, with each step lasting 15 min. After the VS55 loading, a 10 mg Fe mL−1 sIONP or freshly made EMG308 VS55 solution was loaded into the kidney at 1 mL min−1 for about 2-3 min or until the kidney turned black. The resulting loaded kidneys were then sutured and imaged by MRI and µCT within 72 h of perfusion. After imaging, the kidneys were dried in a vacuum oven at 130 °C overnight for ICP-OES measurements. For washout, 75% VS55, 50% VS55, 25% VS55, 18.7% VS55, and Euro-Collins were each perfused for 15 min to remove the IONPs and VS55. Kidneys after IONP washout were placed in Euro-Collins solution, imaged by MRI within 72 h of perfusion, and then freeze-dried for ICP-OES measurements.
ICP-OES Sample Preparation: IONP samples were digested in a 0.3 M ascorbic acid and 0.3 M HCl solution at 60 °C for 3 h. The biological samples were first dried and then ground to fine powders. The powder sample (≈45 mg) was predissolved overnight in a mixture of 0.6 mL concentrated HNO3 and 0.3 mL H2O2. The samples were then sealed in a 6 mL microwave digestion vessel placed inside a 60 mL microwave digestion vessel containing 10 mL H2O. The microwave digestion was performed in a domestic microwave at 50% power for 3 min, followed by cooling and pressure release, and then at 50% power for another 3 min followed by cooling. The digested solution was then diluted to 10 mL with 2% nitric acid. The resulting solution was then digested in the same manner as the IONP samples, in a 0.3 M ascorbic acid and 0.3 M HCl solution at 60 °C for 3 h.
Statistical Analysis: All physical measurements and viability experiments were repeated at least three times. Statistical significance is indicated with asterisks: *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001. Error bars are standard deviations. A one-way analysis of variance (ANOVA) with Tukey's multiple comparison test (GraphPad Prism, GraphPad Software, Inc.) was performed on the viability data. A two-tailed t-test was used for the analysis of the physical data.
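A hedged sketch of the equivalent analysis in open-source tools is shown below (the study itself used GraphPad Prism); the group labels and values are placeholders chosen only to illustrate the workflow of a one-way ANOVA with Tukey's post hoc test for viability data and a two-tailed t-test for physical data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([0.98, 0.97, 0.99])
emg308 = np.array([0.60, 0.55, 0.63])
sionp = np.array([0.96, 0.95, 0.97])

# One-way ANOVA across the three viability groups, then Tukey's pairwise comparisons.
f_stat, p_anova = stats.f_oneway(control, emg308, sionp)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_anova:.4f}")

values = np.concatenate([control, emg308, sionp])
labels = ["control"] * 3 + ["EMG308"] * 3 + ["sIONP"] * 3
print(pairwise_tukeyhsd(values, labels))

# Two-tailed t-test, as used for pairwise comparison of physical data.
print(stats.ttest_ind(emg308, sionp))
```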
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
"year": 2020,
"sha1": "5b445996a9936b79a310927b4e14eca6e27c7c98",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.201901624",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f474b6e8b6eaf017257857e2d5430ec344f347b",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Neural stem cell transplantation and mechanisms for functional recovery
Abbreviations: CNS, central nervous system; NSCs, neural stem cells; EGF, epidermal growth factor; FGF, fibroblast growth factor; BDNF, brain-derived neurotrophic factor; SVZ, subventricular zone; SGZ, subgranular zone; MSC, mesenchymal stem cells; BrdU, bromodeoxyuridine; MCM2, minichromosome maintenance protein 2; PH3, phospho-histone H3; PCNA, proliferating cell nuclear antigen; DCX, doublecortin; PSA-NCAM, polysialylated embryonic form of the neural cell adhesion molecule; NeuN, neuron-specific nuclear protein; MAP-2, microtubule-associated protein 2; GFP, green fluorescent protein; GFAP, glial fibrillary acidic protein; FGF-2, fibroblast growth factor 2; RMS, rostral migratory stream; VEGF, vascular endothelial growth factor; HB-EGF, heparin-binding epidermal growth factor; SCF, stem cell factor; SDF-1, stromal cell-derived factor-1; UPA, urokinase-type plasminogen activator; ECM, extracellular matrix; CXCR4, CXC chemokine receptor 4; MHC, major histocompatibility complex; IGF-1, insulin-like growth factor 1; GDNF, glial cell line-derived neurotrophic factor; LIF, leukemia inhibitory factor; NGF, nerve growth factor; CNF, ciliary neurotrophic factor; TSP, thrombospondins; TrkC, tropomyosin receptor kinase C; FasL, Fas ligand; Apo3L, Apo3 ligand; IFN-γ, interferon gamma; TNF, tumor necrosis factor; TRAIL, tumor necrosis factor-related apoptosis-inducing ligand; GITR, glucocorticoid-induced TNF receptor; STAT3, signal transducer and activator of transcription 3; PD, Parkinson's disease; AD, Alzheimer's disease
Introduction
It was not until the findings of Altman et al. 1 in the 1960s that the concept of adult neurogenesis in the mammalian brain began to be built upon and, with it, a new vision of the adult central nervous system's (CNS) ability for endogenous repair and intrinsic plasticity. Two specific neural niches that generate new neurons after development have been identified: the subventricular zone (SVZ), 2,3 which extends along the walls of the lateral ventricles and gives rise to a rostral migratory stream (RMS) towards the olfactory bulb, 4 and the subgranular zone (SGZ) of the dentate gyrus of the hippocampus. 5 From the SVZ, new granule and periglomerular interneurons are born, whereas the SGZ, whose activity is mainly local, generates new granule cells for the renewal of hippocampal circuits. Both zones are hypothesized to play an important role in brain function, cognitive flexibility and restoration after brain damage. 6,7 Much of what we now know in this regard was brought about by advances in microscopy and immunohistological techniques during the 1990s and early 2000s. It became possible to demonstrate the persistence of this phenomenon throughout life, albeit with a slight decrease at older ages, 8 its regulation by environmental and behavioral factors such as stress, 9 drugs 10 and exercise 11 (among others) and, very importantly, the existence of this process in humans as well. 12 During this time, methods began to emerge to isolate adult or embryonic neural stem cells by dissecting out adult or fetal tissue in order to culture, maintain, proliferate and differentiate them under specific artificial laboratory conditions (e.g., with growth factors such as EGF (epidermal growth factor) or FGF-2 (fibroblast growth factor)). 13 In vitro techniques such as neurospheres, monolayer cultures and oncogenetic modifications (to mention but a few), in conjunction with in vivo studies, gave way to important advances in understanding the biological aspects and the mechanisms involved in the regulation of these particular cells. They proved the functional and synaptic integration of newborn cells into preexisting circuits within the adult brain and highlighted the importance of adequate timing and a neurogenic microenvironment for differentiation and migration. 14-16 In the following years, greater interest was cast on the potential regenerative therapy they could represent. 17 Transplanting stem cells expanded in vitro into the developing or adult brain (grafting) provided better results than anticipated, not only by greatly contributing to our knowledge of CNS development, neuronal migration and the intrinsic and extrinsic factors involved in differentiation, 18 but also by becoming a medium for gene therapy and repair in many diseases characterized by brain damage and neurodegeneration. 17 Neural stem cells (NSCs) are cells with self-renewal capacity and, depending on their origin (embryonic or adult) and intrinsic factors, with pluri- or multipotent capability for giving rise to different CNS cell lineages as well as other tissues' cell types (contrary to what was thought for a long time), 19 hence their promising use. However, because they have been subjected to different protocols in order to manipulate them, the results have been very diverse and in some cases even counterproductive. 20
Notwithstanding, research in this field has progressed, and nowadays, although NSC cell therapy remains under intense scrutiny, its promising potential has attracted even more attention owing to accumulating evidence suggesting that its participation in restorative processes goes beyond the replacement of lost cells, 21 also encompassing immune modulation and neuroprotection. 22 The purpose of this article is therefore to review the mechanisms known so far to intervene in the graft-host interaction after NSC transplantation, both at the molecular and cellular levels, which consequently lead to behavioral effects at a larger scale. By dividing the discovered mechanisms into endogenous and exogenous, we hope to synthesize published research and provide the reader with a clearer, up-to-date and more integrative vision of what is known to happen after the introduction of these cells. For this purpose, we make a general introduction to the topic, addressing key issues and concepts, and subsequently embark on the issue at hand. This review does not aim to be exhaustive but rather to outline the different processes thought to occur within the brain, within the neuropathological environment, after exogenous NSC grafting, and their functional consequences.
Neural transplantation of neural stem cells
The clinical use of neural transplantation is currently being explored as a strategy to promote recovery and repair of brain dysfunction provoked by pathologies such as Huntington's disease, Alzheimer's disease, amyotrophic lateral sclerosis and Parkinson's disease. 23 Other neurodegenerative or traumatic conditions of the CNS, such as autosomal neurodegenerative disorders and spinal cord injury, follow next in the queue for clinical trials. Given this wide range of disorders, many stem cell types and their derived progenitors are at present being used in cellular replacement therapies; among these, NSCs represent one of the most suitable strategies for these conditions. Their effectiveness is associated with their link to CNS development. Their origin in the neuroectoderm provides them with a particular molecular composition: SOX1 and Oct4. SOX1 is a transcription factor that takes part in neural determination and differentiation during the early stages of the embryonic CNS and shows a synergistic interaction with other important transcription factors during neurogenesis, such as the class III POU factors. Together, they modulate the expression of nestin (a type VI intermediate filament protein of special importance in cell division) and are therefore of vital importance in the regulation of neural primordial cells. 24 Nestin and SOX1 are now recognized markers for NSCs in vitro and in vivo. 25,26 Another important POU-family transcription factor, Oct4, gives these cells the property of pluripotency and a high capacity for differentiation, 27 given its involvement in the self-renewal capability of undifferentiated embryonic stem cells. Stem cells are claimed to have the ability to develop phenotypes of different brain cell types (neuronal and glial) and, what is more, to migrate after grafting. 28,29 Some types of grafted cells have shown selective migration to lesion sites, which is a very convenient characteristic.
All of the aforementioned traits, in association with their capacity for secreting neurotrophic factors and modulating different mechanisms of regeneration and immune reactions, 30 have made them a very suitable choice. Today, the therapeutic use of neural stem cells covers not only "recovering or substituting cells lost to injury, disease or physiological turnover" but also includes a guided cell differentiation effect and the stimulation of host plasticity and its endogenous mechanisms of repair. This has rendered obsolete the initial idea of neural transplantation as a technique to "only replace dead neurons with implanted healthy neurons".
Preclinical studies using animal models have demonstrated positive behavioral effects of neural stem cell transplantation in pathologies with cholinergic dysfunction, 31-33 movement disorders, 34-36 stroke, 37,38 spinal cord injury, 39-41 and brain tumors, 42-44 to name a few. Clinical trials have also demonstrated symptomatic relief in humans who had undergone this cellular replacement therapy. 45,46 However, many challenging issues associated with the use of this therapeutic strategy still need to be resolved. The best strategy to control and regulate differentiation and cellular growth is yet to be established, together with ways to promote migration and to achieve effective functional integration within the host. 47
Sources for neural transplantation
Different sources of NSCs are currently in use; 48 which one is better or brings the greatest benefit is still a matter of great debate.
Despite the potential advantages of their use, many ethical and practical restrictions come with the translational clinical application of neural transplantation with human embryonic stem cells (blastocyst- and embryonic CNS-derived). On the one hand, in several countries the use of embryonic human tissue is forbidden or restricted, and numerous moral and religious principles usually impose firm opposition to the use of unborn human material, even for therapeutic ends. On the other hand, technical difficulties involved in the use of embryonic material (e.g., viability and hetero/homogeneity of donated material) represent serious problems for its widespread use. Beyond the problems of dissecting the embryonic material and keeping the cells alive, other issues such as the number of brains needed for each surgery, immunological rejection post-transplant and the propensity to form teratomas represent barriers to overcome. Stem cells obtained from adult brain niches such as the SGZ or SVZ have a proven capacity for plasticity 49 but are limited in availability, making it difficult to obtain sufficient numbers for clinical application.
Other, less ethically constrained sources (blood, fat, skin, bone marrow, umbilical cord, etc.) come with the additional requirement of differentiation towards the neural lineage, which seems possible through specific in vitro manipulations/protocols and in vivo neurogenic cues. 50 These adult mesenchymal stem cells (MSCs) provide a promising solution to overcome immunological drawbacks, as autologous transplantation becomes possible; they have themselves shown interesting immunomodulatory properties, 51 and, as recently shown by Feng et al., 52 neural stem cells can be produced from them. Still, further evidence of their true therapeutic potential and standardized protocols are required.
Another alternative involves the possibility of generating a stock of expandable cells. They could be studied and analyzed in the laboratory and be safely prepared to supply surgery at any time required, solving the issue of the limited number of donors. This is done by creating immortalized neural stem cell lines 53,54 via the introduction of an oncogene using a retroviral vector or via fusion with tumor cells. 55 The procedure enables the cells to proliferate in culture under specific artificial conditions and to differentiate into mature cells after implantation in the adult brain. The stumbling block: a reported tumorigenic potential and an unstable genotype.
All sources have been tested through different protocols and in different neurological conditions, yielding results of varied value; all possess advantages, disadvantages, and inherent technical and clinical challenges. Ongoing and future research will shed light on which alternative is most beneficial for specific pathological conditions.
Identifying and tracing NSC and their progeny
A crucial step in the success of stem cell therapy is the ability to visualize, identify and track these cells' fate both in vitro and in vivo. Traditional dyes, immunohistochemical techniques and division and phenotypic markers (thymidine analogs; antigens for antibody identification) are being complemented today by a series of genetic assays which primarily induce the expression of marker molecules in subclasses of stem cells and their progeny. 56 The formation of in vitro floating aggregates (neurospheres) after exposing dissociated NSCs to growth factors has provided a way to study, screen and select stem cells to be grafted. These three-dimensional structures contain both undifferentiated and differentiated cells, and subsequent marker identification and rigorous clonal and subclonal analysis should be carried out in order to assess NSC multipotency or self-renewal potential and, when selecting, to avoid picking a heterogeneous cell population for transplantation. 57 Once transplanted, it is necessary to be able to tell exogenous cells from endogenous ones. PKH26, a stable, long-term membrane-binding fluorescent dye long used for cell therapy tracking, 58 has been shown to transfer, both in vitro and in vivo, to non-labelled cells, 59 thus creating the chance of mistaking host cells for transplanted cells and obtaining a misleading survival assessment. Therefore, more reliable techniques are frequently used now. Among the nucleotide analogs, BrdU (bromodeoxyuridine) is the most used. Initially developed as a strategy to assess the proliferation index of tumors, it rapidly became a marker for neurogenesis in situ because of its properties. By integrating into the DNA during the S-phase of the cell cycle, BrdU came to be recognized as a marker for DNA synthesis, a process occurring mainly during cell division. It can be readily identified by an antibody directed against the DNA strand containing it, and through its co-labeling with other cell markers (see below) it has allowed phenotypic analysis. However, its related toxicity, its mitogenic, transcriptional and translational effects and, although to a lesser extent, the possibility of its incorporation through other cell processes (such as DNA repair, abortive cell cycle re-entry and gene duplication without cell division) have made it essential to design rigorous protocols to control its pulsing. This helps avoid misinterpreting repairing or dying cells as newborn ones. 60 From various studies over time, a natural temporal and spatial sequence of cell marker (antigen) expression throughout the division and differentiation of stem cells into mature cells is now recognized (Table 1); antibodies developed against these markers have allowed their identification and, with it, that of the cells they characterize. These are complemented by the generation of transgenic animals, through constitutive or induced (with tamoxifen or doxycycline) recombination (CreER lines), and by virus-transmitted marker genes that create indelible artificial fluorescent cell marks (green fluorescent protein (GFP), cyan fluorescent protein (CFP), DsRed, mCherry, and tdTomato, among others) of different colors, timing of expression or localization. 56 All these techniques complement each other.
None of these techniques is free of limitations, and none is completely suitable for all kinds of studies; it is up to researchers, familiar with each method's advantages, disadvantages and methodological limitations, to choose those of greatest value for their specific research objectives and to adjust the interpretation of their results accordingly.
Table 1 (partial). Cell markers used to identify NSCs and their progeny; each entry gives the marker (with its reference number), the type of molecule, and the cells it identifies.
DCX, doublecortin (161): microtubule-associated protein; migrating neuroblasts, persisting in young postmitotic neurons.
PSA-NCAM, polysialylated embryonic form of the neural cell adhesion molecule (162): homophilic binding glycoprotein.
Tuj-1 (163): neuron-specific class III beta-tubulin; early postmitotic and differentiated neurons and some mitotically active neuronal precursors.
NeuroD (164): transcription factor; early cells of the neuronal lineage, preceding PSA-NCAM.
[marker name not recovered] (165): migrating mitotic cells and early immature neurons, co-expressing with DCX and PSA-NCAM.
Vimentin (166): intermediate filament; radial glia and immature astrocytes.
Mature cells
NeuN, neuron-specific nuclear protein (167): nuclear protein; most neuronal cells.
Prox1 (168): transcription factor; postmitotic young and mature neuronal cells in the DG.
MAP-2, a and b isoforms (169): microtubule-associated protein; neuronal cells.
S100beta (166): calcium-binding protein; astrocytes.
GFAP, glial fibrillary acidic protein (170): intermediate filament; mature astrocytes, adult neural stem cells.
Mechanisms of graft action
The success of cellular replacement therapy in many traumatic and neurodegenerative diseases involves three independent mechanisms that need to be studied and understood more deeply: the survival of injected cells, their ability to migrate to the site of injury and influence the underlying microenvironment, and their capacity to be integrated into the host's neuronal networks. 61 Crosstalk between grafted cells and host cells is thought to mediate these mechanisms and thus the functional effects seen with this therapy.
It is well known that no single mechanism is primarily responsible for recovery; different levels of reorganization may occur in different graft paradigms, neural systems and time intervals, which might, in turn, exert an influence that might have been delayed in the absence of grafting. Some mechanisms, such as chronic secretion/release of neurochemicals into the neuropil as a response to grafting, or the reconstruction of the host brain circuitry as a process of self-repair and reciprocal reinnervation, might play a crucial role in graft effects, 62 which can range from deleterious to fully reconstructing and neuroprotective. Moreover, migration or other mechanisms involved in host-graft communication during the initial moments after grafting may trigger irreversible processes that promote plastic changes in the host brain and reconfigure its cell projections. Processes such as the formation of glial scaffolding for migration and differentiation guidance 63 and trophic factor release 64 would be of vital importance therein.
Thus, neural stem cells grafted into the damaged brain may promote the activation of one or more of these mechanisms and, at some stage, interact with the host cells to exert effects leading to functional recovery. As it is, only a glimpse of what actually happens is known, and further investigation on this matter will be of crucial importance in revealing the whole panorama, but it is meaningful to summarize and highlight key aspects of this process.
Although, when talking of intermediating action mechanisms, a conceptual division can be drawn (endogenous versus exogenous), the reader should bear in mind that, as they interplay in complex and reciprocal ways, a truly functional separation is more difficult to make. Table 2 summarizes this conceptual dichotomy and Figure 1 represents it schematically.
Endogenous mechanisms (host-dependent)
These are at play when the host exerts an action triggered by the implantation or when the host environment per se has an influence on exogenous stem cell function. This is crucial because, as we know, not just the genotype governs cell function; external cues also intervene and mediate important aspects of cell behavior, and therefore the host environment is of prominent relevance for cell therapy.
Environmental signals for differentiation and survival: Today it is clear that the neurogenic niche actively participates in stem cell maintenance, activation and differentiation. 65 Surrounding blood vessels, astrocytes, cerebrospinal fluid and a complex set of extracellular molecules secreted by the cells in the niche are the perfect set-up for a dynamic interaction to occur within it; here, cell-to-cell communication takes place and many diffused signals and molecules, soluble or embedded in the extracellular matrix and associated blood vessels, coact spatially and temporally to regulate stem cell biology. 66 This is reflected by the fact that in vitro-cultured stem cells respond to and are largely controlled by the neurogenic microenvironment of the host, where they might show a multipotency different from that observed when cultured in vitro, and vice versa.
Examples of this can be found. Stem cells obtained from the neocortex (a non-neurogenic region), where they only develop into glial cells, show neuronal and astrocytic lineage differentiation when exposed in vitro to growth factors such as FGF-2. 67,68 On the other hand, stem cells harvested from neurogenic zones show very limited neurogenesis when implanted into non-neurogenic locations. 69 Cord blood-derived NSCs respond to signals from an in vitro neural-like microenvironment, which promoted different phenotypes in accordance with different cell-to-cell intercommunications. 70 Human skin-derived AC133(+) stem cells, when engrafted in the mouse brain, showed neuronal and abundant astrocytic differentiation, 71 a phenomenon equally seen with specific neural progenitor types (Thy1- and Sca1-positive) derived from the murine bone marrow. 72 In vitro techniques in which skin-derived cells were exposed to a medium containing postnatal hippocampal-astrocyte-derived signals produced stable neuronal cells with preserved physiological responses, 73 highlighting the role that mature hippocampal astrocytes and their signals at the niche play in neurogenesis, neuronal fate and survival, and synaptogenesis. 74,75 In a 6-OHDA Parkinson disease model, light stimulation of endogenous (transgenic) and optogenetically modified astrocytes (co-transplanted with embryonic neural stem cells) in the substantia nigra elevated bFGF secretion and, with it, a significant increase in dopaminergic differentiation of the transplanted cells was observed; this translated into a functional improvement measured by the apomorphine-induced rotational behavior test. 76 Other factors present in the surrounding niche, such as vascular endothelial growth factor (VEGF), nitric oxide or brain-derived neurotrophic factor (BDNF), have also been shown to modulate neurogenesis. 77 Endothelial cells are crucial elements of this neurogenic stimulus; neurogenesis and angiogenesis have been tightly linked and influence each other reciprocally. As mentioned earlier, NSCs reside in a vascular niche where intimate contacts are made with neighboring endothelial cells; this close relationship has been held to be an important regulator of endogenous NSC proliferative and differentiative properties, together with the formation of new vessels. Therefore, it is not out of line to think that they may also exert an influence on the therapeutic effects of transplanted NSCs. Literature on the latter is still somewhat scarce; however, some studies conducted on the matter have pointed positively towards a role of endothelial cells in the maintenance of the self-renewal and pluripotency capacity of transplanted stem cells. A study conducted with a mouse model of stroke 79 found increased proliferation, survival and neuronal differentiation of stroke-induced NSCs when co-transplanted with endothelial cells. Within a cell culture, 79,80 these cells seemed to favor differentiation towards neuronal precursors (an effect greater than that observed with astrocyte co-cultures), which depended, apparently, on physical contact as much as on diffusible mediators such as VEGF, BDNF, FGF-2 or IGF-1. Furthermore, when co-transplanted with endothelial cells and astrocytes, the NSCs showed better performance at improving memory ability in the Morris water maze in a rat model of stroke. 
81 The metabolic, structural and trophic support these cells provide thus seems of decisive importance, as they can promote the survival of the transplanted cells as well as their migration and neuronal differentiation within the sites of lesion.
Further evidence that local factors are determinant in cell fate comes from the fact that, independently of the brain site from which stem cells were harvested, the phenotype of their mature cells mirrors that of the target region of implantation and not that of the origin. For example, stem cells from embryonic forebrain regions gave rise to glial and neuronal cells with specific morphologic features of the targeted zones (striatum, hippocampus or cortex). 82 Likewise, migration, axonal projection and connectivity of newborn cells are also driven by regional factors in the developing brain. Cells from grafts in the striatum send projections along the internal capsule down to the mesencephalon, whereas those from SVZ grafts do so through the RMS. Hippocampal grafts show local migration with integration into neighboring circuits. 29 Such biological features favor functional incorporation into host circuitries. 83,84 Newly generated cells are indistinguishable from resident cells and, surprisingly, recent studies have shown that some NSCs can even fuse with local mature elements to form hybrids at a rate which varies between different cell subtypes, 85 a phenomenon which may also depend on microglial co-presence and multiple ligand and molecular intercellular interactions. 86 Migration to places of injury: An important role of the niche is ensuring the right balance between quiescent and activated stem cells under specific tissue conditions (maintenance, injury, disease), so that, when needed, stem cells can be recruited, activated and mobilized to the sites where they are most functionally needed. 87 The capacity of stem cells to migrate through the parenchyma across great distances to places of injury in the brain is of major value for stem cell therapy 88 and has been associated with processes of neuroinflammation, reactive gliosis and angiogenesis occurring at the injury site. The complex molecular mechanisms governing this mobilization and homing are still being elucidated, and only the tip of the iceberg is known.
After a lesion, astrocytes, microglia and immune cells are activated (reactive gliosis); this leads to a change in cell-to-cell communication and ECM conformation and, with them, in the signals that control different biological responses such as regeneration and plasticity. 89 Different chemokines, adhesion molecules and growth factors play a chemoattractant and regulatory role for stem cells, both during development and postnatally under damaging conditions; 90 in some cases, components that resemble stem cell niches are re-expressed, and endothelial cells involved in processes of local angiogenesis interact with and modulate NSC behavior. In this way, the conditions for their migration, survival and differentiation are set. 89 It is worth mentioning that each type of CNS injury will alter ECM composition in a very particular way, and so the array of signals the transplanted NSCs will encounter when implanted will be disease-specific as well. This might account for the varied therapeutic results across different pathologies; to what extent is still uncertain.
Examples of this abound. In models of cerebral ischemia, FGF-2, heparin-binding epidermal growth factor-like growth factor (HB-EGF) and the chemokine stem cell factor (SCF), through its c-kit receptor, seem to be some of the trophic factors which stimulate neurogenesis in this condition. 91,92 Hypoxia also mediates NSC attraction to brain tumors, where they could be used as a tumor-targeted drug delivery therapy, 43,44 by up-regulating stromal cell-derived factor-1 (SDF-1), urokinase-type plasminogen activator (uPA), and VEGF, 93 factors also implicated in processes of angiogenesis during tumor growth and invasion. 94 Other proteins (laminin and tenascin-C) in the tumor-produced extracellular matrix (ECM) are also highly permissive for NSC migration. 95,96 The inflammatory chemoattractant SDF-1, through its CXC chemokine receptor 4 (CXCR4), also enhances proliferation and promotes migration and transmigration of quiescent NSCs towards an infarcted zone 97 and of transplanted NSCs towards both spiral ganglion neuron-degenerated cochlear and immune-mediated demyelination microenvironments. 98 Intravascularly delivered NSCs have been able to reach ischemic zones within the brain via transendothelial recruitment mediated by the chemokine CCL2 and its receptor CCR2 99 and by the endothelial adhesion molecules ICAM-1 and VCAM-1. 100 This evidence supports the premise that the site of injury, through the creation of an environment with a chemoattractive gradient, plays a vital part in NSC recruitment and further modulatory exogenous actions. As said before, each pathological disruption will be characterized by a particular ECM composition and lesion microenvironment, and different molecular and biological interactions will occur in singular ways. Characterizing these microenvironments and their interactions would be critical for a further understanding of NSC properties and therapy in each particular case. Immune system modulation: The tight relationship between NSCs and the immune system is no longer a presumption; more and more evidence piles up supporting the mutual regulatory system these cells engage in. Having this in mind, it is no surprise that factors released by immune cells have an impact on NSC behavior and self-renewal capacity, phenotyping and progeny survival; in fact, as addressed before, many inflammatory signals serve as NSC attractants, and immunodeficient mice have shown impaired neurogenesis. This effect can be either positive or detrimental at different stages of the response and patterns of cell activation, as succinctly reviewed by Kokaia et al. 101 By expressing toll-like receptors, NSCs have the potential to respond to different inflammatory soluble signals and change their fate.
Pro-inflammatory Th1 cytokines (IL-1 alpha, IL-6, TNF-alpha, IFN-gamma), predominant in bacterial infections or ischemic environments for example, are mostly associated with negative effects on proliferation and neurogenesis, whereas Th2-related cytokines (IL-4, IL-10) have the opposite effect. IL-4, for instance, facilitates microglial activation and, with it, proneurogenic factors are secreted. Neurotrophic factors and interleukins secreted by microglia (e.g. insulin-like growth factor (IGF)-1, BDNF, IL-15) also show a protective profile. 102 Conversely, as shown in a murine model of cortical ischemia, glucocorticoid-induced TNF receptor (GITR) activation on CD4+ T cells leads to a stronger inflammatory response at the site of lesion with a consequent apoptosis-related reduction in the number of endogenous stem cells/progenitors recruited. 103 In some cases, such as multiple sclerosis, oligodendrogenesis is of therapeutic value; high quantities of IFN-gamma have shown detrimental effects on this process, whereas low doses, in conjunction with IL-4 signaling, have proven to promote it. 101 Kinin-B2 (bradykinin) is also involved in NSC differentiation; it favors neuronal lineage over glial lineage differentiation and has displayed anti-inflammatory effects (through astrocytic action and decreased TNF-alpha production by activated microglia) within the brain at later stages of ischemic injury. 104 Rejection process: The brain, traditionally considered an immunologically privileged site for transplantation, 105,106 is no longer held to be that privileged. Despite not exhibiting frequent full-blown graft rejection processes, some rejection-like reactions have been reported. In a subset of patients with Huntington's disease who were implanted with fetal grafts, biological signs of alloimmunisation to donor antigens (anti-HLA antibodies) were observed. 107 In a rat model of Parkinson disease, xenotransplanted neural progenitor cells elicited a coordinated immune response in the host at different time points, which extended up to 60 days post-transplantation. 108 The understanding of the rejection processes involved in neural transplantation can provide fundamental insights into the handling of foreign antigens in the brain and the most efficient way of using neural stem cells to obtain better results. It is addressed here because it is a host response that might influence NSC therapy results to some extent. As an example, a modified dopaminergic neuronal xenograft lacking major histocompatibility complex (MHC) molecules survived in vivo for a longer period when compared to a wild-type xenograft. 109 The major source of graft rejection is the recognition of the MHC I and II alloantigens present on the surface of cells. Acute or immediate, as well as chronic or long-term, rejection reactions are mediated by CD8-positive and CD4-positive T-lymphocytes, which recognize these antigens on exogenous cells. Other minor H antigens can trigger this rejection response too, albeit at a slower pace. 110 Although stem cells and their differentiated progeny have been reported to express MHC antigens after exposure to inflammatory cytokines, 111 these stem cells display low levels of immunogenicity 112 and require, if at all, only temporary immunosuppressive co-treatment when transplanted. 113 What seems curious is the fact that, albeit not generating a strong immunological rejection, the host immune system is indeed aware of the grafted cells. 
This is shown by the formation of aggregates of mononuclear cells surrounding grafted cells near blood vessels, 113 by the fact that certain brain inflammatory conditions can lead to the expression of different immunostimulatory antigens on the NSCs 114 and by an elevated expression of IL-1β, IL-4, and IL-6 in response to transplantation. 115 The functional reason behind these perivascular clusters is still elusive, but digging deeper into this might reveal further interesting facts about the immunomodulatory properties (see below) and tropism of NSCs. Inherent technical and traumatic variables during grafting have also been evaluated as possible sources of rejection; none has shown bullet-proof evidence. 113 As such, there is still no consensus regarding immunosuppression. 116 Some studies have pointed out longer cell survival with different protocols (cyclosporine alone or in combination with methylprednisolone) in phylogenetically discordant transplants, 117,118 with detection of transplanted cells up to the fourth month post-transplantation, and a study blocking IL-6 production reduced immune recruitment and promoted neuronal differentiation in a similar fashion to cyclosporine A. 115 Some other studies have failed to prove a beneficial effect. [119][120][121] It is worth considering that some studies have found that immunosuppressive medications could alter neural precursor proliferation capacity, 122,123 an effect that, although it failed to be replicated in an in vivo model, 124 deserves further research.
Exogenous mechanisms (graft-dependent)
These are at play when the implanted cells have an action of their own. 64 The actions mediated by these cells surpass the cell-turnover function, long held as the prime recovery mechanism, and now incorporate other bystander mechanisms of neuroprotection, immunomodulation and neuroplasticity. Through them, the NSCs prevent further tissue damage, rescue degenerating host cells and influence revascularization and processes of neuroregeneration.
Immunomodulation and neuroprotection: As seen before, immune and stem cells engage in a rich dialogue and form a close network that persists into adulthood. 104 This close relationship mediates the NSCs' protective (to some extent) and immunomodulatory effects seen with this therapy, which ultimately safeguard the brain from inflammatory damage. All CNS pathological disorders are related, to different degrees, to a particular inflammatory process; hence NSCs, by expressing and secreting different factors that affect immune behavior and repair systems, will ultimately modulate different dysfunctional mechanisms and potentially guarantee their own survival and functional integration into the host neural circuitry.
Many researchers [125][126][127] have acknowledged that NSCs have a direct action on the immune system by participating in the immunosuppression of macrophages, dendritic cells and T cell activation and proliferation. In vitro, this suppression has been related to nitric oxide and prostaglandin E2 production. 128 Additionally, the release of growth factors such as neurotrophin 3 (which modulates myelination and development of the CNS) participates in the regulation of the Th1/Th2 balance (through its tropomyosin receptor kinase C (TrkC)) and in processes of neuroprotection, remyelination and neuronal replacement. 129,130 The finding of perivascular cuffs of undifferentiated NSCs, reactive astrocytes, endothelial cells and T cells after intravenous injection of NSCs in a model of chronic neuroinflammation, and the proapoptotic action of these surrounded NSCs on blood-derived Th1 cells, points towards a potential major putative therapeutic mechanism of NSCs in chronic neuroinflammatory diseases such as multiple sclerosis. 131,132 NSC expression of death ligands (FasL, TRAIL and Apo3L) and the secretion of factors (nitric oxide, IFN-γ, glial cell line-derived neurotrophic factor (GDNF) and leukaemia inhibitory factor (LIF)) have been held responsible for this. 101 The process of antigen presentation, and thus T-cell activation, also seems impaired by means of LIF production. In an experimental model of autoimmune encephalomyelitis, exogenous NSCs injected subcutaneously hindered the activation of antigen-presenting dendritic cells in lymph nodes and thus that of T-cells. 127 Release of IL-4 and IL-10, and the participation of the latter in remyelination processes, 133,134 also partake of immune regulation and neuroregeneration by the NSCs. In late phases of spinal cord lesion, transplanted cells have provoked a shift in the cytokine profile and reduced inflammatory cell recruitment. In fact, NSCs implanted in a severely contused spinal cord during the subacute phase stayed undifferentiated and established modulatory contacts with peri-lesional phagocytic cells, which led to a change in the local inflammatory cell repertoire and improvement of motor function. Models of ischemia have also shown down-regulation of markers of inflammation, glial scar formation and neuronal apoptotic death through NSC influence at the gene level. 135 Horie et al. 136 identified that the transplantation of NSCs grown as neurospheres elevates the release of VEGF in animal models, a factor involved in neovascularization and perfusion in stroke, leading to consequent functional recovery. 136,137 In fact, transplanted embryonic NSCs have been shown to protect endothelial cells against ischemia-related death 80 by means of, both in vitro and in vivo, VEGF-related vasculotrophism and downstream activation of the phosphatidylinositol 3-kinase (PI3 kinase)/Akt pathway, data that support a bidirectional influence of these two cell types in physiological and pathological conditions. Interestingly, interleukin 6 (a proinflammatory cytokine involved in the pathogenesis of several neurological disorders and associated with lower NSC proliferation) proved to be neuroprotective in the ischemic brain after reperfusion via restoring the activity of STAT3. This signal transducer and activator of transcription 3 promotes gene expression during angiogenesis and reperfusion after an ischemic event. 
138,139 Neurotrophic activity and circuit rewiring: The constitutive capacity of stem cells to produce neurotrophic factors and other molecules characteristic of the developmental stage is well known. The fact that this trophic expression changes with differentiation and environmental conditions 140 makes its full characterization in vivo quite complex. Exogenous stem cells modify injury-mediated trophic expression, 141 which is also specific to the underlying pathological process.
In vitro, NSCs promoted axonal outgrowth and showed a protective effect against glutamate-mediated excitotoxic damage through the secretion of GDNF, nerve growth factor (NGF), BDNF and neurotrophin 3, 142,143 factors also involved, in conjunction with ciliary neurotrophic factor (CNTF), in host axonal growth and functional improvements in models of spinal cord injury and neurodegenerative diseases. 144,145 After NSC survival was improved by conferring antioxidative properties on them, the paracrine factors they released (VEGF, GDNF) in an animal model of intracerebral hemorrhage increased the survival of neurons within the striatum, reducing its atrophic deterioration. 146 In many conditions, the capacity to rescue neurons in peril and reshape surviving circuits is paramount. Embryonic NSCs and the immortalized neural progenitor cell line RN33B, when grafted in the neonatal hippocampus and/or cortex, produced regional-like pyramidal neurons which exhibited normal electrophysiological properties and made functional connections with appropriate neighboring and contralateral regions, integrating into the host circuitry. 83,84 NSCs, through the secretion of VEGF, thrombospondins (TSP) 1 and 2 (normally secreted by immature astrocytes during development to promote synaptogenesis) and SLIT (important in axon guidance and cell migration), were able to rescue axonal transport and generate axon sprouting in a model of cerebral ischemia. The subsequent rewiring from non-lesioned places coincided with functional recovery. 147 Similarly, hippocampal NSCs transplanted 2 days post-stroke in a photothrombotic mouse model reduced the infarct size by a little over 15% and provided functional recovery on the rotarod test and in limb strength; neuronal differentiation nearly one month post-transplant was confirmed by immunocytochemistry analyses. 37 Differentiation towards the neuronal lineage within the ischemic region was also seen in a middle cerebral artery occlusion and reperfusion rat model. 38 In a toxic animal model of Parkinson disease (PD), functional improvements were seen following the implantation within the striatum of olfactory bulb-derived stem cells, 34 of which 50% showed neuronal differentiation at 8 weeks post-transplant; these cells have obtained similar results in a model of Alzheimer disease when transplanted into the lesioned hippocampus. 32 Likewise, beneficial effects were obtained by favoring the dopaminergic differentiation of implanted NSCs within the substantia nigra. 35,76 In models of AD, immortalized MHP36 stem cells implanted at the lesioned basal forebrain and hippocampus improved performance on spatial learning and memory tasks following their implantation, subsequent migration to, mainly, the striatum, thalamus and basal forebrain regions, and final neuronal differentiation. 148 NSCs modified to over-express the gene coding for the enzyme choline acetyltransferase differentiated into neurons after migration to the hippocampus and striatum, and helped to recover the learning and memory deficits seen in AD. 31 A last example of neuronal replacement comes from a model of spinal cord injury, 149 in which functional recovery was linked to the presence of transplanted NSCs and their neuronal differentiation and posterior formation of synaptic connections with host neurons.
Endogenous neurogenesis boosting:
After an injury, endogenous stem cells can be recruited in order to compensate for tissue loss, a compensation whose regenerative potential is tissue-specific and age-dependent. Unfortunately, in most cases, this backup system is insufficient to restore function completely. 150 A continuous aggression and progressive cell loss could affect, both directly and indirectly, the endogenous NSC niche and, with it, the whole restorative capacity. If the niche can affect transplanted cell behavior, the opposite is also possible. The use of exogenous NSCs comes in as a therapy with the cellular ability to recover and potentiate this endogenous repair mechanism. Studies in animal models of Alzheimer's disease (AD), spinal cord injury, stroke and Parkinson disease contribute some evidence in this respect. In a mouse model of AD, human NSCs transplanted bilaterally within the hippocampus produced an improvement in spatial memory that was related to an increase in the number of DCX-positive cells in the dentate gyrus, 151 a stimulating effect on endogenous neurogenesis also seen when they were transplanted in the lower spinal cord. 152 A rat model of stroke also showed a higher number of proliferating cells and migrating neuroblasts in the SVZ in the group with intrastriatal grafts when compared to that receiving a vehicle, even at 14 weeks post-transplantation. 153 The trophic and modulatory mechanisms reviewed previously may not only occur at lesioned sites; they can reach neurogenic niches to exert modulation. The multiple secreted neurotrophic and regulatory factors would be the intermediaries of this boosting capacity. [153][154][155][156][157][158][159][160][161][162][163][164][165][166][167][168][169][170]
Conclusion
Therapies with NSCs hold great potential for many neuropathological conditions. Neural stem cells appear naturally endowed with the appropriate machinery required to express an otherwise silent genomic potential in response to an appropriate pattern of stimulation. The long-believed prime mechanism of cell substitution is slowly backing away from the spotlight to give way to new exciting bystander mechanisms by which neural transplantation could prompt functional recovery. In order for this therapy to be successfully translated into the clinical setting, a full understanding of stem cell properties and of the complexity of their interaction, upon transplantation, with the pathological cellular and molecular microenvironment they encounter is needed.
The mechanisms involved in a successful neural transplantation are multivariate; the exogenous and endogenous mechanisms reviewed here interplay in manifold and heterogeneous ways. This host-graft reciprocal modulation impacts not only the differentiation, migration and survival of the implanted cells, but also mediates to a great extent the neuroprotective, immunomodulatory, neurogenic and neuroplasticity-fostering effects they are now recognized to possess. Partially conflicting in vitro and in vivo data on the role of milieu-NSC transplant communication warrant further research on the topic. More so if we believe that different signaling molecules could act at completely opposite ends based on the particular pathological microenvironment they are embedded in. This insight will allow NSC-based therapies to flourish in the new era. | 2019-04-01T13:12:25.731Z | 2016-03-24T00:00:00.000 | {
"year": 2016,
"sha1": "af41664a9a297e85d56dde56cc2d5d563e5db8cc",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.15406/jsrt.2016.01.00012",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "79175401b84d4b69c8493b4e48b108a09a39efea",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
110388880 | pes2o/s2orc | v3-fos-license | DAQ Development for Silicon-On-Insulator Pixel detectors
We are developing a DAQ for Si-pixel detectors using Silicon-On-Insulator (SOI) technology. This DAQ consists of firmware that works on the SEABAS (Soi EvAluation BoArd with Sitcp) DAQ board and software that works on a PC. We have been working on the development of the firmware and software, and we have now succeeded in speeding up the readout (~90 Hz) and adding a function for frame rate control. This is the report of our development work for the high-speed DAQ system.
SOI pixel detectors are being developed by the SOIPIX collaboration led by KEK. They are based on a 0.2 um CMOS fully-depleted (FD-) SOI process of Lapis Semiconductor Co., Ltd [1]. A detector's structure image is shown in Fig 1. An SOI detector consists of a thick, high-resistivity Si substrate for the sensing part and a thin Si layer for CMOS circuits [1]. An SOI detector has no bump bonding; therefore the device has low capacitance, low noise, high gain, and a low material budget, and it can run fast with low power. For examination of the DAQ system, we used an integration-type SOI pixel detector named INTPIX4 [2]. The pixel size is 17 µm square, the number of pixels is 832 × 512, and the sensitive area is 14.1×8.7 mm 2 . This detector consists of 13 blocks (64 × 512 pixels / block) and each block has an independent analog output channel for parallel readout. A photo of INTPIX4 is shown in Fig 2. INTPIX4 is read out through the board called SEABAS 2 (Soi EvAluation BoArd with Sitcp 2 [3]). SEABAS 2 is the 2nd generation of the SEABAS board; Fig 3 is a photo of SEABAS 2. This board has FPGAs for Gigabit Ethernet and a user circuit, and a 12-bit ADC to convert the detector's analog output. When SEABAS2 transfers INTPIX4's output after ADC conversion, the total data size is 6,815,744 bit (ADC 12 bit + padding 4 bit = 16 bit per pixel) per frame. The DAQ system consists of firmware that works on the SEABAS2 user-circuit FPGA and software that works on a PC (shown in Fig 4). The PC and SEABAS2 are connected by Gigabit Ethernet and communicate via the TCP/UDP protocol. The schema of the SEABAS DAQ is shown in Fig 5. An SOI pixel sensor outputs a large quantity of data at a time; Fig 6 is a simple schema of the readout from the sensor. For example, the data size is 6,815,744 bit/frame in the case of INTPIX4. Therefore, fast data transfer is important for quick and high-resolution measurement, and if the DAQ is used for temporal response measurements, the stability of the transfer rate is also important.
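To put these numbers in perspective, a rough back-of-the-envelope estimate (not from the original text, and ignoring TCP/IP and SiTCP protocol overhead) of the raw per-frame data volume and the link-limited frame-rate ceiling over Gigabit Ethernet is:

$$ 832 \times 512~\text{pixels} \times 16~\text{bit/pixel} = 6{,}815{,}744~\text{bit/frame} \approx 0.85~\text{MB/frame} $$
$$ f_{\max} \lesssim \frac{10^{9}~\text{bit/s}}{6{,}815{,}744~\text{bit/frame}} \approx 147~\text{frames/s} $$

Under that assumption, the ~90 Hz readout reported here already sits within roughly a factor of two of what the raw link could sustain.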
Methods
Our approach to a high-speed and stable-transfer DAQ consists of three points in the software and firmware, described below.

Abstraction and Hierarchization

"Abstraction and Hierarchization" is one of the refinement approaches in the software region. The schema of this refinement is shown in Fig 7. In the existing system, all functions are controlled from the GUI directly, and much of the function code is written without being classified. This situation causes confusion in software development, such as "spaghetti code". Therefore, we refined this software structure. We separated function code not directly related to the GUI, and the function code was classified and concentrated into several classes and functions under the DaqEnv class. Each class is as self-contained as possible. DaqEnv is the abstraction class that absorbs all environment differences; the GUI accesses every function via the DaqEnv class.
This refinement gives two merits.
• Easy to add or delete a function: we only have to fix the related class's code; other code does not need to be touched.
• Easy to apply to any SOI sensor.
To apply this software to another sensor, only a few parts of the code need to be modified.

Multi Thread (MT) Processing

"Multi Thread Processing" is the other refinement approach in the software region. The schema of this refinement is shown in Fig 8. In the existing system, the software works on single-thread processing. In this case, the data taking job (taking data from SEABAS) and the data store job are sequential, so the next data taking job has to wait for the previous data store job. As a result, the whole DAQ efficiency is reduced.
Therefore, we refined this software structure by using multi-thread processing. To implement multi-threading, we use the WIN32API [4] and Posix Threads [5].
The refined software's structure consists of two threads: one is the data taking job thread and the other is the data store job thread. These threads work in parallel, so data taking does not have to wait for the other job. To pass data between the two threads, a "First In, First Out" (FIFO) buffer based on std::list [6] is used. This refinement makes it possible to take data at the maximum speed. From measured values, the data transfer rate of the refined software is three times as fast as that of the existing one.
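As an illustration of this two-thread design, the following is a minimal sketch of a producer-consumer pair sharing a std::list-based FIFO. It is not the actual SEABAS software (which uses the WIN32API and Posix Threads directly); standard C++11 threads and the dummy readFrameFromSeabas()/writeFrameToDisk() functions are stand-ins for the real readout and storage code.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <list>
#include <mutex>
#include <thread>
#include <vector>

using Frame = std::vector<std::uint16_t>;     // one INTPIX4 frame, 16 bit per pixel

std::list<Frame> fifo;                        // FIFO buffer shared by the two threads
std::mutex fifoMutex;
std::condition_variable fifoNotEmpty;
std::atomic<bool> running{true};

// Placeholders for the real readout/storage code (assumptions, not the SEABAS API).
Frame readFrameFromSeabas() {
    std::this_thread::sleep_for(std::chrono::milliseconds(11));  // emulate ~90 Hz readout
    return Frame(832 * 512, 0);                                  // dummy INTPIX4-sized frame
}
void writeFrameToDisk(const Frame&) { /* file output in the real system */ }

// Data taking job thread: read frames from the board and push them into the FIFO.
void dataTakingJob() {
    while (running) {
        Frame f = readFrameFromSeabas();
        {
            std::lock_guard<std::mutex> lock(fifoMutex);
            fifo.push_back(std::move(f));     // enqueue at the tail
        }
        fifoNotEmpty.notify_one();
    }
}

// Data store job thread: pop frames from the FIFO and store them, in parallel.
void dataStoreJob() {
    for (;;) {
        std::unique_lock<std::mutex> lock(fifoMutex);
        fifoNotEmpty.wait(lock, [] { return !fifo.empty() || !running; });
        if (fifo.empty()) { if (!running) break; continue; }
        Frame f = std::move(fifo.front());    // dequeue from the head (FIFO order)
        fifo.pop_front();
        lock.unlock();                        // storing does not block the readout
        writeFrameToDisk(f);
    }
}

int main() {
    std::thread taker(dataTakingJob), storer(dataStoreJob);
    std::this_thread::sleep_for(std::chrono::seconds(1));        // run for a short while
    running = false;
    fifoNotEmpty.notify_all();
    taker.join();
    storer.join();
}
```

The key point of the design is visible in dataStoreJob(): the FIFO lock is released before the (slow) store operation, so the readout thread can keep filling the buffer at full speed.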
Trigger rate control (TRC)

"Trigger rate control" is the refinement approach in the firmware region. The schema of this refinement is shown in Fig 9. In the existing system, frame data from the sensor are taken one after another, without rate control. Because of this, the frame rate stability is sometimes broken when the data transfer is delayed. To solve this problem, a flow control function is required.
Figure 9: The schema of the Trigger rate control function.
Therefore, we implemented a low-frequency clock, called the "Trigger Control Clock" (TCC), in the firmware. The role of this clock is rate control.
The flow of the data-taking sequence with trigger rate control is as follows.
1. When the TCC rises while the system is waiting for a trigger, data taking starts. 2. While data are being taken and transferred, issuing of the next trigger is postponed. 3. After the end of the data transfer, the DAQ system returns to the state of waiting for the next trigger. 4. Steps 1-3 are repeated until all frames are taken.
When the TCC is set lower than the maximum data transfer rate, the frame rate is stabilized at a fixed rate (synchronized to the TCC).
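A minimal software model of this gating logic, written here as plain C++ rather than the actual FPGA firmware (the function names and the "full speed" flag are illustrative assumptions, not the real SEABAS interface), looks like this:

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-in for steps 1-3 of the sequence above
// (trigger, read out, transfer one frame).
void takeAndTransferOneFrame() {
    std::this_thread::sleep_for(std::chrono::milliseconds(11));  // toy "readout time"
}

// Take nFrames frames, issuing at most one trigger per TCC period.
// tccHz <= 0 means "Full Speed": no rate control.
void runWithTriggerRateControl(int nFrames, double tccHz) {
    using clock = std::chrono::steady_clock;
    const bool controlled = (tccHz > 0.0);
    const auto period = controlled
        ? std::chrono::duration_cast<clock::duration>(std::chrono::duration<double>(1.0 / tccHz))
        : clock::duration::zero();

    auto nextTick = clock::now();
    for (int i = 0; i < nFrames; ++i) {
        if (controlled) {
            std::this_thread::sleep_until(nextTick);   // wait for the next TCC edge
            nextTick += period;                        // schedule the following slot
        }
        takeAndTransferOneFrame();                     // a new trigger cannot be issued
                                                       // until this returns (step 2)
    }
}

int main() { runWithTriggerRateControl(100, 25.0); }   // e.g. the 25 Hz preset
```

Setting the TCC below the uncontrolled maximum, as with the 10-75 Hz presets tested below, pins the frame rate to the clock instead of letting it float with the transfer time.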
New DAQ Software

We developed the new DAQ software, refined from the previous software; the "Abstraction and Hierarchization" and "Multi Thread Processing" functions described above were implemented. This software uses several libraries: Qt 5.4, OpenCV 2.4.11 and picojson. Recent versions of Windows (Vista, 7, 8, 8.1, both 32 bit and 64 bit) are supported; Linux will be supported soon.
The software's GUI was completely redeveloped based on the Qt library. This GUI is compatible with the previous GUI, so function code can be transplanted between the new GUI and the old one.
The new software's working image is shown in Fig 10.
X-ray Imaging
These are the results of X-ray imaging data taken with the new DAQ. The specification of the DAQ PC is shown in Table 1. Fig 11 shows X-ray imaging data of a dried anchovy, and Fig 12 shows X-ray imaging data of a red pepper. We can clearly see the structure of the samples. When taking these data, the frame rate was 65 Hz, which is almost 94% of the estimated maximum, and we confirmed that the new DAQ can take data correctly.
Trigger rate control Test
We tested the trigger rate control function. The DAQ PC is the same PC as for the X-ray imaging.
The content of the test is to take a movie of a blinking LED (2.4 Hz, duty 56%) and count the frames in one blink period. When taking data from the sensor, the TRC function was enabled with 4 preset TCC clocks (10, 25, 50, 75 Hz; "Full Speed" is uncontrolled). The frame count is related to the real frame rate, so if the TRC works correctly, the frame count will change depending on the selected clock. Fig 13 shows the result of the test checking the relation between the selected clock and the real frame rate. The figure's horizontal axis is the selected clock rate and the vertical axis is the frame count.
We can see a good relation between the frame count in one blink period and the selected frame rate.
Conclusions
• We developed DAQ firmware and software for high-speed and stable readout. • We applied Abstraction and Hierarchization to the DAQ software.
• We implemented Multi Thread Processing in the DAQ software.
• The new DAQ is capable of high-speed data taking (~90 Hz; 94% of the estimated maximum in the X-ray imaging test). • We implemented Trigger rate control in the new DAQ, and this function seems to work correctly. • We confirmed that the new DAQ can take data correctly. | 2015-07-17T12:24:11.000Z | 2015-07-17T00:00:00.000 | {
"year": 2015,
"sha1": "8566291cd0bb2a8263b108721743666909cd3428",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8566291cd0bb2a8263b108721743666909cd3428",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
1291297 | pes2o/s2orc | v3-fos-license | Integrative Computational and Experimental Approaches to Establish a Post-Myocardial Infarction Knowledge Map
Vast research efforts have been devoted to providing clinical diagnostic markers of myocardial infarction (MI), leading to over one million abstracts associated with “MI” and “Cardiovascular Diseases” in PubMed. Accumulation of the research results imposed a challenge to integrate and interpret these results. To address this problem and better understand how the left ventricle (LV) remodels post-MI at both the molecular and cellular levels, we propose here an integrative framework that couples computational methods and experimental data. We selected an initial set of MI-related proteins from published human studies and constructed an MI-specific protein-protein-interaction network (MIPIN). Structural and functional analysis of the MIPIN showed that the post-MI LV exhibited increased representation of proteins involved in transcriptional activity, inflammatory response, and extracellular matrix (ECM) remodeling. Known plasma or serum expression changes of the MIPIN proteins in patients with MI were acquired by data mining of the PubMed and UniProt knowledgebase, and served as a training set to predict unlabeled MIPIN protein changes post-MI. The predictions were validated with published results in PubMed, suggesting prognosticative capability of the MIPIN. Further, we established the first knowledge map related to the post-MI response, providing a major step towards enhancing our understanding of molecular interactions specific to MI and linking the molecular interaction, cellular responses, and biological processes to quantify LV remodeling.
Author Summary

Heart attack, known medically as myocardial infarction, often occurs as a result of a partial shortage of blood supply to a portion of the heart, leading to the death of heart muscle cells. Following myocardial infarction, complications might arise, including arrhythmia, myocardial rupture, left ventricular dysfunction, and heart failure. Although myocardial infarction can be quickly diagnosed using a variety of tests, including blood tests and electrocardiography, there have been no available prognostic tests to predict the long-term outcome in response to myocardial infarction. Here, we present a framework to analyze how the left ventricle responds to myocardial infarction by combining the protein interactome and experimental results retrieved from published human studies. The framework organized current understanding of molecular interactions specific to myocardial infarction, cellular responses, and biological processes to quantify the left ventricular remodeling process. Specifically, our knowledge map showed that transcriptional activity, inflammatory response, and extracellular matrix remodeling are the main functional themes post myocardial infarction. In addition, text analytics of relevant abstracts revealed differentiated protein expressions in plasma or serum from patients with myocardial infarction. Using these data, we predicted expression levels of other proteins following myocardial infarction.

Introduction
Myocardial infarction (MI) is a prominent cause of mortality and morbidity worldwide [1]. MI is defined as the death of cardiac myocytes due to prolonged ischemia. As a result of myonecrosis, molecules from injured myocytes are discharged into the blood circulation, and the list of injury markers includes myoglobin, cardiac troponins T and I, creatine kinase-MB, and lactate dehydrogenase [2]. Molecular interactions within the myocardium activate a cascade of cellular responses, including a robust inflammatory response. The cellular responses within the LV are integrated by the extracellular matrix stimuli that bind to surface receptors. As such, the ECM coordinates the healing response to MI [3,4,5,6,7,8].
Through the last 4 decades, there have been tremendous research efforts towards understanding the immediate myocyte response to ischemia, with the goal of identifying diagnostic indicators as well as targets to preserve myocyte viability. These have resulted in the implementation of several therapeutic strategies, including reperfusion and the use of angiotensin converting enzyme inhibitors [9,10]. Currently, 30-day post-MI survival rates approach 90%, and the immediate prognosis is excellent for those patients who receive timely and effective treatment. The number of patients who will go on to develop congestive heart failure, in part as a consequence of this success, however, has increased. While much is known about the events that occur immediately before and after MI, much remains to be mechanistically elucidated regarding the effects of MI on long-term survival. A knowledge map that explores the regulatory relationship among ECM, cellular responses, and biological pathways post-MI is still lacking.
Over a million abstracts can be retrieved from PubMed using a keyword search for [''myocardial infarction'' or ''cardiovascular diseases''], and massive amounts of genomic and proteomic data and molecular profiles have been deposited in public databases [11,12,13]. High-throughput protein microarrays have provided efficient procedures to investigate and measure a vast number of protein-ligand interactions in a single experiment. Protein-protein interaction network (PPI) analysis using large-scale databases has been one of the most promising computational approaches to integrate experimental data at the molecular and cellular levels [14,15,16,17]. Due to the growing availability of such large-scale datasets, PPIs have been applied to analyze numerous human diseases including lung cancer, breast cancer, and myocardial infarction [18,19,20].
The reported data, which have largely been obtained with different experimental conditions, protocols, species, and research teams, are embedded in the literature and distributed in disparate databases. The ability to integrate data from such heterogeneous resources will allow us to extract relevant information and identify knowledge gaps to direct future research efforts. To address these challenges, we report here an integrative computational approach, including compiling an MI-specific PPI database through mining PubMed and UniProt, to establish a knowledge map for LV remodeling post-MI [21,22]. This MI-related knowledge map is the first major step towards enhancing our understanding of molecular interactions specific to MI and linking the molecular interactions, cellular responses, and biological pathways.
Results
The MI-specific protein-protein interaction network (MIPIN) is strongly connected

MI-related proteins were first obtained from the Online Mendelian Inheritance in Man (OMIM) database, PubMed Gene, and PubMed Protein databases by using ''myocardial infarction'' as the keyword and further refined by our cardiac clinicians (RAL and RJC) and cardiac biologist (MLL), producing a list of 38 seed proteins for humans [23]. With these seed proteins and their interacting proteins, we constructed an MI-specific PPI network with a total of 613 proteins (vertices) and an associated 4443 interactions (or edges) (Figure 1A). Detailed procedures to establish the MIPIN are provided in the Methods.
We observed that the MIPIN was strongly connected, in that there was always a path between any two proteins in the MIPIN. Of the 613 proteins, 70 proteins had only 1 or 2 edges, 121 had 3 to 5 edges, and the rest had >5 edges. The degree distribution of MIPIN closely followed a power law distribution (Kolmogorov-Smirnov test, p-value = 0.97, see Methods for details), where the degree of a vertex in a network was defined as the number of direct links incident upon that vertex (Figure 1B). The power law distribution indicated that the MIPIN was a scale-free network, which displayed robustness against disruptive failures of random vertices [24].
Figure 1. The histogram shows that the degree distribution of MIPIN followed a power law function, indicating that MIPIN is a scale-free network robust to disturbance. The degree ranged from 1 to 366, with polyubiquitin-C being an outlier with the highest degree and not included in the plot. doi:10.1371/journal.pcbi.1003472.g001
We performed two statistical tests to evaluate the specificity of the MIPIN. First, interactions were shuffled based on the Erdos-Renyi model, such that the 100,000 randomly generated networks each had 613 vertices and 4443 edges, the same numbers as the MIPIN [25]. Compared to the Erdos-Renyi model of random networks, the MIPIN had a lower average value of betweenness centrality while having higher average values of closeness centrality, clustering coefficient, and eccentricity (empirical p-value < 0.001), indicating that proteins in the MIPIN were much more closely related to each other than would occur by random chance, and these proteins might have functional relevance.
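As an illustration of how such an empirical test can be set up (this is not the authors' code; the mean clustering coefficient stands in here for whichever network statistic is being tested, and the observed value and trial count below are placeholders), a compact C++ sketch is:

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <set>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;

// Mean local clustering coefficient of a simple undirected graph
// (vertices with degree < 2 contribute 0).
double meanClustering(int n, const std::vector<Edge>& edges) {
    std::vector<std::set<int>> adj(n);
    for (auto [u, v] : edges) { adj[u].insert(v); adj[v].insert(u); }
    double total = 0.0;
    for (int v = 0; v < n; ++v) {
        const int k = static_cast<int>(adj[v].size());
        if (k < 2) continue;
        int links = 0;                                  // edges among neighbours of v
        for (int a : adj[v])
            for (int b : adj[v])
                if (a < b && adj[a].count(b)) ++links;
        total += 2.0 * links / (k * (k - 1.0));
    }
    return total / n;
}

// Erdos-Renyi G(n, m): m distinct edges drawn uniformly at random.
std::vector<Edge> randomGnm(int n, int m, std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, n - 1);
    std::set<Edge> chosen;
    while (static_cast<int>(chosen.size()) < m) {
        int u = pick(rng), v = pick(rng);
        if (u == v) continue;
        chosen.insert({std::min(u, v), std::max(u, v)});
    }
    return {chosen.begin(), chosen.end()};
}

// One-sided empirical p-value: how often a random network scores at least
// as high as the observed network on the chosen statistic.
double empiricalPValue(int n, int m, double observed, int trials, std::mt19937& rng) {
    int atLeast = 0;
    for (int t = 0; t < trials; ++t)
        if (meanClustering(n, randomGnm(n, m, rng)) >= observed) ++atLeast;
    return static_cast<double>(atLeast + 1) / (trials + 1);   // add-one correction
}

int main() {
    std::mt19937 rng(42);
    const int n = 613, m = 4443;          // MIPIN size
    const double observed = 0.3;          // placeholder; would be computed on the real MIPIN
    std::cout << empiricalPValue(n, m, observed, 1000, rng) << "\n";
}
```

With 100,000 random networks, as in the study, an empirical p-value below 0.001 simply means that essentially none of the random networks matched or exceeded the observed statistic.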
In the second, more stringent statistical test, we randomly picked the same number of seed proteins (n = 38) from 14969 human proteins and created 100,000 random networks in the same manner in which we constructed the MIPIN. Each random network had a different number of vertices and edges. Compared to the randomly generated networks, the MIPIN had higher mean values of closeness centrality and eccentricity (empirical p-value < 0.05) and displayed a distinct distribution of closeness centrality (Figure 2A). We observed a Gaussian-like distribution for closeness centrality in the MIPIN, while the closeness centrality distribution in the random networks resembled the Delta function, with few vertices having very low values of closeness centrality, regardless of their number of vertices and edges ( Figure S1). We also noticed that the vertices within a small range of degrees in the MIPIN had a larger variance of closeness centrality ( Figure 2B), while the closeness centrality remained fairly constant with an increasing number of direct interactions in the random networks ( Figure S2). Figure 2B shows that vertices in the first group [26] displayed substantial differences in closeness centrality with small changes of degree (the natural logarithm of closeness centrality of the red group had a variance of 8.25×10^-3). On the other hand, as the degree of a vertex increased, the closeness centrality exhibited minor variation (the natural logarithm of closeness centrality of the green group had a variance of 1.37×10^-3).
Figure 2. Vertices having from 1 to 5 degrees displayed substantial differences in closeness centrality (red); on the other hand, as the degree of vertices increased, closeness centrality exhibited minor variation (green). These graphs demonstrate the clear differences between MIPIN and random networks (see also Figure S1 and S2). doi:10.1371/journal.pcbi.1003472.g002
The overall structure of the MIPIN demonstrated that it was a strongly-connected and scale-free network, indicating that we captured a solid network of protein interactions from the human PPI that was highly specific. Further statistical tests allowed us to evaluate the significance of several MIPIN network properties, including betweenness centrality, closeness centrality, clustering coefficient, and eccentricity. The larger mean values of closeness centrality and eccentricity in MIPIN indicated that the randomly generated networks had more orphan sub-networks, in contrast to the single strongly-connected MI network, suggesting that proteins in MIPIN were significantly more closely related to each other and had more specific functions than would occur by random chance.
Proteins in the MIPIN are localized primarily in the extracellular matrix regions and plasma membrane

The localization of MIPIN proteins was determined using Gene Ontology (GO) enrichment analysis by DAVID [27,28]. GO is a controlled vocabulary of terms that characterizes gene products in terms of their cellular components, biological processes, and molecular functions in a hierarchical structure from the most general to more specialized terms. The cellular components ontology describes locations at the levels of subcellular structures and macromolecular complexes. We focused on classification by cellular components to provide suggestions on the underlying physiological protein functions.
More than 65% of the seed proteins were localized in the extracellular region, including vascular endothelial growth factor (VEGF), transforming growth factor beta-1 (TGFb1), and tissue inhibitor of metalloproteinase-1 (TIMP1) (Figure 3). VEGF, TGFb, and TIMP1 were also localized to platelet alpha-granules, which have been known to play an important role in thrombosis, hemostasis, inflammation, atherosclerosis, wound healing, and angiogenesis [29]. In addition, VEGF, TGFb, and TIMP1 were localized to the ECM, cell surface, and cytoplasmic membrane-bounded vesicle lumens in many cell types, suggesting active roles in multiple pathologies. A list of GO cellular components of the seed proteins is shown in Table S1.
The inclusion of interacting partners of seed proteins in the MIPIN allows us to explore additional potential biomarkers for the MI response. These proteins added 57 cellular components to the initial 19 locations (Figure 4). In addition to the extracellular region, the plasma membrane and cytosol were two preferred sites for most of the proteins in the MIPIN. We also identified a number of macromolecular complexes, including the TGFb receptor complex, interleukin-1 (IL1) receptor complex, death-inducing signaling complex, origin recognition complex, lipopolysaccharide receptor complex, fibrinogen complex, integrin complex, and transcription factor complex. These complexes strongly suggest the presence of an inflammatory response. The signaling pathway of the lipopolysaccharide receptor complex has been linked to activation and deactivation of macrophages, a major cell type responding to inflammation, by lipopolysaccharide [30]. Activated macrophages secrete many different inflammatory cytokines, including IL1 and TGFb. The IL1 receptor complex and TGFb receptor complex are essential factors in the inflammatory response post-MI [31,32].
Transcription activity, ECM remodeling, and inflammatory response are main functional themes of the MIPIN

We found 993 enriched GO biological process terms associated with MIPIN using DAVID. To glean functional insight from the large number of enriched GO biological process terms, we adapted a method from Louie et al. to extract the most meaningful biological processes, in terms of specificity [33]. In the GO structural hierarchy, the biological processes can be traversed from the root/parent node (GO:0008150: ''biological process'') to narrower and more specific definitions in the child nodes, such as from the parent node ''regulation of blood coagulation'' to its child terms: ''positive regulation of blood coagulation'' and ''negative regulation of blood coagulation''.
The function specificity for the GO terms was evaluated based on four measures: number of ancestor terms, offspring score, proportion of terms, and information content. Higher values of these measures indicate higher specificity. A broader, more general term has fewer ancestor terms and more offspring when compared to a narrower, more specific definition. The broadest term, ''biological process'', had no ancestors, since it is the root node in the biological process branch and the parent of all other GO biological process terms. The offspring score for a GO term was calculated based on the number of offspring of a node, such that a higher score represents a more specific function. GO proportion described the ratio between the numbers of ancestor and offspring terms, with 0 indicating non-specific and 1 indicating the highest specificity. In addition, we considered the probability of observing a GO term, because more specific terms annotate a smaller number of genes and were thus less likely to be found enriched in a dataset. Information content (IC) was a normalized score of this probability such that the root node has an IC of 0, and more specific terms have higher IC.
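The text does not give explicit formulas, but one common way to formalize the last two measures, stated here as an assumption consistent with the ranges described rather than as the authors' exact definitions, is

$$ \mathrm{proportion}(t) = \frac{N_{\mathrm{anc}}(t)}{N_{\mathrm{anc}}(t) + N_{\mathrm{off}}(t)}, \qquad \mathrm{IC}(t) = -\log p(t), $$

where $N_{\mathrm{anc}}(t)$ and $N_{\mathrm{off}}(t)$ are the numbers of ancestor and offspring terms of GO term $t$, and $p(t)$ is the probability of observing $t$ in the annotation corpus; the root term, with $p = 1$ and no ancestors, then scores 0 on both measures, while rare, deeply nested terms score close to 1 and high, respectively.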
We obtained very different distributions of the 993 biological process GO terms for each of these measures (Figure 5). The number of ancestors followed a power-law distribution, while information content followed a Gaussian-like distribution. These four evaluations illustrated that only a small number of the 993 GO terms were specific. Among the most specific GO terms with regard to the number of ancestors, the top 20 terms were related to kinase and transcriptional activities, suggesting significant signaling in the MIPIN (Table S2). We obtained 80 enriched GO terms that had only one offspring in the GO dataset while the offspring of the 80 GO terms were not enriched (Table S3). These 80 GO terms were the most specific biological processes we could identify for MIPIN. These terms also emphasized the role of kinase signaling, cell apoptosis/necrosis, migration, differentiation, cell-matrix adhesion, ECM remodeling, and inflammatory response. The top 20 terms by GO proportion highlighted the significance of kinase activity and inflammatory responses (Table S4). The top 20 biological processes with the highest IC score highlighted inflammatory and immune responses (Table S5). The top two terms in the IC list were ''negative regulation of L-glutamate transport'' (p-value < 0.01) and ''regulation of L-glutamate transport'' (p-value < 0.05). Currently, there are very few studies on the role of L-glutamate post-MI. Lofgren et al. found that L-glutamate provides cardioprotection in the same manner as classical ischemic preconditioning [34].
We listed the most significant GO biological process terms based on the four specificity measures and noticed that transcription activity, response to inflammation, and ECM remodeling accounted for the most significant processes (p-value < 0.0001, Table 1). ''Positive regulation of JUN kinase activity'' (p-value < 0.01) had the highest GO proportion of 0.987, the largest number of ancestors (81) and only one child term, and a relatively high IC score of 7.96; therefore, we identified it as one of the most enriched GO terms in the MIPIN. ''Positive regulation of interleukin-6 biosynthetic process'' and ''positive regulation of interleukin-12 biosynthetic process'' (p-value < 0.005) ranked among the top GO terms with the highest number of ancestors, GO proportion and IC score. These two processes represent the inflammatory response post-MI. Additionally, three other inflammatory functions, ''activation of plasma proteins involved in acute inflammatory response'', ''connective tissue replacement involved in inflammatory response wound healing'' and ''wound healing involved in inflammatory response'' (p-value < 0.0001), were ranked high in the top 20 IC list, further confirming the importance of the inflammatory response post-MI. These pathways are also important for wound healing. Together with the collagen fibril organization and cell-matrix adhesion GO terms, we identified ECM remodeling as another key component post-MI.
Integrating experimental results to predict protein expressions post-MI with the MIPIN

Based on GO biological process information and the MIPIN structure, we predicted protein expression levels in the MIPIN and validated them against published results obtained from MI patient data. We automatically text-mined plasma and serum protein expression levels in post-MI patients reported in articles published between Jan 1, 2005 and May 31, 2013. We chose plasma and serum measurements here to facilitate future clinical studies. Abstracts studying the association of MI with diabetes, or coronary artery disease without MI, or protein concentrations measured after percutaneous coronary intervention post-MI, were not considered. R and Java programs were written to perform XML parsing and text mining on relevant PubMed abstracts (see Methods). From a total of 4326 abstracts, we obtained 21 highly confident up-regulated proteins and 1 down-regulated protein (Adiponectin), each with expression results confirmed by at least 2 citations (Table S6).
We used a semi-supervised learning method to predict expression changes in other proteins in the network. With the available expression levels of the 22 "labeled" proteins as the training set, we predicted 14 up-regulated proteins (Table 2). To validate the computational predictions, we examined the literature reported from 1990 to the present and found that 11 of the 14 predicted proteins have supporting experimental evidence. Stromelysin-1 (matrix metalloproteinase-3 [MMP3]), neutrophil elastase (also known as human leukocyte elastase, HLE), thrombospondin-1 (TSP1), and fibronectin [35] increased in plasma from patients post-MI [36,37,38,39,40]. In mouse models of MI, CD44 increased in the LV by 6 hours, C-C motif chemokine 7 (CCL7) increased in ischemic myocardium after 24 hours, and ELAV-like protein 1 [41] increased, as did matrilysin (MMP7) [42,43,44,45]. Inhibition of collagen XVIII (COIA1) was found to impair LV remodeling and heart failure in a rat MI model [46]. While there were no available expression data on complement factor H (CFAH) and matrix metalloproteinase-17 (MMP17) in plasma from patients post-MI, the CFAH polymorphism Y402H has been inversely associated with the risk of coronary heart disease (CHD) among women but not men, and MMP17 was found to be overexpressed in atherosclerotic vessels [47,48]. We did not find any information regarding TIMP3, TNF-receptor associated factor 6 (TRAF6), and brevican core protein (PGCB) in the setting of MI in either human or animal studies, although TIMP3 was down-regulated in patients with ischemic cardiomyopathy (ICM) and dilated cardiomyopathy (DCM) [49]. Further experimental measurements of these proteins are needed to validate our predictions post-MI.
The interactions among the 36 proteins are shown in Figure 6. All 14 predicted proteins and 22 labeled proteins are well connected, except two labeled proteins (ADIPO and ANFB). Since Adiponectin (ADIPO) was the only down-regulated protein post-MI, we did not have sufficient evidence to predict other down-regulated proteins. Also, we could not use natriuretic peptide B (ANFB, also known by its gene name BNP) to predict any proteins because none of its direct neighbors were connected to proteins with known quantifications, resulting in low predictive confidence.
Establishment of MI knowledge map
Although the GO biological process revealed the overall underlying molecular functions, it could not capture the regulatory dynamics and dependencies required to completely describe a pathway. To have a better understanding of MI pathology, we examined the 613 proteins in the MIPIN and found 48 highly enriched pathways from Biocarta (http://biocarta.com/; Figure 7). These pathways covered broad categories, including adhesion, apoptosis, cell activation, cell cycle regulation, cell signaling, cytokines/chemokines, developmental biology, expression, hematopoiesis, and immunology.
We clustered the 48 enriched Biocarta pathways with respect to their Kappa similarity matrix into 10 functional groups including 4 groups of Kinases Pathways, Angiogenesis, Hypoxia, Acute MI, 2 groups of Inflammatory Responses, LV Remodeling, and other Signaling Pathways (Figure 8).
Each row and column in Figure 8 represented an enriched Biocarta pathway for the MIPIN; the sequence of pathways is the same in the rows and columns, running from top to bottom. Each cell in the figure represented the intersection between a row and a column, and the color of a cell represented the similarity between the two pathways. The color legend denoted the similarity between two pathways, with red representing high similarity and light colors representing low similarity. The strongest similarity was the self-similarity, and the deepest red blocks were located on the diagonal of this symmetric figure.
The acute MI group (block AMI) shared high similarity within the block and relatively low similarity with only two pathways: h_sppaPathway in the angiogenesis block (block A) and h_p53hypoxiaPathway in the hypoxia block (block H). h_sppaPathway denoted "aspirin blocks signaling pathway involved in platelet activation" and h_p53hypoxiaPathway denoted the role of p53 and hypoxia in the cardiovascular system. Interestingly, checking the color of the intersections of h_p53hypoxiaPathway and h_sppaPathway showed that the similarity between these two pathways was very low, suggesting that they share no proteins in common and could independently contribute to acute MI.
The kinases (KP) and signaling pathway (SP) blocks shared high similarity with more pathways in general, since they transmit spatial signals that trigger pathways related to cellular functions; this was illustrated by the appearance of light yellow boxes in the rows/columns representing the KP and SP blocks. Specifically, kinases pathway blocks KP1 and KP2 and the signal transduction pathway block SP were closely related to inflammatory response block IR1. Kinases pathway block KP3 was closely related to hypoxia block H. Kinases pathway block KP4 was closely related to the angiogenesis block. As an example, platelet activation (h_sppaPathway) was one of the pathways that shared similarity with the largest number of pathways in Figure 8. It shared higher similarity with kinases pathway block KP1 and low similarity with inflammatory response block IR1 (as shown in the 3rd column from the right or the 3rd row from the bottom). Meanwhile, KP1 and IR1 shared high similarity, suggesting a cause-effect cascade from platelet activation through kinases pathway KP1 to inflammatory response IR1. The platelet activation pathway also shared high similarity with KP3, KP4, and the angiogenesis (A) block, suggesting a possible regulation between platelet activation and angiogenesis.
Although there was no specific pathway named LV remodeling in Biocarta, we defined the Inhibition of Matrix Metalloproteinases pathway (h_reckPathway) as part of LV remodeling in our knowledge map since this pathway is closely related to ECM degradation. Biocarta lists 9 proteins in this pathway, including MMP-2, MMP-9, TIMP-1, -2, -3, -4, reversion-inducing cysteine-rich protein with Kazal motifs (RECK) and v-Ha-ras Harvey rat sarcoma viral oncogene homolog (RAS), all of which were included in our MIPIN. This pathway did not show high similarity with any other pathways in Figure 8, though it showed low similarity with h_pmlPathway in the KP1 block, h_bcrPathway and h_pyk2Pathway in the KP4 block, and 7 pathways in the angiogenesis block, suggesting possible regulation among LV remodeling, inflammatory response, and angiogenesis. To better understand Figure 8, the pathways clustered in each functional group were listed in Table 3, and 160 proteins with specific regulatory relationships in each functional group were listed in Table S8. This forms the basic knowledge map for the MI response that links proteins to specific pathways and functional groups. Combining functional information for all 613 potential MI-related proteins extracted by the MIPIN, including cellular components, biological processes, and specific pathways, we established the knowledge map for MI (Figure S3). Essentially, the knowledge map summarizes important spatial and temporal aspects of the static MIPIN; it describes the progression of MI and the involvement of different proteins in three major phases: development of MI (hypoxia and acute MI), response to MI (signaling pathways, kinases pathways, and inflammatory responses), and tissue remodeling (left ventricle remodeling and angiogenesis). (Table 3: Biological processes enriched with the clustered pathways using the Kappa similarity matrix.)
Discussion
The goal of this study was to establish a framework to 1) automatically extract the information embedded in MI-related PubMed abstracts and reported data through a PPI network, 2) integrate the information into a knowledge map for MI response, and 3) cluster proteins in the knowledge map based on their functions. In this study, we started from the seed proteins for MI and PPI databases at molecular level, extended to cellular components of the proteins at cellular level, and further mapped the information to functional responses and specific pathways to illustrate a complete framework that integrates molecular, cellular, and functional analysis.
There are three major contributions of this study. First, we established an MI-specific PPI network and confirmed its specificity with two different statistical analyses. We predicted the expression levels of 14 proteins in the MIPIN based on the up/down regulation of 22 proteins, and the predicted protein expressions from the computational analyses agreed well with reported experimental measurements. Second, we illustrated the importance of inflammatory and ECM remodeling responses in LV remodeling post-MI. Most proteins in the MIPIN were localized primarily in the extracellular regions and the plasma membrane. Additionally, transcription activity, ECM remodeling, and inflammatory response were the main functional themes of the MIPIN. In fact, almost half of the 22 highly confident proteins were inflammatory or extracellular proteins, demonstrating that these two processes are crucial in determining the outcome of MI. Third, we established the first knowledge map for the MI response based on the clustered pathways, constructed by integrating knowledge obtained from molecular, cellular, and functional factors via PPI, cellular components, biological processes and pathways. In addition, the knowledge map illustrated the temporal response from the development of MI to tissue remodeling and the related proteins at each stage. The approach used to establish the knowledge map for MI could also be applied to other diseases.
Our results illustrated that using the structural properties of the PPI network is a promising technique for distinguishing function-specific networks from random networks. However, an individual structural property alone may not be sufficient to identify significant markers. Degree centrality provides an independent evaluation of the direct links of a vertex. Intuitively, a hub protein with a higher degree may represent a significant marker; however, this cannot be confirmed with current clinical practice. For example, cardiac troponin I (cTnI) is a well-known biomarker for MI, but cTnI has a degree of only 3 in our network [50]. Additionally, MMP9 and TIMP1 have been reported as key regulators of LV remodeling post-MI in a number of publications; MMP9 had a degree of 36 and TIMP1 a degree of 12, compared with an average degree of 15 in the MIPIN [51,52]. Another structural property, betweenness, denotes how frequently a vertex or edge is used when traversing the network along shortest paths. The combination of different structural properties might be a promising way to identify key markers. For example, a vertex with small degree and high betweenness denotes a protein that is frequently used to transmit information in the network, suggesting its significance as a bottleneck of the network or a point of cross talk between biological processes. A more rigorous analysis of such an evaluation scheme will be conducted in our future research.
Our results highlight the influence of the early inflammatory response initiated after tissue hypoxia. Following hypoxia, up-regulation of RAS, focal adhesion kinase 1 (FADK1), paxillin (PXN), and p53 simultaneously induces at least four major cellular activities: cell proliferation, migration, apoptosis and necrosis. Proliferation of endothelial cells increases the production of nitric oxide (NO), which plays an important role in the later phase of LV remodeling and wound healing. Fibroblasts and myofibroblasts deposit a network of collagen at the infarct site, preparing for the formation of granulation tissue. Collectively, cell proliferation, migration, apoptosis and necrosis contribute to angiogenesis in parallel with scar formation.
In summary, we report here the establishment of the first MI-specific PPI network, which can be used as a foundation to interrogate the literature for candidate biomarkers of adverse remodeling post-MI.
Selection of seed proteins for MIPIN
In order to acquire a list of proteins related to MI, we initiated a keyword search for "myocardial infarction" in three different databases, OMIM, PubMed Gene and PubMed Protein, resulting in an initial pool of 658 genes from the OMIM and PubMed Gene databases and 2319 protein sequences from the PubMed Protein database. Because the genes were retrieved from both animal and clinical studies, all the genes and proteins retrieved from the OMIM, PubMed Gene, and PubMed Protein databases were matched to human protein names in UniProt, yielding 709 proteins (Table S9). By evaluating the descriptions of the genes obtained from OMIM, genes not related to the MI response were identified (e.g., those associated with stroke, arrhythmogenic disorders, cardiomyopathy, and arterial calcification) and removed from our list. We also removed proteins directly related to myocytes, since these proteins reflect the pre-MI or acute MI state rather than the post-MI response. This left 22 MI-response-related genes.
Searching the PubMed Gene and Protein databases provides a candidate list of genes and protein sequences potentially associated with MI; however, this search strategy does not provide any description of the retrieved genes and proteins. We verified an additional 16 seed proteins associated with MI using the genome-wide disease association databases GeneRIF and PubMed. This led to a total of 38 seed proteins, including the major ones previously identified in our experiments: collagen, MMP9, TIMP1, TNF-α, TGF-β, and monocyte chemotactic protein-1 (MCP1). All seed proteins were associated with MI in at least 2 independent manuscripts, as shown in Table 4.
Consistent with a strong role in the wound healing response, a significant portion of the seed proteins were localized to the ECM. To verify whether our selection of seed proteins was biased, we checked the cellular localization of all MI-related proteins obtained from the OMIM, PubMed Gene, and PubMed Protein databases and found a similar result: most of the proteins were localized in the extracellular region and plasma membrane (Table S10). These results indicate that ECM proteins are more likely to play a key role in the MI response and suggest that our seed protein selection was not biased.
Construction of MIPIN
From the seed protein list, we searched for all proteins interacting with the seed proteins and for interactions among the extended proteins through ConsensusPathDB-human, which integrates protein-protein interactions in Homo sapiens from different databases such as IntAct, DIP, MINT, HPRD, BioGRID and MIPS [22]. Subsequently, we constructed the MIPIN using 'igraph' in R [53]. Each vertex of the network represents a protein and each edge between two vertices represents a protein-protein interaction. The resulting MIPIN consists of 613 vertices and 4443 edges.
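As a minimal sketch of this construction step (the input file "mipin_edges.csv" and its column names are hypothetical placeholders for the ConsensusPathDB-derived interaction list, not files from the original study), the network can be assembled with 'igraph' in R roughly as follows:

```r
# Minimal sketch of building a protein-protein interaction network with igraph.
# "mipin_edges.csv" and its columns ("protein_a", "protein_b") are hypothetical
# placeholders for the ConsensusPathDB-derived interaction list.
library(igraph)

edges <- read.csv("mipin_edges.csv", stringsAsFactors = FALSE)

# Each row is one interaction; vertices are created from the protein names.
mipin <- graph_from_data_frame(edges[, c("protein_a", "protein_b")],
                               directed = FALSE)

# Collapse duplicate edges and self-loops so each interaction is counted once.
mipin <- simplify(mipin)

vcount(mipin)  # number of proteins (613 in the published network)
ecount(mipin)  # number of interactions (4443 in the published network)
```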
The degree distribution of the MIPIN was examined using the procedure proposed by Clauset et al. and implemented in R [54]. Parameters were estimated based on the theoretical cumulative distribution P(x) ~ (x/x_min)^(−α+1), where x, in this case, was the degree of the MIPIN vertices. The degree distribution was fitted with x_min = 31 and α = 3.52 (Kolmogorov-Smirnov test, p-value = 0.97). Additionally, the Kolmogorov-Smirnov test was performed to examine how well the estimated power-law distribution fitted the MIPIN vertex degrees. If the Kolmogorov-Smirnov p-value < 0.05, we reject the hypothesis that the original data are drawn from the fitted power-law distribution. Otherwise, the higher the Kolmogorov-Smirnov p-value is above 0.05, the better the estimated power-law distribution fits the data.
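A hedged sketch of this degree-distribution test, using igraph's built-in implementation of the Clauset et al. estimator rather than the authors' original script, might look like the following (the graph object mipin is carried over from the construction sketch above):

```r
# Sketch of testing whether the MIPIN degree distribution follows a power law,
# using the Clauset et al. estimator as implemented in igraph.
library(igraph)

deg <- degree(mipin)

fit <- fit_power_law(deg, implementation = "plfit")

fit$xmin    # estimated lower cutoff (x_min = 31 in the paper)
fit$alpha   # estimated exponent (alpha = 3.52 in the paper)
fit$KS.p    # Kolmogorov-Smirnov p-value; values well above 0.05 indicate a good fit
```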
Statistical evaluation
Several different measures were used to characterize the properties of the network, including betweenness centrality, closeness centrality, clustering coefficient, degree centrality, eccentricity, and graph density. Betweenness centrality characterizes the direct and indirect influence of a vertex on distant network sites [55]. Closeness centrality measures how many steps are required to access every other vertex from a given vertex [55]; the vertex with the largest closeness centrality requires the fewest steps to sequentially spread information to the other vertices reachable from it in the network. The clustering coefficient describes the connectivity of the neighborhood of a vertex [56]; a higher clustering coefficient means that more of the neighbors are connected to each other. The eccentricity of a vertex is the shortest-path distance to the farthest vertex in the graph [57]. We compared the values of the six aforementioned measures for the MIPIN with the average measurements of randomly generated networks. The empirical p-value for each measure was then calculated by counting the number of random networks whose average measure was equal to, greater than, or smaller than the corresponding value from the MIPIN.
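The following sketch illustrates one way to compute these measures and the empirical p-values in R with igraph; the choice of 1000 degree-preserving random networks and the one-sided comparison are illustrative assumptions, not values taken from the original study:

```r
# Sketch of computing the six network measures for the MIPIN and comparing them
# against degree-preserving random networks ("mipin" from the construction sketch).
library(igraph)

network_measures <- function(g) {
  c(betweenness  = mean(betweenness(g)),
    closeness    = mean(closeness(g)),
    clustering   = transitivity(g, type = "average"),
    degree       = mean(degree(g)),
    eccentricity = mean(eccentricity(g)),
    density      = edge_density(g))
}

observed <- network_measures(mipin)

# Random networks with the same degree sequence as the MIPIN; 1000 is illustrative.
random_measures <- t(replicate(1000, {
  g_rand <- sample_degseq(degree(mipin), method = "vl")
  network_measures(g_rand)
}))

# Empirical p-value: fraction of random networks whose average measure is at
# least as large as the observed MIPIN value (one-sided, for illustration).
p_empirical <- colMeans(sweep(random_measures, 2, observed, ">="))
```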
Functional annotation analysis
We examined the functional organization of the MIPIN with enriched GO terms using the DAVID Functional Annotation Tool [27]. In DAVID, we set the count threshold to 2 and the EASE threshold (a modified Fisher exact p-value) to 0.05. We further adapted the method proposed by Louie et al. to measure the specificity of the enriched GO terms for the MIPIN [33]. We computed four measurements to describe the functional specificity of the enriched GO term list.
i. Number of ancestors. This measurement was calculated by counting the number of ancestor terms for a given GO term up to and including the root term (GO:0008150, biological_process).
ii. Offspring score. Similarly, the number of offspring nodes of a node t was counted as offsp(t). An adjusted offspring score was then computed by normalizing offsp(t) against A, the number of offspring of the root node (23877 for the current version of GO), such that terms with fewer offspring receive higher scores.
iii. GO proportion. In order to take into account both the number of ancestor and offspring nodes for a particular GO term t, we defined the GO proportion by combining the two counts. The GO proportion ranges from 0 to 1, where 0 indicates a non-specific function and 1 indicates high specificity.
iv. Information content. The information content (IC) was calculated as IC(t) = −log(Pr(t)), where Pr(t) was the ratio of the number of proteins assigned to the term t to the 14673 human proteins annotated with GO Biological Process terms. Pr(t) can be understood as the probability of observing the term t in the GO dataset. For example, the root term GO:0008150 has a Pr of 1 and an IC value of 0. An IC value of NA indicates that the GO term is either obsolete or not available in the current gene2go database. A GO term with a higher IC value represents a more specific function.
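For reference, the four measures can be summarized as below. The exact normalizations used for the offspring score and the GO proportion could not be recovered from the text, so the forms marked as assumed are plausible reconstructions consistent with the stated properties (direction of specificity and the 0-1 range), not formulas confirmed by the original article:

```latex
% Hedged summary of the four specificity measures for an enriched GO term t.
% anc(t): number of ancestor terms; offsp(t): number of offspring terms;
% A = 23877: offspring count of the root term; Pr(t): probability of observing t.
% The offspring-score and GO-proportion forms are assumptions, not confirmed formulas.
\begin{align*}
\text{number of ancestors:} \quad & \mathrm{anc}(t) \\
\text{offspring score (assumed form):} \quad & 1 - \frac{\mathrm{offsp}(t)}{A} \\
\text{GO proportion (assumed form):} \quad & \frac{\mathrm{anc}(t)}{\mathrm{anc}(t) + \mathrm{offsp}(t)} \in [0,1] \\
\text{information content:} \quad & \mathrm{IC}(t) = -\log \Pr(t), \qquad \mathrm{IC}(\text{root}) = 0
\end{align*}
```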
Integration of experimental results and predictions of protein changes
We searched PubMed for the query "(myocardial infarction) AND (plasma OR serum)", restricted to Homo sapiens, for articles published from Jan 1, 2005 until May 31, 2013. This search returned 4326 abstracts. To reduce laborious manual effort, we developed a data mining program written in R using available XML parsing and text mining packages [58,59].
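A present-day way to reproduce this kind of abstract retrieval from R is sketched below using the rentrez package; the package choice and the exact query construction (MeSH species filter, publication-date field) are illustrative assumptions, since the original study used its own R/Java scripts:

```r
# Sketch of retrieving candidate abstracts from PubMed with rentrez.
# The rentrez package and the exact query string below are illustrative; the
# original study used custom R/Java scripts rather than this package.
library(rentrez)

query <- paste("(myocardial infarction) AND (plasma OR serum)",
               "AND humans[MeSH Terms]",
               "AND 2005/01/01:2013/05/31[PDAT]")

hits <- entrez_search(db = "pubmed", term = query, retmax = 5000)
hits$count  # number of matching abstracts (4326 reported in the paper)

# Abstracts can then be downloaded in XML for downstream parsing and text mining.
recs <- entrez_fetch(db = "pubmed", id = hits$ids[1:10], rettype = "xml")
```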
The program required two input files: a list of protein aliases and a dictionary of words. We took advantage of a feature offered by UniProt in which users can submit a list of proteins and receive their full names and aliases in XML format. In order to obtain the full names and aliases of the MIPIN proteins, we wrote a Java program to parse the downloaded UniProt XML files and extract the relevant information. The Java program can also be used to retrieve other protein features such as protein structures, domains, and citations in PubMed. The dictionary of words contained commonly used words indicating protein changes, such as "elevate" or "up-regulate" for a positive change ("UP"), and "down-regulate" or "inhibit" for a negative change ("DOWN") (Table S7). Abstracts in "txt" format were initially broken into separate sentences. If a word of change and the name of any protein were found in the same sentence, we recorded the protein name together with the associated word and the PubMed ID of the abstract. The final output was manually checked to ensure complete and accurate reporting of the available protein concentrations. This program significantly reduced the time needed to read the 4326 abstracts by extracting the key sentences.
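A minimal sketch of the sentence-level co-occurrence step is given below; the alias list and change-word dictionary shown are tiny illustrative examples, not the actual input files used in the study:

```r
# Minimal sketch of flagging sentences that mention both a protein alias and a
# directional change word. The aliases and dictionaries are illustrative only.
aliases <- c(ADIPOQ = "adiponectin", MMP9 = "matrix metalloproteinase-9")
up_words   <- c("elevate", "increase", "up-regulate")
down_words <- c("decrease", "reduce", "down-regulate", "inhibit")

mine_abstract <- function(abstract, pmid) {
  # Naive sentence splitting on sentence-final punctuation.
  sentences <- unlist(strsplit(abstract, "(?<=[.!?])\\s+", perl = TRUE))
  hits <- lapply(sentences, function(s) {
    s_low   <- tolower(s)
    protein <- names(aliases)[sapply(aliases, grepl, x = s_low, fixed = TRUE)]
    change  <- c("UP", "DOWN")[c(any(sapply(up_words, grepl, x = s_low)),
                                 any(sapply(down_words, grepl, x = s_low)))]
    if (length(protein) > 0 && length(change) > 0)
      data.frame(pmid = pmid, protein = protein[1], change = change[1],
                 sentence = s, stringsAsFactors = FALSE)
  })
  do.call(rbind, hits)
}

mine_abstract("Plasma adiponectin was decreased in patients after infarction.", 12345678)
```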
From these abstracts, we retrieved a small number of proteins with quantified concentrations in plasma or serum post-MI, which were assigned as labeled proteins in the MIPIN. A large number of MIPIN proteins did not have quantified concentrations and were assigned as unlabeled proteins. We applied semi-supervised learning to predict the unlabeled proteins from the labeled protein set. The key component of this method is the similarity matrix, which represents pair-wise similarity or dissimilarity between pairs of vertices. In this case, we combined a graph structure similarity matrix evaluated using Jaccard coefficients with a functional similarity matrix evaluated using Wang's method [60].
The Jaccard similarity matrix J of a graph G is a |V(G)| × |V(G)| square matrix, where |V(G)| denotes the number of vertices in the graph G. The Jaccard similarity coefficient of two vertices/proteins i and j was defined as J_ij = |N_i ∩ N_j| / |N_i ∪ N_j|, where N_i and N_j represent the sets of direct neighbors of vertices i and j, respectively [61]. It follows that the diagonal of matrix J is 1. Besides the structural information embedded in the Jaccard similarity matrix, we also integrated biological functions obtained from GO terms by calculating a GO biological process similarity matrix GS, also of size |V(G)| × |V(G)|.
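As a sketch, the Jaccard matrix can be computed directly from the graph in R; igraph's similarity() implements this neighbor-set coefficient, and the explicit pairwise function below is included only to make the definition concrete (the graph object mipin is from the construction sketch above):

```r
# Sketch of computing the Jaccard similarity matrix J from the MIPIN graph.
library(igraph)

J <- similarity(mipin, method = "jaccard")
rownames(J) <- colnames(J) <- V(mipin)$name

# Equivalent explicit computation for two proteins i and j:
jaccard_pair <- function(g, i, j) {
  ni <- neighbors(g, i); nj <- neighbors(g, j)
  length(intersect(ni, nj)) / length(union(ni, nj))
}
```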
The pairwise functional similarity between protein i, annotated by the GO biological process term set GOBP_i = (gobp_i1, gobp_i2, …, gobp_im), and protein j, annotated by the GO biological process term set GOBP_j = (gobp_j1, gobp_j2, …, gobp_jn), was defined by combining, over both term sets, the best-match similarities Sim(gobp_i, GOBP_j), where Sim(gobp_i, GOBP_j) was defined as the maximum semantic similarity between term gobp_i and any of the terms in set GOBP_j, and m and n denote the numbers of terms in the i-th and j-th GOBP term sets, respectively [60]. The semantic similarity between a pair of GO terms can be determined based on their locations in the directed acyclic GO graph and their semantic relations with their ancestor terms, which can be 'is-a' or 'part-of'. The GS matrix is symmetric. We chose the Wang method because the measurement algorithm offers two advantages. First, it depends only on the relationships of the GO terms within a specific ontology, which is the biological process ontology in this case. Second, it avoids the effect of shallow annotation on the semantic relationships between child and parent terms (i.e., with the same parent, a pair of terms near the root should have larger semantic differences than a pair of terms far away from the root). Thus, the algorithm provides a consistent semantic similarity measurement between a pair of GO terms. We combined the Jaccard similarity matrix J and the GO biological process similarity matrix GS to produce the final similarity matrix W, whose ij element combines the corresponding elements of J and GS. Let L denote the labeled proteins and U denote the unlabeled proteins. The similarity matrix W can be partitioned into the blocks W_LL, W_LU, W_UL and W_UU according to the labeled and unlabeled sets. Let S = D^(−1)W, where D is the diagonal row-sum matrix of W, and let Y_L be a binary vector describing the concentrations of the labeled proteins post-MI, with 1 for a positive change ("UP") and 0 for a negative change ("DOWN"). The predicted concentration vector Ŷ can then be computed using the fits algorithm. The predicted concentrations were further updated with the sequential predictions algorithm to drive the estimates towards global point estimates. The algorithm ranked the unlabeled data into k regions, such that the unlabeled set connecting to the largest number of labeled proteins was employed first with the fits algorithm, and penalized unlabeled proteins farther away from labeled proteins with an inverse regularization penalty λ. It was reasonable to initialize the fits algorithm with the protein having the highest labeled connectivity, and to repeat with each subsequently ranked protein. We assigned the number of regions k to be the number of unlabeled proteins. Since we wanted to maintain a moderate regularization, the inverse regularization penalty λ was set to 2. The prediction process was implemented with the package 'spa' in R [62].
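A hedged sketch of the label-propagation idea underlying this prediction step is shown below as a generic harmonic-function solution; it is not the 'spa' package's fits or sequential-predictions implementation, and the equal weighting of J and GS is an assumption, since the exact combination used to form W is not specified above:

```r
# Hedged sketch of the core label-propagation idea behind the prediction step.
# Generic harmonic-function solution on a combined similarity matrix W; not the
# 'spa' package's implementation, and the equal weighting of J and GS is assumed.
predict_unlabeled <- function(J, GS, labeled_idx, y_labeled) {
  W <- (J + GS) / 2                     # assumed combination of the two matrices
  S <- sweep(W, 1, rowSums(W), "/")     # S = D^{-1} W (row-normalized)

  U <- setdiff(seq_len(nrow(W)), labeled_idx)
  S_UU <- S[U, U, drop = FALSE]
  S_UL <- S[U, labeled_idx, drop = FALSE]

  # Harmonic solution: y_U = (I - S_UU)^{-1} S_UL y_L
  y_unlabeled <- solve(diag(length(U)) - S_UU, S_UL %*% y_labeled)
  setNames(as.vector(y_unlabeled), rownames(W)[U])
}
```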
Pathways classification
A total of 48 enriched Biocarta pathways were retrieved from DAVID using the 613 proteins in the MIPIN, with 'Count' set to 2 and EASE set to 0.05. The relationships between proteins and associated pathways can be simplified to a binary matrix of M rows and n columns, where M is the number of enriched pathways and n is the total number of proteins associated with the enriched pathways (Table 5). If a protein was involved in a pathway, the corresponding entry was set to 1, and otherwise to 0. Based on this pathway matrix, we used Kappa statistics to evaluate a pairwise pathway similarity matrix, based on the belief that pathways sharing common proteins might be related to one another [63].
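As a small sketch (not the authors' code), the pairwise Kappa similarity matrix can be computed from this binary pathway-protein matrix as follows, using the counts a, b, c and d defined formally in the derivation that follows:

```r
# Sketch of computing the pairwise Kappa similarity between pathways from the
# binary pathway-by-protein matrix P (M rows = pathways, n columns = proteins).
pathway_kappa <- function(P) {
  M <- nrow(P)
  K <- diag(1, M)
  for (I in seq_len(M - 1)) {
    for (J in (I + 1):M) {
      a <- sum(P[I, ] == 1 & P[J, ] == 1)   # annotated by both pathways
      b <- sum(P[I, ] == 0 & P[J, ] == 1)   # annotated by J but not I
      c <- sum(P[I, ] == 1 & P[J, ] == 0)   # annotated by I but not J
      d <- sum(P[I, ] == 0 & P[J, ] == 0)   # annotated by neither pathway
      n <- a + b + c + d
      pr_agree  <- (a + d) / n
      pr_random <- ((a + c) * (a + b) + (b + d) * (c + d)) / n^2
      K[I, J] <- K[J, I] <- (pr_agree - pr_random) / (1 - pr_random)
    }
  }
  rownames(K) <- colnames(K) <- rownames(P)
  K
}
```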
Considering two pathways I and J (I ≠ J; I, J = 1, 2, …, M), we could determine the number of proteins annotated by both pathways, the number of proteins annotated by pathway J but not I, the number of proteins annotated by pathway I but not J, and the number of proteins annotated by neither pathway among the union of proteins annotated by all pathways, denoted as a, b, c and d, respectively. The Kappa score κ was defined as κ = (Pr(agree) − Pr(random)) / (1 − Pr(random)), where Pr(agree) was the observed percentage agreement and Pr(random) was the overall probability of random agreement. A high Kappa score indicates that two pathways share many common proteins, and vice versa. The observed percentage agreement was calculated as Pr(agree) = (a + d) / (a + b + c + d). To calculate the overall probability of random agreement Pr(random), we noted that pathway I annotates (a + c) / (a + b + c + d) and pathway J annotates (a + b) / (a + b + c + d) of the total associated proteins.
Thus, the probability that both pathways randomly annotate the same proteins was (a + c)(a + b) / (a + b + c + d)^2, and the probability that neither pathway randomly annotates the same proteins was (b + d)(c + d) / (a + b + c + d)^2. The overall probability of random agreement was therefore Pr(random) = [(a + c)(a + b) + (b + d)(c + d)] / (a + b + c + d)^2, and the Kappa score κ can be rewritten accordingly in terms of a, b, c and d.

Supporting Information

Figure S1. Histograms of the log of closeness centrality of vertices in 25 random networks, which resembled delta functions. Random networks tend to have outliers. (EPS)

Table 5. The relationships between proteins and associated pathways were represented as a binary matrix of size M × n, corresponding to M enriched pathways and n associated proteins. The absence and presence of a protein in a pathway were denoted as 0 and 1, respectively.

Figure S2. Plot of closeness centrality against degree. Degree centrality of a vertex in random networks does not have any impact on its closeness centrality. (EPS)

Figure S3. MI knowledge map. The MI-specific protein-protein interaction network is depicted with important proteins, enriched biological processes, and cross-talk between different processes. Important proteins are represented as round rectangles with lighter shades. Color codes correspond to different pathways as in Figure 8. For example, VEGFA was involved in both hypoxia and angiogenesis pathways; therefore, it is located in the hypoxia box and colored green for angiogenesis. The inflammatory response component contains two subgroups, as clustered in Figure 8. The kinase pathways component contains four subgroups. Teal nodes represent proteins which need to be further studied in the future since they are not found enriched in current Biocarta pathways. Details on protein memberships can be found in Table S8. (EPS)

Table S1. GO cellular component terms of MIPIN seed proteins. We did not combine child terms into their parent terms since they showed more specific functions. (XLSX) | 2016-05-12T22:15:10.714Z | 2014-03-01T00:00:00.000 | {
"year": 2014,
"sha1": "78f42250f31592c2e60f8adedbc2059996468976",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1003472&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78f42250f31592c2e60f8adedbc2059996468976",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
]
} |
212665097 | pes2o/s2orc | v3-fos-license | Gut Microbial-Derived Metabolomics of Asthma
In this review, we discuss gut microbial-derived metabolites involved with the origins and pathophysiology of asthma, a chronic respiratory disease that is influenced by the microbiome. Although both gut and airway microbiomes may be important in asthma development, we focus here on the gut microbiome and metabolomic pathways involved in immune system ontogeny. Metabolite classes with existing evidence that microbial-derived products influence asthma risk include short chain fatty acids, polyunsaturated fatty acids and bile acids. While tryptophan metabolites and sphingolipids have known associations with asthma, additional research is needed to clarify the extent to which the microbiome contributes to the effects of these metabolites on asthma. These metabolite classes can influence immune function in one of two ways: (i) promoting growth or maturity of certain immune cell populations or (ii) influencing antigenic load by enhancing the number or species of specific bacteria. A more comprehensive understanding of how gut microbes and metabolites interact to modify asthma risk and morbidity will pave the way for targeted diagnostics and treatments.
Introduction: Microbiome-Metabolome Associations in Asthma
Asthma and other allergic diseases have well-known associations with early-life environmental exposures that modify the gut microbiota, such as living on a farm, mode of delivery, breastfeeding status and having a dog in the home [1]. As mounting animal and human data point to a prominent role of the gut microbiome in asthma development [2], relevant metabolomic mechanisms behind this association are beginning to be elucidated [3]. Integration of metabolomic data with gut microbiome data has been particularly fruitful in understanding the gut-lung axis as it pertains to asthma. Here, we review a set of metabolites and metabolite groups that appear to link the gut microbiota with asthma development, pathophysiology and immune system ontogeny. Some of these classes, such as short chain fatty acids, are relatively well studied and understood, while others, including the sphingolipids, comprise more numerous metabolites with less straightforward relationships to asthma and allergy. While we focus here on the metabolite classes most prominently discussed in today's literature, future unbiased studies of asthma metabolomics are likely to identify additional important pathways.
Short Chain Fatty Acids
Short chain fatty acids (SCFA) are produced by a wide variety of intestinal microbes through fermentation of dietary fiber. The most abundant SCFA are acetate, propionate and butyrate. SCFA exert effects on host physiology by ligation of G-protein coupled receptors including GPR41, GPR43 and GPR109A, and by epigenetic modification through inhibition of histone deacetylase [4]. Early evidence was stronger for histone deacetylase inhibitory activity of propionate and butyrate, but a recent study showed that acetate can also inhibit histone deacetylase [5]. SCFA have important immune-modulating properties including induction of T regulatory cell differentiation in mice [6][7][8][9], reduction of eosinophil trafficking and survival [10] and promotion of mucosal antibody production [11].
Accordingly, SCFA are protective against allergic diseases in mouse models including models of pulmonary allergic inflammation and food allergy [12][13][14]. Multiple observational studies in humans have found that reductions in fecal SCFA during infancy are associated with asthma and allergy later in life. In two cohorts of infants, one Canadian and the other Ecuadorian, fecal acetate at 3 months of age was lower in subjects who later developed atopy and wheeze [15,16]. In another study, European infants in the highest percentile groups of fecal butyrate and propionate abundance had reduced risk of subsequent atopy and asthma [17]. Some murine experimental data and human observational data even suggest that fecal acetate during pregnancy can influence the risk of asthma and atopy in offspring [18][19][20]. Microbial metabolism of SCFA may be relevant locally in the airway as well: one study of the bronchial microbiome found an increased predicted capacity for SCFA metabolism in association with asthma [21]. Together, these findings highlight the role of SCFA in the development of asthma and atopy, and suggest that SCFA-directed treatment could be an effective preventive strategy.
Polyunsaturated Fatty Acids
The major polyunsaturated fatty acid (PUFA) families are omega-3 fatty acids, including α-linolenic acid and its metabolites: eicosapentanoic acid (EPA) and docosahexaenoic acid (DHA); and omega-6 fatty acids, including linoleic acid and its metabolite arachidonic acid. Because omega-6 fatty acids give rise to inflammatory eicosanoids [22] and omega-3 fatty acids displace omega-6 fatty acids in cell membranes and give rise to anti-inflammatory pro-resolving mediators [23,24], a high omega-6 to omega-3 fatty acid ratio is thought to be pro-allergic. Though high-quality evidence is limited with regard to postnatal omega-3 fatty acid supplementation to prevent asthma or allergies [25,26], a promising 30.7% reduction in wheeze at age 3 years was recently reported in offspring of mothers randomized in a clinical trial to receive omega-3 fatty acids during pregnancy [27].
While dietary intake is the dominant source of essential omega-3 and omega-6 fatty acids and PUFA are not synthesized by members of the human microbiota, accumulating evidence points to the importance of PUFA interactions with microbes in asthma pathogenesis. In multiple human studies, fecal PUFA in early life have been inversely associated with asthma and allergy. At age one month, fecal omega-3 docosapentaenoic acid was reduced in infants at risk of atopy or asthma [28], and in a cross-sectional analysis of 3-year-old children, several highly correlated fecal omega-3 and omega-6 fatty acids were inversely associated with asthma or recurrent wheeze [29]. In a trial of probiotic supplementation with Lactobacillus rhamnosus GG in infants at high risk of asthma, fecal levels of omega-3 fatty acids including docosapentaenoic acid and docosahexaenoic acid were higher in healthy controls and in those who had been supplemented with L. rhamnosus GG than in those at risk of asthma who had not received supplementation [30]. In this study, probiotic supplementation appeared to have tolerogenic effects: fecal sterile water samples from 6-month-old infants who had received L. rhamnosus GG induced increased T regulatory cell differentiation and IL-10 production compared to samples from infants who had received a placebo [30].
Several human studies have found that dietary omega-3 fatty acid intake alters intestinal microbiota composition [31][32][33][34][35], and taxa increased in association with omega-3 fatty acid intake have been observed to include producers of SCFA [31][32][33][36]. In addition to potentially increasing SCFA production, PUFA can be metabolized by human gut microbes to produce metabolites including conjugated linoleic acids (CLA) [37,38]. Interestingly, SCFA producers that increase with omega-3 fatty acid intake, such as Bifidobacterium, Lactobacillus and Roseburia spp., are among the most active at metabolizing PUFA to CLA, suggesting a prebiotic effect of omega-3 fatty acids that could include selection for SCFA producers [37][39][40][41][42]. Indeed, CLA consumption or microbial production has also been linked to increased intestinal SCFA [43,44], demonstrating a biochemical link between short and long chain fatty acids.
A few small (n = 28-40 subjects) randomized controlled human trials of CLA supplementation for asthma or allergy provide promising evidence that CLA itself may improve control of existing disease. CLA supplementation in overweight mild asthmatics resulted in weight loss and improved airway hyperresponsiveness [45]. In adults with birch pollen allergy, CLA supplementation reduced sneezing, production of TNF-α, interferon (IFN)-γ and interleukin-5, and release of eosinophil-derived neurotoxin [46]. In children age 6 to 18 years with mild asthma and allergic sensitization, CLA supplementation did not improve pulmonary function or symptoms, but was associated with lower plasma eosinophil cationic protein and peripheral blood mononuclear cell production of IFN-γ and interleukin-4 [47]. Potential mechanisms by which CLA may reduce airway inflammation include peroxisome proliferator-activated receptor-γ (PPARγ) activation [48], GPR40 activation [49] and/or reduction of inflammatory eicosanoid production [50,51].
12,13-dihydroxy-9-octadecenoic acid (12,13-diHOME) is another potentially important metabolite of omega-6 linoleic acid that has been associated with asthma. Peritoneal injection of 12,13-diHOME in mouse models of allergic airways disease led to increased circulating and pulmonary 12,13-diHOME with accompanying airway inflammation, increased IgE and decreased lung T regulatory cells [38]. Effects of 12,13-diHOME appear to be at least partially due to ligation of PPARγ in dendritic cells [38]. In human studies, 12,13-diHOME was elevated in the airways of birch-allergic adult asthmatics after birch challenge [52], and in feces of infants at high risk of subsequent asthma and atopy [28]. This metabolite was linked to microbiota composition via shotgun metagenomics analysis of infant stool samples [38]. Bacterial, but not human, genes encoding epoxide hydrolase enzymes, which catalyze production of 12,13-diHOME, were present at higher abundances in samples from infants at high risk of asthma and atopy. Specific bacterial species were identified that harbored the relevant enzymes and therefore have the capacity to produce 12,13-diHOME. Feeding E. coli engineered to overexpress these epoxide hydrolases led to reduced lung T regulatory cells in a murine allergic airway disease model [38].
To summarize, PUFA impact fecal microbiome composition and have been associated with increased production of several metabolites and metabolite classes that impact asthma disease risk, including SCFA, CLA and 12,13-diHOME (Figure 1). However, PUFA have many effects that do not rely on the microbiome, and other factors including dietary intake and genetic variation in PUFA pathway genes such as FADS1/2 have major impacts on PUFA bioavailability [53]. Future studies will need to determine the extent to which microbial metabolic pathways mediate associations between PUFA and asthma development and pathophysiology while also accounting for the roles of dietary and genetic factors.
Bile Acids
Primary bile acids cholic acid and chenodeoxycholic acid are synthesized in the liver, where they may be conjugated by taurine or glycine and are then secreted into the duodenum. Most bile acids are absorbed distally in the gut and returned to the liver via enterohepatic circulation. Gut microbial enzymatic activity results in production of secondary bile acids such as deoxycholic acid and lithocholic acid [54], and bile acids themselves have antimicrobial activity and can influence microbial composition in the gut [55,56].
In vitro and mouse studies show protective effects of bile acids on allergic airway inflammation via multiple mechanisms, including some bile acids that are produced via microbial modification. Ursodeoxycholic acid, a microbially modified bile acid, prevents eosinophilic inflammation in primary biliary cirrhosis [57] and reduces eosinophilic airway inflammation in OVA-sensitized mice via ligation of dendritic cell nuclear farnesoid X receptors [58]. Chenodeoxycholic acid, a primary bile acid, similarly reduces murine allergic airway disease via farnesoid X receptor agonist activity in the lung [59]. Conjugated bile acids, which have not undergone microbial modification, significantly decrease allergen-induced airway inflammatory responses, mucus metaplasia and airway hyperresponsiveness [60]. These effects have been attributed to inhibition of the inflammatory unfolded protein response [60]. Additionally, it was recently found that depletion of gut microbial bile acids leads to reduced gut RORγ+ regulatory T cells via a mechanism involving bile acid activity at the vitamin D receptor [61], though it is not clear if this has an effect on distant organs such as the lung.
Limited evidence from human studies has linked bile acids to asthma. In a birth cohort study, urinary sulfated bile acids glycolithocholate, glycocholenate, and glycohyocholate were elevated and tauroursodeoxycholate was decreased at age 3 months in children who had atopy and wheeze at age 1 year [15]. In a comparison of fecal metabolites in 35 children with asthma and 20 non-atopic controls, significant differences in abundances of taurochenodeoxycholate, taurocholate and glycocholate were found, and there were additional differences in fecal bile acid abundances between subjects with asthma and those with food allergy [62]. Finally, plasma bile acids (taurocholate and glycodeoxycholate) were higher in asthmatic adults than healthy controls and in particular in those with high fractional exhaled nitric oxide, a marker of Th2-high asthma [63]. Nitric oxide itself increases hepatic production and microbial metabolism of bile acids, suggesting that bile acids may serve as biomarkers of the Th2-high asthma endotype [63]. Future studies will be valuable in clarifying the most relevant mechanisms by which bile acids and their modification by microbes impact asthma and whether the bile acid pathway is pertinent to prevention of asthma, morbidity in the setting of existing asthma, or both.
Tryptophan
Tryptophan is an essential amino acid that can be metabolized via four major pathways: production of kynurenine derivatives (the major pathway), production of serotonin derivatives, utilization in protein synthesis, or metabolism by fecal microbes [64]. Tryptophan metabolism is complex, and the pathways by which tryptophan is utilized vary by body site and context [64]. Gut microbes are major participants in tryptophan metabolism; an estimated 90% of the serotonin in the body is produced by intestinal microbes [65].
There is evidence that tryptophan metabolites play a role in the pathophysiology of asthma (Figure 2). Indoleamine 2,3-dioxygenase-1 (IDO) metabolizes tryptophan to produce kynurenine derivatives in antigen-presenting cells and other cells resident in lymph nodes and inflammatory tissue. Expression of IDO is induced by IFN-γ and inhibited by Th2 cytokines including IL-4 and IL-13 [66,67]. IDO activity and kynurenine metabolites have anti-inflammatory and tolerogenic properties, including reducing T cell inflammation by reducing tryptophan availability [68] and promoting T regulatory cells [69,70]. Interestingly, IDO also inhibits the growth of intracellular pathogens; thus, IDO can be induced by bacterial motifs via TLR ligation and IFN-γ induction, and can in turn inhibit microbial growth [71,72].
Tryptophan metabolites also impact immune homeostasis via interactions with the aryl hydrocarbon receptor, a ligand-activated transcription factor that senses exposures including polyaromatic hydrocarbons and environmental toxins and impacts transcription of a broad range of genes [73]. The aryl hydrocarbon receptor is expressed in immune cells, gut epithelial cells, and others [73]. Tryptophan metabolites including indole-3-acetate, indole-3-aldehyde, indole, and tryptamine are known to activate the aryl hydrocarbon receptor [74] and many of these metabolites are produced by microbes resident in the human gut [64,75]. Aryl hydrocarbon receptor activation promotes tolerogenic dendritic cells [76], Th17 and T regulatory cell differentiation [77] and impacts innate lymphoid cell (ILC) homeostasis in the gut by stimulating ILC3 cells to produce IL-22 and suppressing ILC2 function including expression of IL-33 receptor, IL-5, IL-13 and amphiregulin [78]. It also boosts gut epithelial barrier function, including response to IL-10 [79,80], though there is some conflicting evidence on this point [81]. Like IDO activity, aryl hydrocarbon receptor activation influences, and is influenced by, microbial composition [64].
Multiple lines of evidence support a protective effect of tryptophan metabolism via the IDO pathway and aryl hydrocarbon receptor activation in asthma. In murine models of asthma, IDO expression induced by activation of TLR9 by bacterial DNA motifs reduces airway hyperreactivity [82], and activation of the aryl hydrocarbon receptor reduces airway inflammation and hyperresponsiveness [83,84]. Human studies also support a tolerogenic role for IDO and a reduction in IDO activity in people with asthma. In a study of 205 children, tryptophan and kynurenine levels were higher and IgE and IDO activity lower in those with asthma and allergic rhinitis [85]. In another pediatric population, IDO activity in peripheral blood and induced sputum was lower in children with allergic asthma than in healthy controls [86]; this result was more pronounced in children with high FeNO levels. In a study in which subjects with and without asthma were experimentally infected with rhinovirus, although IDO activity was not induced by infection, baseline pulmonary IDO activity was lower and circulating tryptophan and quinolinic acid, a metabolite of the kynurenine pathway, were elevated in asthmatic subjects [87].
Linking these findings to the gut microbiome, in a screen of products produced by probiotics, D-tryptophan was identified as a metabolite produced by Lactobacillus rhamnosus GG and Lactobacillus casei W56 that, when fed to mice, increased lung and gut T regulatory cells and reduced allergic airway disease [88]. In this study, allergic airway disease was associated with reduced gut microbial diversity, and diversity was increased by administration of D-tryptophan. Unlike its enantiomer L-tryptophan, D-tryptophan is a nonproteinogenic metabolite and is produced by numerous bacteria. In addition to having activity at host cell receptors including GPR109B, D-tryptophan can be metabolized by IDO to produce kynurenine metabolites, which may account for its tolerogenic effects [88]. Further supporting a role for gut microbial tryptophan metabolism in allergic disease, human metabolomics studies have linked reductions in fecal tryptophan metabolites to food allergy [62,89]. Additional research is needed to determine the impact of gut microbial tryptophan metabolism on both asthma development and morbidity.
Figure 2. Schematic of mechanisms whereby tryptophan metabolic pathways influence asthma pathophysiology. As indicated by the microbe icon, microbial exposure induces interferon-γ production, and gut bacteria participate in metabolism of tryptophan to indole, tryptamine and other metabolites with activity at the aryl hydrocarbon receptor. Abbreviations: AHR = aryl hydrocarbon receptor; IDO = indoleamine 2,3-dioxygenase-1.
Sphingolipids
Sphingolipids are bioactive eukaryotic lipids with roles in cell growth regulation, cell-cell interactions, and other cellular functions [90]. Some sphingolipids, especially sphingosine-1-phosphate, have well defined roles in immune function. Specifically, sphingosine-1-phosphate concentration gradients control T cell egress from lymph nodes into circulation [91]. Sphingosine-1-phosphate promotes allergic airway inflammation in mouse models and is elevated in the airways of asthmatic humans after allergen challenge [92][93][94].
While sphingosine-1-phosphate appears to promote asthma, other sphingolipid metabolites may be protective. The enzyme encoded by the ORMDL3 gene in the 17q21 region, which is the most replicated childhood asthma genetic locus, inhibits the first step in de novo sphingolipid synthesis [95,96]. A mouse model that overexpresses ORMDL3 exhibits increased airway remodeling and responsiveness and IgE levels [97]. Either administration of myriocin, which, like ORMDL3, inhibits the serine palmitoyltransferase enzyme that initiates sphingolipid synthesis, or heterozygous knockout of the serine palmitoyltransferase gene in mice results in decreased de novo sphingolipid synthesis and increased airway reactivity [98].
A few human studies corroborate preclinical evidence of a link between sphingolipids and asthma. In a sample of pediatric asthmatics, sphingolipids were reduced in those with high-risk variants in the 17q21 locus that promotes expression of ORMDL3 and in those with non-allergic asthma in comparison to those with allergic asthma or healthy controls [99]. De novo sphingolipid synthesis was also lower in children with asthma than controls [99]. In another human study, circulating sphingolipids were inversely associated with childhood asthma and recurrent wheeze, and those with high-risk ORMDL3 expression-promoting genetic variants exhibited limited benefit from vitamin D supplementation in comparison to those with low-risk variants [100]. These results suggest that vitamin D may influence sphingolipid metabolism with protective effects on childhood asthma. In another study of allergen challenge in humans allergic to house dust mite, lung function and airway hyperreactivity correlated with sphingosine-1-phosphate plasma concentrations, which increased after allergen challenge in subjects who developed both early and late phase symptoms [101]. Meanwhile, sphinganine, which is produced in early steps of de novo sphingolipid synthesis, was only increased after allergen challenge in subjects who did not develop an asthmatic response. Together, these findings support the concept that sphingosine-1-phosphate promotes asthma while sphingolipids early in the de novo synthesis pathway may be protective.
Sphingolipids may be dietary, host-derived, or produced by a limited number of microbial taxa, particularly those of the phylum Bacteroidetes. Of interest, bacteria that produce sphingolipids are among the dominant residents of the human gut [102]. Sphingolipids produced by Bacteroides fragilis, in particular, may have relevance to human health as they are ligands of the invariant natural killer cell receptor and modulate invariant natural killer cell recruitment and proliferation in the colon [103]. Accordingly, low fecal sphingolipids in early life have been linked to food allergies in two human studies [62,89]. However, this effect of Bacteroides-derived sphingolipids on invariant natural killer cell homeostasis appears to be limited to the colon, with no effect on asthma susceptibility [103]. Further research is needed to ascertain whether microbial-derived sphingolipids may affect asthma pathophysiology via other mechanisms.
Conclusions
Microbiome-metabolite interactions are pervasive in the human body and have relevance to many human diseases, including asthma. Among microbial-derived metabolites, the evidence is strongest that SCFA, PUFA and bile acids contribute to asthma pathophysiology. Sphingolipids and tryptophan metabolites are worthy of future research as potentially important pathways. Of these classes, some including SCFA and 12,13-diHOME appear to play a role early in life before the onset of disease, while others including CLA and tryptophan metabolites of the kynurenine pathway are best studied in the context of existing asthma. These findings provide a rationale for the development of microbe-and metabolite-targeted treatments for asthma and other diseases. Strategies include probiotics, prebiotics and other dietary modifications, supplementing or inhibiting microbial-derived metabolites, and fecal microbiome transplant [104]. As understanding of metabolite-microbe interactions continues to grow, we expect additional insights to guide precision medicine approaches to health and disease. Funding: Kathleen A. Lee-Sarwar is funded by NIH grant K08 HL148178. Rachel S. Kelly is funded by NIH grant K01 HL146980. Additional funding came from grants R01HL123915, R01HL141826 and ECHO grant UH3OD023268.
Conflicts of Interest:
AAL has received author royalties from UpToDate, Inc. and consultant fees from AstraZeneca, LP. STW has received royalties from UpToDate, Inc. JL-S is a consultant to Metabolon Inc. KL-S and RSK have nothing to disclose. | 2020-03-12T10:55:43.547Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "1dea9298d2ced3e9f1b94a85a5f55f7426a7d727",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-1989/10/3/97/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "512ee5b0d85e26b5adbaaaf27015b55eb964e2fa",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253523088 | pes2o/s2orc | v3-fos-license | What Can Algebraic Topology and Differential Geometry Teach Us About Intrinsic Dynamics and Global Behavior of Robots?
Traditionally, robots are regarded as universal motion generation machines. They are designed mainly by kinematics considerations while the desired dynamics is imposed by strong actuators and high-rate control loops. As an alternative, one can first consider the robot's intrinsic dynamics and optimize it in accordance with the desired tasks. Therefore, one needs to better understand intrinsic, uncontrolled dynamics of robotic systems. In this paper we focus on periodic orbits, as fundamental dynamic properties with many practical applications. Algebraic topology and differential geometry provide some fundamental statements about existence of periodic orbits. As an example, we present periodic orbits of the simplest multi-body system: the double-pendulum in gravity. This simple system already displays a rich variety of periodic orbits. We classify these into three classes: toroidal orbits, disk orbits and nonlinear normal modes. Some of these we found by geometrical insights and some by numerical simulation and sampling.
Introduction
The traditional approach to robot motion generation is to first plan trajectories on a kinematic level and then develop controllers for tracking the planned trajectories. The robot hardware, and therefore its dynamics, are considered to be given a priori. As the robot is understood as a universal motion generation machine, the ideal controller must track any trajectory to the best extent possible, leading to the ideal of a fully decoupling controller. This compensates the intrinsic dynamics and leads to simple error dynamics; for example, fully decoupled, second-order linear differential equations in computed torque and operational space control [1], [2]. Theoretically, one could control an elephant to jump like a flea this way. Despite the huge success of this approach in robotics, the limitations are also obvious and well known: actuator saturation, model errors, and unmodelled dynamics lead, in the extreme case, to severe performance limitations, robustness problems up to instability, and high energy consumption.
* This is a preprint of the following chapter: Alin Albu-Schäffer and Arne Sachtler, What Can Algebraic Topology and Differential Geometry Teach Us About Intrinsic Dynamics and Global Behavior of Robots?, published in Robotics Research, edited by Aude Billard, Tamim Asfour, and Oussama Khatib, 2023, Springer; reproduced with permission of Springer Nature Switzerland AG. The final authenticated version is available online at: https://doi.org/10.1007/978-3-031-25555-7_32.
An alternative perspective has been taken in robotics for a long time as well, leading to minimalistic and passivity-based control [3], [4], bio-inspired design, embodied intelligence, passive walkers [5], [6], and locomotion template anchoring [7]. When operating a robot on trajectories matching its natural dynamics, one only needs very small control action. Such trajectories correspond to geodesics with respect to the Jacobi metric. Also research in neuroscience suggests that humans operate their arms on geodesics [8], [9]. Especially, for robots that are supposed to perform quite specific, periodic, or quasi-periodic motions most of the time, such as in legged locomotion, linking the design of the robot and its intrinsic dynamics to its desired task promises benefits in terms of performance and energy efficiency. Paraphrasing Rodney Brooks [10], we would say Elephants don't play hopscotch either. Also industrial robots, when used in large production lines, perform mostly very repetitive motions. Energy efficiency becomes relevant here as well, in the context of CO 2 neutrality, while maximizing speed and/or force is always the central concern.
In this paper we contribute some insights into understanding intrinsic robot dynamics as methodologies to generate highly efficient motions. To this end, we go back to the roots of mechanics, taking a closer look at the principles of least action and interpreting them geometrically. Although these principles are dating back to Maupertuis, Euler, Lagrange, Jacobi, and Hamilton, almost three centuries of developments in differential geometry, algebraic topology, and of numerical methods, make it worth taking a fresh look at their meaning and implications. We address motions that can be performed at constant total energy, in particular periodic motions. Although our intuition of frictionless, constant energy behavior of robot dynamics is that of chaotic or at least very complicated motions, it turns out that there are even more periodic, regular motions than in general linear systems. As an example, we will present the zoo of intrinsic periodic motions of the double pendulum, i.e., of the most basic, 2 DoF robot dynamics.
A Very Short Primer on Robot Dynamics
The classical way of deriving the equations of motion of mechanical systems is based on the Lagrange formalism [11]-[14]. One defines a Lagrangian
$$L(q,\dot q) = T(q,\dot q) - U(q) \quad (1)$$
as the difference of kinetic energy $T(q,\dot q)$ and potential energy $U(q)$, with $q \in Q$ being configuration variables and $Q$ the configuration space. We integrate the Lagrangian over candidate trajectories $q(t)$ using the Hamiltonian action integral
$$S_H(q) = \int_{t_1}^{t_2} L\big(q(t), \dot q(t)\big)\, \mathrm{d}t. \quad (2)$$
The action integral is a functional: it takes an input function of a single variable and outputs a real number. For the Hamiltonian action integral, this input is a function $q : \mathbb{R} \to Q$ of time and the output is the total action of the mechanical system on $q(t)$ from $t_1$ to $t_2$. Then we take Hamilton's Principle of Least Action to select a true system trajectory $\bar q(t)$ out of the candidates: The system takes a trajectory $\bar q(t)$ between $q_1 = q(t_1)$ and $q_2 = q(t_2)$ that extremizes $S_H(q)$ over all possible smooth paths satisfying the given boundary conditions.
Using variational calculus, the extremizer for (2) locally satisfies the Euler-Lagrange equations
$$\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0. \quad (3)$$
The resulting equations of motion can, in absence of external generalized forces, be written in the well-known standard form
$$M(q)\ddot q + C(q,\dot q)\dot q + g(q) = 0 \quad (4)$$
with mass matrix $M(q)$, potential forces $g(q)$ and Coriolis and centrifugal forces $C(q,\dot q)\dot q$. From this step on, one classically only uses (4) for analyzing the dynamics of the multi-body system. The power and large variety of applications make us not spend too many thoughts on the meaning of the initial action integral (2), which is merely considered a creative intermediate step needed to arrive at the Euler-Lagrange equations. We would like to go one step back and introduce some classical results directly derived from an alternative version of the principle of least action: we look at the Maupertuis-Euler-Lagrange-Jacobi formulation. We will take advantage of this in gaining new insights into the intrinsic dynamics of conservative mechanical systems, especially regarding the existence and numerical computation of periodic trajectories of various types. Although this body of work, leading to some remarkable insights, is still today a topic of active research in mathematical physics and theoretical mechanics, it seems to be largely unknown to the robotics community. We believe that the theoretical results of the last decades as well as the powerful numerical tools and the computation power available today might lead to many applications in robotics.
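As a minimal illustration of this machinery, the following sketch derives the Euler-Lagrange equation (3) symbolically for a single pendulum, i.e., a 1-DoF stand-in for the multi-body systems discussed later; the use of SymPy and the symbol names are our own illustrative choices, not part of the original chapter.

```python
import sympy as sp

# 1-DoF pendulum: configuration q = theta, kinetic energy T, potential U.
t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)
theta_dot = sp.diff(theta, t)

T = sp.Rational(1, 2) * m * l**2 * theta_dot**2   # kinetic energy
U = -m * g * l * sp.cos(theta)                    # potential energy
L = T - U                                         # Lagrangian, eq. (1)

# Euler-Lagrange equation (3): d/dt (dL/d(theta_dot)) - dL/d(theta) = 0
eom = sp.diff(sp.diff(L, theta_dot), t) - sp.diff(L, theta)
print(sp.simplify(eom))   # -> g*l*m*sin(theta(t)) + l**2*m*Derivative(theta(t), (t, 2))
```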
Maupertuis' Principle of Least Action
If the Hamiltonian $H(q,\dot q)$, which in robotics is identical to the total energy, stays constant during motion, time can be completely eliminated from Hamilton's principle of least action, leading to Maupertuis' principle
$$S_M(q) = \int_{q_1}^{q_2} p\,\mathrm{d}q, \quad (5)$$
where $p = \partial L/\partial \dot q$ is the generalized momentum, expressed as a function of position along the trajectory of constant energy [13]. These two principles of least action can be derived from each other in the case of constant energy [15]. The elimination of time, and thus of velocities, has two major implications: 1. the search for trajectories of the systems can be performed in the n-dimensional configuration space instead of the 2n-dimensional phase space; and 2. one can access a huge body of results from Riemannian geometry and algebraic topology.
If parametrizing the curve by time, (5) will take the form
$$S_M = \int_{t_1}^{t_2} p^{\mathsf T}\dot q\,\mathrm{d}t = \int_{t_1}^{t_2} 2T\,\mathrm{d}t. \quad (6)$$
Let's compare: in Hamilton's principle of least action we fix the endpoints $q_1, q_2$ and the corresponding times $t_1, t_2$, but we do not fix the total energy. We find trajectories of the system using this principle. In Maupertuis' principle we fix the endpoints $q_1, q_2$ and the total energy, but do not care about times. We find configuration paths only, without velocity information. We can, however, reconstruct time and velocity from the configuration path considering the fixed total energy. For a purely geometric formulation of conservative motions, geodesics play a central role. We introduce them now.
Geodesics on Riemannian Manifolds
The notion of geodesics is one of the most basic concepts in differential geometry [16]. Let $(M, g)$ be a Riemannian manifold with metric $g_{ij}$. A geodesic is the straightest curve between two points [16]. Let $\gamma : [s_1, s_2] \to M$ be a parametric curve. It will be called geodesic if it extremizes the arc-length integral
$$\ell(\gamma) = \int_{s_1}^{s_2} \sqrt{g_{ij}\,\gamma'^i \gamma'^j}\;\mathrm{d}s \quad (7)$$
with $\gamma' = \mathrm{d}\gamma/\mathrm{d}s$. In general the extremum does not need to be a minimum. For example, on a sphere, geodesics are both segments of the great circle passing through two given points, which is unique if the points are not antipodal. Fig. 1 illustrates the principle in the Euclidean space $(\mathbb{R}^2, \delta_{ij})$. For this example, the blue straight line is the globally shortest path and the only geodesic.
Figure 1: Geodesic on Euclidean space. Blue shows a geodesic and orange non-geodesics.
Geodesics, and thus extremizers of (7), satisfy the geodesic equation [18], [19]
$$\ddot\gamma^a + \Gamma^a_{bc}\,\dot\gamma^b \dot\gamma^c = 0, \quad (8)$$
where $\Gamma^a_{bc}$ are Christoffel symbols of the second kind derived solely from the metric
$$\Gamma^a_{bc} = \tfrac{1}{2}\, g^{ai}\left(\partial_b g_{ic} + \partial_c g_{ib} - \partial_i g_{bc}\right), \quad (9)$$
where $g^{ai}$ is the inverse of $g_{ai}$, i.e., $g^{i\alpha} g_{\alpha j} = \delta^i_j$.
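For readers who prefer code, the following sketch evaluates the Christoffel symbols (9) of an arbitrary metric by finite differences; the helper name, the finite-difference step, and the sanity check are our own choices and not taken from the chapter.

```python
import numpy as np

def christoffel(metric, q, eps=1e-6):
    """Gamma[a, b, c] = Γ^a_{bc} of `metric` (a function q -> g_ij) at q, eq. (9)."""
    n = len(q)
    g_inv = np.linalg.inv(metric(q))
    dg = np.zeros((n, n, n))                 # dg[k, i, j] = ∂ g_ij / ∂ q^k
    for k in range(n):
        h = np.zeros(n); h[k] = eps
        dg[k] = (metric(q + h) - metric(q - h)) / (2 * eps)
    Gamma = np.zeros((n, n, n))
    for a in range(n):
        for b in range(n):
            for c in range(n):
                Gamma[a, b, c] = 0.5 * sum(
                    g_inv[a, i] * (dg[b][i, c] + dg[c][i, b] - dg[i][b, c])
                    for i in range(n))
    return Gamma

# Sanity check: a constant metric has vanishing Christoffel symbols.
print(np.allclose(christoffel(lambda q: np.eye(2), np.zeros(2)), 0.0))  # True
```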
A Purely Geometric Perspective on Lagrangian Mechanics
Starting from Maupertuis' principle of least action (5), (6), it can be shown that trajectories of constant energy between two points are geodesics with respect to the Jacobi metric [13], [20], [21]
$${}^J g_{ij}(q) = 2\,\big(E - U(q)\big)\, m_{ij}(q), \quad (10)$$
where $m_{ij}$ is the inertia tensor. The derivation of the Jacobi metric is based on this insight: if energy is constant ($T(q,\dot q) + U(q) = E$), then velocity can be expressed as a function of position on any trajectory. So let us parametrize a motion along the curve γ instead of time t by arc length s, which is in bijective relation s(t) to time. With the notation $q' = \mathrm{d}q/\mathrm{d}s$ we have:
$$T = \tfrac{1}{2}\,\dot q^{\mathsf T} M(q)\,\dot q = \tfrac{1}{2}\left(\tfrac{\mathrm{d}s}{\mathrm{d}t}\right)^{2} q'^{\mathsf T} M(q)\, q'. \quad (11)$$
This relates the differentials dt and ds
$$\mathrm{d}t = \sqrt{\frac{q'^{\mathsf T} M(q)\, q'}{2\,\big(E - U(q)\big)}}\;\mathrm{d}s. \quad (12)$$
Using $2T = 2(E - U)$ allows to remove T from (6). We also substitute t with the curve parameter s and get
$$S_M = \int_{s_1}^{s_2} \sqrt{2\,\big(E - U(q)\big)\; q'^{\mathsf T} M(q)\, q'}\;\mathrm{d}s, \quad (13)$$
which is indeed exactly the arc length with respect to the Jacobi metric (10). This is remarkable! Isoenergetic trajectories of the multi-body system in a potential field are geodesics on the manifold $(Q, {}^J g)$, where Q is the configuration space. This eliminates velocities from the problem - it is purely about curves on the configuration space. The solutions describe only the path itself, not the timing along it. Velocities are obtained by scaling the tangent according to $\dot q = q'\,\mathrm{d}s/\mathrm{d}t$: we need to scale the tangent to satisfy the constant energy condition. We can also reconstruct time by integrating (12). For potential-free systems the metric is proportional to the inertia tensor, ${}^J g_{ij} = 2E\, m_{ij}$, and the constant factor can be ignored when searching for geodesics. Thus, for potential-free rigid body systems, trajectories are geodesics w.r.t. the inertia tensor. In this case, the paths are independent of the energy; varying the energy only changes the speed used to trace out the geodesics in configuration space. In contrast, for systems in potential fields, the geodesics generally vary with the total energy E, as the Jacobi metric is energy-dependent. The necessary and sufficient conditions for the paths to be independent of energy also in presence of potential energy have been derived in [22].
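A small numerical sketch of how the Jacobi metric (10) and the time reconstruction (12) can be evaluated for a user-supplied inertia matrix and potential is given below; the toy point-mass system, the function names, and the midpoint quadrature are our own assumptions, not the authors' implementation.

```python
import numpy as np

def jacobi_metric(q, E, M, U):
    """Jacobi metric (10): 2 (E - U(q)) M(q)."""
    return 2.0 * (E - U(q)) * M(q)

def reconstruct_time(path, E, M, U):
    """Integrate eq. (12) along a discretized path (array of shape (N, n))."""
    t = 0.0
    for k in range(len(path) - 1):
        q_mid = 0.5 * (path[k] + path[k + 1])
        dq = path[k + 1] - path[k]           # q' ds for this segment
        t += np.sqrt(dq @ M(q_mid) @ dq / (2.0 * (E - U(q_mid))))
    return t

# Toy example: unit point mass in a quadratic potential, straight-line path.
M = lambda q: np.eye(2)
U = lambda q: 0.5 * float(q @ q)
path = np.linspace([0.0, 0.0], [1.0, 0.0], 50)
print(reconstruct_time(path, E=2.0, M=M, U=U))
```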
Algorithms: Shooting versus Extremizing Arc Length
How can we use the insights presented so far to find intrinsic paths of dynamic systems? The basic example is still the path of constant energy between two points (Fig. 1), which can be, however, easily extended to periodic paths of line topology and to closed paths. The approach mostly used in robotics for finding a path between the points $q_A$ and $q_B$ in Fig. 1 is to choose an initial guess for the velocity $\dot q_A$ and "shoot" from the initial state $(q_A, \dot q_A)$, i.e. simulate the robot dynamics under additional constraints of time, energy, etc. A measure of the amount by which the point $q_B$ is missed by the path is fed to the optimization algorithm, which adapts the initial velocity $\dot q_A$ until it hopefully hits the point $q_B$. Obviously, for long paths, the problem is not very well conditioned and there exist many improvements, for example by multiple shooting algorithms. For periodic trajectories, the Poincaré map gives a similar procedure for optimization, based on simulating the system dynamics. As a basic principle, one optimizes in the space of curves which are feasible solutions of the differential equation, trying to satisfy the boundary conditions.
In contrast, by using the principle of least action, one optimizes in the space of curves satisfying the boundary conditions, i.e. passing through the points q A and q B , but which are not necessarily solutions of the system's differential equations (yet). The algorithms then make these curves system trajectories by zeroing the amount by which the curves fail to satisfy the geodesic equation. Equivalently, the algorithm extremizes the arc length in the corresponding Jacobi metric, which can be intuitively thought of as contracting rubber strings on the manifold. We look in more detail at this algorithm in the following.
String Relaxation
Imagine you take a string and fix the two ends to two distinct points. Now think of the string as a rubber band: it will naturally contract to the (locally) shortest possible path between the two endpoints.
Suppose we have a not (yet) geodesic curve γ(s, t), which we would like to converge to a geodesic as t → ∞. We take the geodesic equation and make γ(s, t) satisfy it more and more over time by the PDE
$$\frac{\partial \gamma^a}{\partial t} = \frac{\partial^2 \gamma^a}{\partial s^2} + \Gamma^a_{bc}(\gamma)\,\frac{\partial \gamma^b}{\partial s}\frac{\partial \gamma^c}{\partial s}. \quad (14)$$
Let's discretize γ(s, t) in space and time and write it as $\gamma_t(k) = \gamma(k\Delta s, t\Delta T)$ (Fig. 2). We use central differences for the first derivatives and also discretize the second derivative. This results in the update rule
$$\gamma^a_{t+1}(k) = \gamma^a_t(k) + \Delta T\left[\frac{\gamma^a_t(k{+}1) - 2\gamma^a_t(k) + \gamma^a_t(k{-}1)}{\Delta s^2} + \Gamma^a_{bc}\big(\gamma_t(k)\big)\,\frac{\gamma^b_t(k{+}1) - \gamma^b_t(k{-}1)}{2\Delta s}\;\frac{\gamma^c_t(k{+}1) - \gamma^c_t(k{-}1)}{2\Delta s}\right]. \quad (15)$$
Note that we show here an explicit Euler scheme for discretizing (14). This is not what one would implement in practice, but serves to illustrate the idea. Convergence is rather slow for the explicit scheme as small time steps must be chosen. For this kind of relaxation dynamics much faster convergence can be obtained by switching to implicit solvers allowing to crank up ∆T a lot [23].
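As a concrete illustration of the update rule (15), the sketch below performs explicit-Euler string relaxation on an open string with fixed endpoints. It is our own minimal example, not the authors' implementation: the demo uses the flat Euclidean metric (all Christoffel symbols vanish), and for a Jacobi metric one would plug in a Christoffel routine such as the finite-difference sketch given earlier.

```python
import numpy as np

def relax_step(curve, christoffel, dT, ds):
    """One explicit-Euler step of update rule (15); endpoints stay fixed.

    curve: array (K, n) of nodes gamma_t(k).
    christoffel: function q -> Gamma[a, b, c] at q.
    """
    new = curve.copy()
    for k in range(1, len(curve) - 1):
        second = (curve[k + 1] - 2 * curve[k] + curve[k - 1]) / ds**2
        tangent = (curve[k + 1] - curve[k - 1]) / (2 * ds)
        Gamma = christoffel(curve[k])
        correction = np.einsum('abc,b,c->a', Gamma, tangent, tangent)
        new[k] = curve[k] + dT * (second + correction)
    return new

# Demo in flat space (all Christoffel symbols vanish): the string contracts
# to the straight segment between its fixed endpoints.
flat = lambda q: np.zeros((2, 2, 2))
s = np.linspace(0.0, 1.0, 21)
curve = np.stack([s, 0.3 * np.sin(np.pi * s)], axis=1)   # bent initial string
for _ in range(4000):
    curve = relax_step(curve, flat, dT=5e-4, ds=s[1] - s[0])
print(np.max(np.abs(curve[:, 1])))   # essentially 0: the string has straightened
```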
Example: Dynamical System
Let us next try string relaxation on a dynamical system. We consider the configuration space of a double pendulum and choose a total energy E. Assume we want a trajectory from a configuration $q_A$ to $q_B$. For constant energy, we can fully capture its dynamics in the Jacobi metric and make the problem purely geometric. We fix a simulated string at the two configurations $q_A$ and $q_B$ and let it contract under the Jacobi metric. Fig. 3 shows one such example. The dashed blue line in Fig. 3 shows the initial string. Over the iterations the string converges to the orange curve. At the same time, the Riemannian length of the string decreases (blue in Fig. 3b). In orange we show the convergence velocity of the relaxation measured by $v(t) = \sum_k \lVert \gamma_{t-1}(k) - \gamma_t(k) \rVert$.
Once the string has converged to a geodesic γ(s) we compare it to a forward simulated system. One last step is to determine the initial velocity $\dot q_A$. We scale the tangent of γ(s) to match the physical energy using the inverse of (12), i.e., such that the total energy is preserved:
$$\dot q_A = \gamma'(s)\,\frac{\mathrm{d}s}{\mathrm{d}t} = \gamma'(s)\,\sqrt{\frac{2\,\big(E - U(q_A)\big)}{\gamma'(s)^{\mathsf T} M(q_A)\,\gamma'(s)}}\quad\text{at } \gamma(s) = q_A.$$
Starting from the state $(q_A, \dot q_A)$ we simulate the double pendulum using a Runge-Kutta integration scheme for some time and obtain the curve shown in orange in Fig. 3c. We observe that the simulated trajectory follows the relaxed string in configuration space. As we still have energy at $q_B$ we pass by it and continue.
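A minimal sketch of this velocity-scaling step for a discretized string is given below; the toy inertia matrix, potential, and finite-difference tangent are illustrative assumptions, not the authors' code.

```python
import numpy as np

def initial_velocity(path, E, M, U):
    """Scale the first tangent of a discretized geodesic so that T + U = E.

    path: array (K, n) of configurations along the relaxed string.
    Returns the velocity at path[0] to use as initial condition for simulation.
    """
    q0 = path[0]
    tangent = path[1] - path[0]                 # unnormalized q' at the start
    kinetic_needed = E - U(q0)                  # T = E - U must be positive here
    scale = np.sqrt(2.0 * kinetic_needed / (tangent @ M(q0) @ tangent))
    return scale * tangent                      # q_dot = q' * ds/dt

# Toy check with a unit mass and a quadratic potential.
M = lambda q: np.eye(2)
U = lambda q: 0.5 * float(q @ q)
path = np.linspace([0.0, 0.0], [1.0, 0.0], 10)
qd0 = initial_velocity(path, E=1.0, M=M, U=U)
print(0.5 * qd0 @ M(path[0]) @ qd0 + U(path[0]))   # ≈ 1.0 = E
```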
In the example we have fixed the string at two fixed end-points q A and q B . This allows us to find geodesics connecting two configurations. But nobody stops us from closing the string by connecting the first and the last vertex on the string. Then, the string has no boundary conditions that would hold it in place. Often it will collapse to a point, but sometimes the topology of the space prevents this collapse -we will look into this phenomenon in the next section. In that case we can use the string relaxation to find periodic orbits.
Topological Insights into Periodic Orbits
Besides providing algorithms to find intrinsic trajectories, the principle of least action leads to the possibility of making general theoretical predictions about the types and numbers of special trajectories, such as periodic orbits. To get a flavor of the approaches, one needs some topology, in particular algebraic topology. We will only show a brief summary of the most essential concepts here. The main interest of algebraic topology is to classify manifolds into certain equivalence classes and find invariant quantities which uniquely characterize them [24]. Manifolds are homotopy-equivalent if they can be smoothly deformed into each other by a homotopy [25], [26]. In Fig. 4, the doughnut and the mug are homotopy-equivalent and this is, intuitively speaking because they are both 2-dimensional, closed, unbounded surfaces with one hole.
An effective way to classify n-dimensional manifolds is to count their number of holes of dimensions 0, ..., n, which are described by the Betti numbers b 0 , ..., b n . For 2-dimensional surfaces, a zero-dimensional hole is a gap between two path-connected components. So, for example, b 0 (S 2 ) = 1 for a sphere S 2 and b 0 = k for a manifold composed of k disjoint spheres.
One-dimensional holes are found and counted by classes of closed curves (1-cycles) on the manifold, which cannot be shrunk to a point. For example, on a sphere S 2 all closed curves can be shrunk to a point (Fig. 5), so b 1 (S 2 ) = 0. On a torus T 2 , there are two distinct classes of curves that cannot be shrunk to a point, as shown in Fig. 5 in orange. Therefore, on the torus b 1 (T 2 ) = 2. In each of the two classes, there are infinitely many curves, which can be continuously deformed into each other. However, note that the curves from class a cannot be continuously deformed into curves of class b, the two sets are disjoint. Finally, both the sphere and the torus have one 2-dimensional hole (whose boundary is a closed surface), and therefore, b 2 (S 2 ) = b 2 (T 2 ) = 1.
For the analysis of periodic, closed paths of a robot, the first Betti number, related to families of closed curves, is of particular interest. Consider a double pendulum (2 DoF vertical manipulator with gravity). Its configuration space is the 2-torus T 2 . The two distinct types of cycles a and b form a basis (independent generators) for the group of all possible cycles. The neutral element of the group is the zero cycle, i.e., the set of all curves which can be shrunk to a point. The composition a + b of two elements of the group means that the cycles are just followed one after the other, and, e.g., 2a + 3b means that a curve winds two times around the first joint and three times around the second joint. The order does not matter, the group is considered abelian (commutative). This group is called the homology group H 1 (T 2 ) and its rank is indicated by the Betti number b 1 . Any closed curve on the torus can be generated starting from αa + βb with α, β ∈ Z and through a homotopy (continuous deformation). Note, again, that for different α, β, the curves cannot be continuously deformed into each other, the classes are disjoint. This will be used in Section 4.1 to directly show that there are infinitely many periodic closed orbits for the double pendulum, at least one for each element of the homology group H 1 (T 2 ).
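To connect the homology classes to something computable: given a discretely sampled closed curve in joint space, the class $\alpha a + \beta b$ it belongs to can be read off from its winding numbers. The following sketch is our own illustration, not code from the paper; it counts the net number of full turns per joint.

```python
import numpy as np

def winding_numbers(angles):
    """Net number of full turns per joint for a closed curve on the torus.

    angles: array (K, n) of joint angles sampled along the closed curve
    (the last sample is assumed to connect back to the first).
    """
    closed = np.vstack([angles, angles[:1]])
    steps = np.diff(closed, axis=0)
    # wrap each step into (-pi, pi] so jumps across the 2*pi cut are counted correctly
    steps = (steps + np.pi) % (2.0 * np.pi) - np.pi
    return np.rint(steps.sum(axis=0) / (2.0 * np.pi)).astype(int)

# A curve that winds twice around joint 1 and once around joint 2 (class 2a + b).
s = np.linspace(0.0, 1.0, 400, endpoint=False)
curve = np.stack([(2 * 2 * np.pi * s) % (2 * np.pi),
                  (1 * 2 * np.pi * s) % (2 * np.pi)], axis=1)
print(winding_numbers(curve))   # [2 1]
```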
A Case Study: Periodic Orbits of the Double Pendulum
In this section we will discuss the large variety of periodic trajectories of conservative robot models based on Fig. 6. We will classify the type of periodic orbits into three classes: toroidal orbits, disk orbits, and brake orbits. Toroidal orbits are the ones directly predicted by algebraic topology; they are due to the toroidal structure of the configuration space. These orbits turn at least one joint in full cycles. Disk orbits happen completely within a chart of disk topology, i.e., we do not need the wrapping of angles for them. Finally, brake orbits are extensions of normal modes of linear systems. Most types of trajectories discussed in this section will be present also for robots with arbitrary degrees of freedom. Toroidal orbits are only possible as soon as one of the joints allows full turns.
Toroidal Orbits
The dynamics of the conservative double pendulum in a gravity field is known to display chaotic behavior [28]. This is due to the inertial couplings and the upper bounded potential, having unstable equilibria at the upright configurations. If the total energy is high enough to permit full turns of the joints, i.e. $E > U_{max}(q)$, the system has, however, also infinitely many periodic orbits. This is a direct consequence of the topology of the torus and its homology class $H_1(T^2)$. As shown in Sec. 3, there are infinitely many disjoint classes of closed curves, obtained by integer linear combination and homotopic deformation of the two base cycles on the torus. According to their definition, these curves cannot be shortened to a point, so there is a minimal length in each of these classes, and therefore the curve having that length will be a geodesic. We take the numerical string relaxation algorithm of Sec. 2 to find such geodesics. First, we fix the desired energy E needed to determine the Jacobi metric (10). Then we start by creating a string of the correct topology, i.e., we create an initial string in the class $(\alpha_1, \alpha_2)$ for $\alpha_1, \alpha_2 \in \mathbb{Z}$. Iteratively updating the nodes of the string by the update rule will let the string converge to a geodesic - and thus to a periodic orbit of the double pendulum. This result holds in any dimension, as the Betti number for an n-dof pendulum is $b_1(T^n) = n$. It is indeed a classical result [13] that
Proposition 1. For any n integers $\alpha_1, ..., \alpha_n$, of which at least one is nonzero, there exists at least one periodic trajectory of the n-dof pendulum performing $\alpha_i$ rotations around joint i, for i = 1, ..., n.
Fig. 7 shows the trajectories of the double pendulum in the gravity field for the combinations (0, 1), (1, 0), (1, 2), and (2, 1) for some fixed energies. It is important to note that each orbit is only valid for the energy it was computed for; the geodesics are not invariant w.r.t. the energy in the Jacobi metric. We observe, however, that they continuously deform with variations in energy. For the double pendulum, we sometimes find more than one geodesic, also for fixed energies. Only one of them will be a global minimum, others only local ones.
Disk Orbits
The algebraic topology results do not say anything about the existence of periodic orbits of the type (0, 0), i.e., of closed trajectories that oscillate in an interval less than 2π. Indeed, such trajectories do not need to exist in general (for arbitrary metrics), because all the zero cycles can be shrunk to a point; there is no hole to prevent their collapse. Nevertheless, it is not excluded that the metric encodes bumps or other local geometrical features such that closed curves make the arc length stationary: this makes them geodesics and, simultaneously, periodic orbits. Indeed, studies on chaotic systems show that they often display a rich variety of unstable periodic orbits [29], [30]. This has also been shown for mechanical systems [31], [32]. We employ a scheme similar to [29]: we take a boundary value solver [33] to find solutions q(t) to the differential equation (3) such that $q(0) = q(T)$, $\dot q(0) = \dot q(T)$ and $H(q, \dot q) = E_{des}$ for some desired energy $E_{des}$. The estimated period time T is updated as well during the optimization. Once we have found a solution we perform numerical continuation [34] over the energy to generate families of solutions.
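For illustration only, the snippet below sets up the same kind of periodicity-plus-energy residual for a stand-in conservative 2-DoF system (unit masses, pendulum-like potential with a quadratic coupling; not the double pendulum of the paper) and solves it by single shooting with a generic least-squares solver rather than a dedicated boundary value solver. All names, the toy potential, and the initial guess are our own assumptions; a reasonable initial guess for the state and period is required.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def U(q):
    return -np.cos(q[0]) - np.cos(q[1]) + 0.25 * (q[0] - q[1])**2

def grad_U(q):
    return np.array([np.sin(q[0]) + 0.5 * (q[0] - q[1]),
                     np.sin(q[1]) - 0.5 * (q[0] - q[1])])

def flow(x0, T):
    # state x = (q1, q2, qd1, qd2); unit masses, so qdd = -grad U
    rhs = lambda t, x: np.concatenate([x[2:], -grad_U(x[:2])])
    return solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-10).y[:, -1]

def residual(z, E_des):
    x0, T = z[:4], z[4]
    energy = 0.5 * x0[2:] @ x0[2:] + U(x0[:2])
    return np.concatenate([flow(x0, T) - x0, [energy - E_des]])  # periodicity + energy

# Initial guess: small in-phase oscillation; desired energy fixed accordingly.
E_des = U(np.array([0.5, 0.5]))
z0 = np.array([0.5, 0.5, 0.0, 0.0, 6.4])
sol = least_squares(residual, z0, args=(E_des,), xtol=1e-12)
print(sol.x)                                      # periodic (q0, q0_dot) and period T
print(np.max(np.abs(residual(sol.x, E_des))))     # close to zero
```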
We have found a large variety of periodic orbits and show two particularly simple families in Fig. 8, which continuously vary with energy. Each row in Fig. 8 shows one family. On the left we show the orbits in configuration space and observe that they continuously deform with energy. In the middle we show the highest-energy orbits in Cartesian space; and on the right the same orbit in configuration space. It looks like, for low energies, they collapse into brake orbits (more on those later in Sec. 4.3), although this observation has no theoretical backup yet. It will certainly be interesting to further investigate these types of trajectories in the future, because they are well suited for robots and also for biological limbs, which cannot perform full turns. For example, the swinging motion of a leg could be performed in such a mode.
Brake Orbits aka Nonlinear Normal Modes
Brake orbits were the primary focus of our initial research interest and were presented in detail in [22], [35]. We understand this type of orbit as a generalization of the normal modes of linear systems. For linear dynamics, periodic motions will take place in configuration space in the directions given by the eigenvectors. A line of research [36], [37] dating back to Rosenberg [38] noted that there is a straightforward generalization to nonlinear systems and therefore called these oscillations nonlinear normal modes. These modes were studied, however, only for systems composed of point masses and nonlinear potentials, thus not being applicable to robotic systems. In order to extend the results to robotics and to emphasize the connection to the linear modes, we coined the concept of eigenmanifolds in [35]. Each trajectory has the property that it oscillates back and forth between two points, where the system stops and reverses motion.
It turns out that trajectories of this kind have also been studied in a quite general setup with tools of differential geometry and algebraic topology since Seifert [39]. He has proven the existence of one brake orbit and conjectured there should be even more, both in his seminal work [39]: For any conservative mechanical system with closed equipotential surfaces, there exists at least one brake orbit for each energy level.
Conjecture 1 (Seifert). For any conservative mechanical system with n degrees of freedom and with closed equipotential surfaces, there exist at least n brake orbits for each energy level. This is an idea certainly inspired by the n modes of n-DoF linear systems. Some authors have provided proofs of the conjecture for particular cases [40] and Giambó et al. claim in a recent preprint to have proven it [41] under conditions which apply to general Hamiltonian systems. However, the additional conditions of the theorems are generally not satisfied by robot dynamics equations, leaving this as a still open theoretical question.
Based on the insights of the theory, we developed numerical algorithms for searching the nonlinear modes (brake orbits) starting from the linearized solutions. Fig. 9 presents brake orbits of the double pendulum. On the left we show brake orbits growing out of the two linear eigenvectors. The dots show two configurations, which are used as initial conditions for the simulations on the right. In the middle, we display the brake orbits as a surface parameterized by energy as additional coordinate. This is a representation of the eigenmanifolds which is an alternative to the one from [35]. As predicted by the theorem, at least two nonlinear modes exist for every energy level. We report that for all robot systems analyzed so far, including legged robots and a 7 DoF robot arm [42], the Seifert conjecture holds. Fig. 10 summarizes the types of periodic orbits presented in this paper. One orbit for each type of our example system is also shown on the torus in Fig. 6b. For very low energies, the linearization around the equilibrium holds as an approximation, and one will have linear modes. As energy increases, one observes that the modes begin to bend and we recognize that the linear modes were a particular case of the nonlinear normal modes (brake orbits). Indeed, with the continuation method, at least two nonlinear modes can be found for the 2-DoF system. Our experience so far was that in general, at least n nonlinear modes can be found for n-DoF systems. Nonlinear normal modes will cease to exist as soon as the total energy exceeds the maximal possible potential energy, i.e. if $E > U_{max}(q)$. In that case, there is no point where all the energy is purely potential and there cannot be points with zero velocity.
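Numerical continuation starts from the linear modes of the system linearized at its stable equilibrium. A minimal sketch of that starting point is shown below; the mass and stiffness matrices are illustrative placeholders, not identified parameters of the paper's double pendulum, and SciPy's generalized symmetric eigensolver is our own choice of tool.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical linearization of a 2-DoF arm at its stable equilibrium:
# mass matrix M = M(q_eq) and stiffness K = d^2 U / dq^2 evaluated at q_eq.
M = np.array([[2.5, 0.8],
              [0.8, 0.5]])
K = np.array([[30.0, 10.0],
              [10.0, 10.0]])

# Generalized eigenvalue problem K v = omega^2 M v gives the linear modes,
# which serve as initial guesses for numerical continuation of brake orbits.
omega_sq, modes = eigh(K, M)
for w2, v in zip(omega_sq, modes.T):
    print("frequency", np.sqrt(w2), "mode direction", v / np.linalg.norm(v))
```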
Classification and Overview on Periodic Orbits
Although not predicted so far by the algebraic topology arguments, we have numerically shown that closed orbits without full rotations also exist, which we call disk orbits. These might be of particular interest to robotics because these trajectories can be executed also by robots that do not permit endless rotation, which is mostly the case in today's robots. Finally, starting at some minimal energy $E > U_{min}(q)$, allowing at least one joint to do full turns, closed multi-turn orbits appear. For $E > U_{max}(q)$, i.e., energies exceeding the maximal potential energy, these toroidal orbits are the dominant periodic behavior. There are infinitely many such closed orbits for any robot.
Conclusion
We hope to have triggered some interest of the robotics community in better understanding the potential benefits of geometric and topological approaches to study the behavior of robot dynamics from a global perspective. Classical robotics control takes a rather local view so far, while global solutions are traditionally the field of motion planning. The presented tools might provide a methodical bridge between the two areas.
Regarding the practical relevance, consider the large variety of periodic orbits we found even in our simplest example. Complex robots will display even richer behaviors! Imagine we can assemble our tasks out of pieces of these orbits - or even better: a task might coincide with a periodic orbit if we design the system properly. All one needs then is to compensate for friction, stabilize the natural orbits [42] and possibly develop approaches to shape them to a certain extent, by posture or by control. By designing and exploiting the intrinsic dynamics of a robot properly, tasks can be achieved more naturally, more efficiently, and with higher performance. If we would like a robot to jump like a flea, we should probably not build an elephant but rather something close to a flea.
"year": 2022,
"sha1": "1345d320813f1bd050f8c8f852269e4057914968",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2211.08165",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1345d320813f1bd050f8c8f852269e4057914968",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
266928471 | pes2o/s2orc | v3-fos-license | Integrative bioinformatics analysis of miRNA and mRNA expression profiles and identification of associated miRNA-mRNA network in intracranial aneurysms
Background Intracranial aneurysms (IAs) represent protrusions in the vascular wall, with their growth and wall thinning influenced by various factors. These processes can culminate in the rupture of the aneurysm, leading to subarachnoid hemorrhage (SAH). Unfortunately, over half of the patients prove unable to withstand SAH, succumbing to adverse outcomes despite intensive therapeutic interventions, even in premier medical facilities. This study seeks to discern the pivotal microRNAs (miRNAs) and genes associated with the formation and progression of IAs. Methods The investigation gathered expression data of miRNAs (from GSE66240) and mRNAs (from GSE158558) within human aneurysm tissue and superficial temporal artery (STA) samples, categorizing them into IA and normal groups. This classification was based on the Gene Expression Omnibus (GEO) database. Results A total of 70 differentially expressed microRNAs (DEMs) and 815 differentially expressed mRNAs (DEGs) were pinpointed concerning IA. Subsequently, a miRNA-mRNA network was constructed, incorporating 9 significantly upregulated DEMs and 211 significantly downregulated DEGs. Simultaneously, functional enrichment and pathway analyses were conducted on both DEMs and DEGs. Through protein-protein interaction (PPI) network analysis and functional enrichment, 9 significantly upregulated DEMs (hsa-miR-188-5p, hsa-miR-590-5p, hsa-miR-320b, hsa-miR-423-5p, hsa-miR-140-5p, hsa-miR-486-5p, hsa-miR-320a, hsa-miR-342-3p, and hsa-miR-532-5p) and 50 key genes (such as ATP6V1G1, KBTBD6, VIM, PA2G4, DYNLL1, METTL21A, MDH2, etc.) were identified, suggesting their potential significant role in IA. Among these genes, ten were notably negatively regulated by at least two key miRNAs. Conclusions The findings of this study provide valuable insights into the potential pathogenic mechanisms underlying IA by elucidating a miRNA-mRNA network. This comprehensive approach sheds light on the intricate interplay between miRNAs and genes, offering a deeper understanding of the molecular dynamics involved in IA development and progression.
Introduction
Intracranial aneurysms (IAs) represent pathological localized protrusions of the arterial wall, exhibiting a distinct deviation from the typical three-layer structure of the vascular wall.The rupture of IAs stands as the predominant cause, accounting for approximately 85 % of non-traumatic subarachnoid hemorrhage (SAH).This rupture results in the infiltration of blood into the subarachnoid space of the brain, contributing to non-traumatic intracerebral hemorrhage (ICH), a phenomenon more prevalent in individuals of working age [1,2].The intricate progression of aneurysmal disease carries the potential for severe and enduring neurological deficits or even mortality.Various factors are believed to be associated with the occurrence of IAs, encompassing age, hypertension, genetic predisposition, hemodynamic alterations, and environmental influences [3].Despite numerous studies on IAs, the precise pathophysiological mechanisms governing the formation, progression, and rupture of aneurysms remain elusive.The complex interplay of these factors and their contribution to the pathological evolution of IAs poses a significant challenge in comprehending the intricate dynamics of aneurysm development.Consequently, unveiling the underlying mechanisms holds crucial importance for advancing our understanding of these vascular abnormalities and facilitating the development of targeted interventions for their prevention and treatment.
MicroRNAs (miRNAs), small non-coding RNA molecules spanning approximately 18-22 nucleotides, serve as potent post-transcriptional regulators of gene expression.Their regulatory process is manifested through the binding to the 3′-untranslated (3′-UTR) regions of target mRNAs, specifically those encoding protein-coding genes, thereby exerting a pronounced negative impact on their translation processes [4].The indisputable significance of miRNAs in diverse biological phenomena, including cell cycle control, proliferation, differentiation, and apoptosis, underscores their pivotal role in orchestrating intricate cellular functions [4].The expanding body of evidence compellingly establishes the pivotal role of miRNAs in the initiation, growth, and progression of IAs [5,6].The multifaceted pathophysiology characterizing IAs, marked by endothelial dysfunction, phenotypic modulation of vascular smooth muscle cells (VSMCs), and the accumulation of inflammatory cells, is intricately governed by the regulatory influence of miRNAs [5].Yet, a critical knowledge gap persists, necessitating a comprehensive exploration to identify the specific miRNAs and their target genes implicated in these intricate IA-associated processes.This study aspires to unravel the fundamental pathophysiological pathways integral to the molecular mechanisms dictating the formation and development of IAs.By leveraging miRNA and mRNA expression profiles, our aim is to pinpoint and elucidate the pivotal players within these pathways.Furthermore, we endeavor to construct a comprehensive miRNA-mRNA regulatory network, offering nuanced insights into the intricate dynamics underpinning IA formation and development.Envisioning that these findings will not only enhance our understanding of IA pathogenesis but also pave the way for the development of sophisticated diagnostic tools and molecularly targeted therapies, we are poised to unlock novel avenues for addressing the complex challenges posed by IAs in clinical settings.
Microarray data
The microarray datasets containing miRNA and mRNA expression profiles linked to IAs were acquired from the Gene Expression Omnibus (GEO) database, administered by the National Center for Biotechnology Information (NCBI), and can be accessed at https://www.ncbi.nlm.nih.gov/geo/); accessed in September 2022).Notably, the miRNA expression dataset, GSE66240, was conducted on the GPL17303 platform, while the mRNA expression dataset, GSE158558, utilized the GPL20301 platform (Illumina HiSeq 4000).This dataset encompasses a total of 26 samples, comprising 10 samples derived from individuals with IAs and 16 from specimens of the superficial temporal artery (STA).To ensure methodological consistency and precision across various datasets and platforms, distinctive preprocessing techniques were applied.Specifically, the miRNA microarray analysis employed the miRCURY LNA Array (version 11.0; Exiqon, Vedbaek, Denmark).Meanwhile, for the mRNA microarray, the oligo package in R (version 3.4.2) was employed.Rigorous procedures for background correction and normalization were implemented on raw data, ensuring the reliability and robustness of subsequent analyses.Crucially, it is essential to underscore that ethical committee approval was considered unnecessary for this study.This decision is well-founded on the premise that the datasets utilized were sourced from public databases and were handled with strict adherence to GEO publication guidelines and data access policies.Consequently, the study aligns with established ethical standards for handling publicly available datasets, reflecting a commitment to responsible and ethical data utilization principles.This approach emphasizes transparency and integrity in the utilization of data from public repositories, eliminating the need for specific ethical clearances in this context.Fig. 1 demonstrates the workflow of the study.
Identification of DEGs
Differentially expressed genes (DEGs) distinguishing between IA and STA samples were identified through the utilization of GEO2R, an interactive web tool accessible at http://www.ncbi.nlm.nih.gov/geo/geo2r. GEO2R facilitates the comparative analysis of two or more datasets within a GEO series, enabling the identification of DEGs across various experimental conditions. In this analysis, P-values, along with the Benjamini and Hochberg false discovery rates, were employed to strike a balance between pinpointing statistically significant genes and mitigating the risk of false positives. To enhance the robustness of the results, several data refinement steps were implemented. Probe sets lacking corresponding gene symbols were excluded, and in cases where genes were represented by more than one probe set, either the removal or averaging of such duplicates was undertaken. A stringent statistical criterion was applied, considering a Fold Change (LogFC) greater than 1.5 and a P-value less than 0.05 as indicative of statistical significance. These parameters were chosen to ensure the identification of DEGs that not only reached statistical significance but also exhibited a biologically meaningful degree of differential expression.
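A minimal sketch of this filtering step is shown below; the table is a toy stand-in for a GEO2R export (column names and values are assumed), and the criterion is read here as an absolute log fold change above 1.5 together with P < 0.05.

```python
import pandas as pd

# Toy per-probe table in the shape of a GEO2R export; values are illustrative,
# not the study's data (gene symbols reused from the paper's key gene list).
df = pd.DataFrame({
    "gene_symbol": ["VIM", "VIM", "PA2G4", "MDH2", None],
    "logFC":       [-2.1, -1.9,   -1.7,    0.4,   3.0],
    "P.Value":     [0.001, 0.004,  0.02,   0.30,  0.01],
})
df = df.dropna(subset=["gene_symbol"])                                   # drop probes without a symbol
df = df.groupby("gene_symbol", as_index=False).mean(numeric_only=True)   # average duplicate probes
degs = df[(df["logFC"].abs() > 1.5) & (df["P.Value"] < 0.05)]            # |logFC| > 1.5 and P < 0.05
print(degs.sort_values("logFC"))
```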
Enrichment analysis
The Gene Ontology (GO) database serves as a valuable resource for predicting the functional annotations of gene products across categories such as GO-biological process (BP), GO-cellular component (CC), and GO-molecular function (MF) [7]. Simultaneously, the Kyoto Encyclopedia of Genes and Genomes (KEGG) database is commonly employed to anticipate the pathways involving specific genes [8]. To decipher the functional implications of differentially expressed microRNAs (DEMs) and their negatively correlated target genes, the clusterProfiler package (version 3.8.1; accessible at http://www.bioconductor.org/packages/release/bioc/html/clusterProfiler.html) [9] was utilized. Through the clusterProfiler package, enrichment analyses for GO and KEGG were conducted on the identified DEMs, shedding light on the biological processes, cellular components, and molecular functions implicated in the context of these miRNA expressions. Subsequently, the genes within this module underwent further GO and KEGG analyses using the DAVID tool. A false discovery rate (FDR) threshold of <0.05 was defined as significant, ensuring a stringent criterion for the identification of biologically meaningful enrichments. This integrative approach provides a comprehensive understanding of the functional landscape associated with the identified miRNAs and their target genes, enhancing our insights into the intricate molecular mechanisms at play in the context of IAs.
Fig. 2. Illustrates the expression profiles and cluster analysis of microRNAs (miRNAs) in intracranial aneurysm (IA) samples. In panel (A), a heatmap showcases the differentially expressed microRNAs (DEMs). The color scheme employs blue for IA samples and red for control samples, effectively portraying the distinct expression patterns between the two groups. In panel (B), a volcano plot represents the DEMs, emphasizing the relationship between statistical significance and fold change. Red dots signify upregulation, green dots denote downregulation, and gray dots indicate no significant differential expression. This visualization offers a clear and concise representation of the magnitude and significance of expression changes in the analyzed miRNAs.
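The enrichment analyses themselves were run with clusterProfiler and DAVID; purely as an illustration of the underlying over-representation test, the sketch below implements a one-sided hypergeometric test with Benjamini-Hochberg correction in Python. The gene sets, gene symbols, and background size are illustrative assumptions, not the study's annotation data.

```python
from scipy.stats import hypergeom

def enrich(study_genes, gene_sets, background_size):
    """One-sided hypergeometric enrichment with Benjamini-Hochberg FDR."""
    study = set(study_genes)
    rows = []
    for name, members in gene_sets.items():
        members = set(members)
        k = len(study & members)                                   # hits in the study set
        p = hypergeom.sf(k - 1, background_size, len(members), len(study))
        rows.append((name, k, p))
    rows.sort(key=lambda r: r[2])
    m = len(rows)
    fdr = [min(p * m / (i + 1), 1.0) for i, (_, _, p) in enumerate(rows)]
    for i in range(m - 2, -1, -1):                                 # enforce BH monotonicity
        fdr[i] = min(fdr[i], fdr[i + 1])
    return [(name, k, p, q) for (name, k, p), q in zip(rows, fdr)]

# Toy usage with two invented gene sets and real (but illustratively chosen) symbols.
gene_sets = {"AMPK signaling": ["PRKAA1", "PRKAB1", "PPARGC1A"],
             "DNA replication": ["MCM2", "MCM3", "PCNA"]}
for name, k, p, q in enrich(["PRKAA1", "PRKAB1", "MCM2"], gene_sets, background_size=20000):
    print(name, k, round(p, 6), round(q, 6))
```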
PPI network analysis
The prediction of the protein-protein interaction (PPI) network was carried out using the Retrieval of Interacting Genes (STRING) online database search tool, accessible at http://string-db.org (version 10.0) [10]. Analyzing functional interactions among proteins is instrumental in gaining insights into the mechanisms underlying disease occurrence and development. In this study, the PPI network for DEGs was constructed utilizing the STRING database, with interactions possessing a composite score greater than 0.4 considered statistically significant.
Cytoscape, an open-source bioinformatics software platform for visualizing molecular interaction networks (version 3.4.0) [11], was employed to construct the PPI networks. Further refinement and identification of crucial modules within the PPI networks were facilitated by the Molecular Complex Discovery (MCODE) plug-in (version 1.4.2) [12]. MCODE is an application designed for clustering networks based on topology, aiding in the identification of densely connected regions. In this context, the PPI networks were created using Cytoscape, and the most significant module was pinpointed using MCODE. The selection criteria for identifying significant modules were set as follows: MCODE scores exceeding 5, a degree cutoff of 2, a node score cutoff of 0.2, a maximum depth of 100, and a k-score of 2. This approach ensures a comprehensive exploration of the PPI network, emphasizing the detection of densely connected regions that may play pivotal roles in the context of the identified DEGs related to IAs.
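As a rough sketch of the network-filtering step, the snippet below keeps STRING-style interactions with a combined score above 0.4 (STRING downloads commonly report the score on a 0-1000 scale) and extracts a densely connected region with a k-core. This is only a stand-in for the Cytoscape/MCODE workflow, and the edge list is invented for illustration, reusing gene symbols mentioned in this study.

```python
import networkx as nx
import pandas as pd

# Toy edge list in the spirit of a STRING export; interactions are illustrative.
edges = pd.DataFrame({
    "protein1":       ["VIM", "VIM", "PA2G4", "DYNLL1", "MDH2", "KBTBD6"],
    "protein2":       ["PA2G4", "DYNLL1", "DYNLL1", "MDH2", "VIM", "ATP6V1G1"],
    "combined_score": [810, 650, 540, 300, 720, 910],
})
edges = edges[edges["combined_score"] > 400]   # composite score > 0.4, as in the text

G = nx.Graph()
G.add_edges_from(edges[["protein1", "protein2"]].itertuples(index=False, name=None))

# MCODE itself is a Cytoscape plug-in; a k-core is used here only as a rough,
# easily computed stand-in for a densely connected module.
module = nx.k_core(G, k=2)
print(sorted(module.nodes()))   # e.g. ['DYNLL1', 'PA2G4', 'VIM']
```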
Construction of the miRNA-targeted gene network
The identification of downstream target genes based on the Differentially Expressed MicroRNAs (DEMs) derived from the analysis of datasets GSE66240 and GSE50867 involved a multi-step process. Utilizing the TargetScan (http://www.targetscan.org/vert_71/), miRanda (http://zmf.umm.uni-heidelberg.de/apps/zmf/mirwalk), and miRDB (http://www.mirdb.org/) databases, predictions of downstream target genes were performed. The resulting target genes from these three databases were then compared with the dataset [13][14][15]. Simultaneously, the DEMs from the GSE158558 dataset were considered, and an intersection was taken to identify candidate target genes. Leveraging the regulatory relationships between miRNAs and mRNAs, a comprehensive miRNA-mRNA regulatory network was established. To further enhance the regulatory network's complexity, predictions of Transcription Factors (TFs) that regulate miRNAs were made using the Transcriptional Regulatory Relationships Unraveled by Sentence-based Text mining (TRRUST) database, accessible at http://www.grnpedia.org/trrust/ [16]. This approach relies on existing literature to predict TFs influencing miRNA regulation. By incorporating data from multiple sources and databases, this comprehensive strategy ensures a thorough exploration of the regulatory landscape, unveiling potential interactions and providing a nuanced understanding of the intricate relationships between miRNAs, mRNAs, and transcription factors.
Fig. 3. Presents the expression profiles and cluster analysis of gene targets in intracranial aneurysm (IA) samples. In panel (A), a heatmap illustrates the expression values of differentially expressed genes (DEGs). The color scheme assigns red to IA samples and blue to control samples, effectively depicting the distinctive expression patterns between the two groups. In panel (B), a volcano plot showcases the DEGs, emphasizing the relationship between statistical significance and fold change. Red dots indicate upregulation, green dots signify downregulation, and gray dots denote no significant differential expression. This visual representation succinctly communicates the magnitude and significance of expression changes in the analyzed genes.
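Conceptually, the candidate-target step reduces to set intersections; the sketch below illustrates it for one miRNA. The per-database lists are invented for illustration (only the gene symbols themselves are taken from this study's reported pairs), so the result is not a reproduction of the paper's tables.

```python
# Hypothetical per-database prediction lists for one miRNA (e.g. hsa-miR-188-5p).
targetscan = {"ATP6V1G1", "XRCC5", "ZFP91", "CBFB", "VIM"}
miranda    = {"ATP6V1G1", "XRCC5", "ZFP91", "CBFB", "PA2G4"}
mirdb      = {"ATP6V1G1", "XRCC5", "CBFB", "MDH2"}

predicted = targetscan & miranda & mirdb            # supported by all three tools
downregulated_degs = {"ATP6V1G1", "XRCC5", "CBFB", "VIM", "TIMP3"}
candidates = predicted & downregulated_degs         # intersect with the DEG list
print(candidates)                                   # e.g. {'ATP6V1G1', 'XRCC5', 'CBFB'}
```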
Differential expression analysis
A thorough examination of the GSE66240 dataset revealed a total of 70 Differentially Expressed MicroRNAs (DEMs), providing insights into the distinctive expression patterns between samples associated with IA and their normal counterparts. Within this set, 46 DEMs displayed upregulation, while 24 DEMs exhibited downregulation, as illustrated in Fig. 2A-B. Noteworthy among these were specific miRNAs, such as hsa-miR-188-5p, hsa-miR-590-5p, hsa-miR-320b, hsa-miR-423-5p, hsa-miR-140-5p, hsa-miR-486-5p, hsa-miR-320a, hsa-miR-342-3p, and hsa-miR-532-5p, which consistently demonstrated expression patterns across both datasets. Particularly, hsa-miR-188-5p and hsa-miR-590-5p consistently exhibited high expression levels, characterized by log2FC values exceeding 2 in both datasets. Visual representations of these DEGs through heatmaps, subjected to hierarchical cluster analysis (Fig. 3A-B), underscored their distinctive expression patterns between IA and normal groups. In a comprehensive computational analysis, target genes associated with DEMs were identified through predictions from TargetScan, miRanda, and miRDB. This approach resulted in the identification of 615 downregulated genes (Table 1). The integration of these findings not only accentuates the robustness of the identified DEMs but also provides valuable insights into potential downstream target genes, thereby enhancing our understanding of their pivotal roles in the context of IAs.
Protein-protein interaction network analysis
A miRNA-mRNA regulatory network was constructed to illustrate the interactions between 9 upregulated miRNAs and their targeted mRNAs. Fig. 4 and Table 2 visually represent this network, showcasing the regulatory relationships between each miRNA and the corresponding mRNAs.
Enrichment analysis of DEMs and DEGs
To comprehensively explore the biological functions attributed to the upregulated DEMs, an extensive analysis of GO pathway enrichment was conducted. The findings unveiled notable enrichment across a spectrum of functional characteristics. In terms of BP, the DEMs exhibited substantial involvement in processes encompassing the regulation of nucleobase, nucleoside, nucleotide, and nucleic acid metabolism, as well as functions related to transport and signal transduction, among others (Fig. 6A). Pertaining to CC, enrichments were observed in diverse cellular locales, including the nucleus, cytoplasm, lysosome, Golgi apparatus, exosomes, and various other cellular components (Fig. 6B). Within the domain of MF, the DEMs displayed enrichments in activities such as transcription factor activity, protein serine/threonine kinase activity, ubiquitin-specific protease activity, transcription regulator activity, and more (Fig. 6C). Moreover, GO and KEGG pathway enrichment analyses were systematically conducted for the DEGs identified within these modules. In terms of BP, the DEGs were predominantly associated with processes involving the establishment of protein localization, cellular macromolecule localization, positive regulation of biosynthetic processes, and the organization of protein-containing complex subunits (Fig. 7A). Regarding CC, enrichments were discerned in various cellular structures, including mitochondria, catalytic complexes, nuclear protein-containing complexes, the envelope, and more (Fig. 7B). MF enrichments encompassed activities such as enzyme binding, identical protein binding, ribonucleotide binding, adenyl nucleotide binding, and other functions (Fig. 7C). In the subsequent KEGG pathway enrichment analysis, the DEGs displayed significant enrichment in pathways associated with cellular senescence, the AMP-activated protein kinase (AMPK) signaling pathway, DNA replication, the spliceosome, base excision repair, non-alcoholic fatty liver disease (NAFLD), pathways in cancer, human T-cell leukemia virus 1 infection, propanoate metabolism, and insulin (Fig. 8A-B).
Fig. 4. Illustrates the construction of the protein-protein interaction (PPI) network, showcasing the interconnected relationships between downregulated differentially expressed genes (DEGs) and the targeted 9 upregulated differentially expressed microRNAs (DEMs). In this network, nodes represent individual genes, and the lines connecting them depict the interactions between these genes. This visual representation offers insights into the complex web of interactions between the identified genes and microRNAs, providing a comprehensive overview of their regulatory relationships in the context of intracranial aneurysms (IAs).
Discussion
IAs are localized pathological bulges in the arterial wall, frequently leading to SAH and disproportionately affecting individuals of working age. The intricate progression of IA can result in severe and persistent neurological impairments or even fatal consequences. Recognizing the potentially debilitating nature of IA, it becomes imperative to investigate the mechanisms dictating its formation, with a primary goal of preventing its progression and rupture. In this pursuit, we conducted an exhaustive exploration of the miRNA-mRNA network and the associated biological pathways linked to IA through rigorous bioinformatics analysis.
Aneurysms, characterized by their location, encompass various types, including abdominal aortic aneurysm (AAA), thoracic aortic aneurysm (TAA), and IA. Pathologically, aneurysms manifest features such as inflammatory cell infiltration, remodeling of the extracellular matrix (ECM), compromised arterial wall integrity, and the death of VSMCs. Despite significant progress in understanding aneurysm pathophysiology, the precise molecular mechanisms specific to each type remain elusive [17]. However, we posit that shared miRNAs and signaling pathways exist among different aneurysm types, often overlooked or underestimated despite anatomical distinctions. Recognizing and elucidating these shared elements holds substantial promise for advancing both fields, particularly in the realm of cellular and molecular-guided therapies. Emphasizing the necessity for cross-disciplinary preclinical studies, this approach stands to enrich our collective comprehension of the intricate landscape of aneurysm biology.
Remarkably, the regulatory pairs within this network showcased the influence of hsa-miR-188-5p on ATP6V1G1/XRCC5/ZFP91/CBFB and hsa-miR-590-5p on TIMP3/PCBP1/PCBP2/S100A10/LANCL1.Previous studies have shed light on the significance of miR-188-5p in AAA progression, demonstrating its upregulation and its impact on elastin degradation, VSMC depletion, and mural angiogenesis inhibition [34].Similarly, miR-140-5p suppression has been observed in acute aortic dissection (AAD) patients, correlating with upregulated NCKAP1 levels, influencing VSMC proliferation, migration, and invasion [35].Deng et al., investigated the role of miR-140/BCL2L2 axis on the formation of IAs.The authors found that the expression of miR-140 increased in IA patients [36].BCL2L2 can significantly promote the proliferation of human brain vascular smooth muscle cells (HBVSMCs) and inhibit apoptosis by negatively regulating miR-140, thus controlling the occurrence of IAs.
The miR-320 family, including miR-320a and miR-320b, collectively hinders VSMC proliferation and migration by targeting Neuropilin 1 (NRP1) [37].MiR-320b specifically targets apoptosis-resistant E3 ligase 1 (AREL1), thereby influencing apoptosis in human umbilical vein endothelial cells (HUVECs) [38].Furthermore, the study delves into the context of bicuspid aortic valve (BAV) ascending aortic aneurysm (AsAA), where elevated plasma levels of miR-320a are associated with mid-ascending aortic wall strain [39].In the realm of circCCDC66, miR-342-3p, and CCDC66, their intricate interplay has been evidenced in human VSMC apoptosis and proliferation in AAA [40].Understanding the role of miR-532-5p in atherosclerosis progression was elucidated by its regulatory effects on VSMC behaviors [41].Overexpression of miR-532-5p demonstrated inhibitory effects on VSMC proliferation and migration [42].Additionally, the impact of miR-532-5p on human brain microvascular endothelial cells (HBMECs) damaged by ox-low-density lipoprotein (ox-LDL) was explored.The findings highlighted the protective role of miR-532-5p in mitigating HBMECs damage induced by ox-LDL, attributed to its down-regulation of intracellular chloride channel 4 (CLIC4) expression [43].This comprehensive exploration underscores the intricate regulatory networks involving specific miRNAs and their target genes in the context of IA.The multifaceted roles of miRNAs in influencing cellular behaviors and pathways associated with aneurysm progression provide valuable insights for further understanding and potential therapeutic interventions.
SAH stands as a critical neurological condition characterized by high morbidity and mortality rates, primarily attributed to the rupture of IAs. The exploration of miRNAs and their regulatory mechanisms, particularly their impact on IA rupture, has garnered significant attention from researchers. MiRNAs emerge as potential therapeutic targets and biomarkers for SAH, holding promise for drug modulation and diagnostic tool development. Notably, various studies have unveiled distinct miRNAs influencing the likelihood of IA rupture.
In an informative analysis by Zhao et al., Weighted Correlation Network Analysis (WGCNA) identified four miRNAs with potential as IA biomarkers, boasting Area Under Curve (AUC) values exceeding 0.75 [44]. Notably, hsa-miR-423-5p exhibited commendable predictive performance. Lopes et al. employed Next-Generation Sequencing (NGS) to scrutinize miRNA expression in peripheral blood samples from SAH patients, identifying eight DEMs, of which miR-486-5p, displaying downregulation, correlated with poor neurological admission status [45]. Furthermore, studies suggest that VSMC-secreted exosomes transfer miR-486 into endothelial cells (ECs), inhibiting their migratory activities in physiological conditions [46]. Zheng et al.'s microarray analysis delved into plasma miRNA profiles in SAH patients, revealing the significant downregulation of miR-590-5p, indicating its association with IA rupture [47].
Pathway enrichment analysis of target genes implicated inflammation, VSMC proliferation, and cell adhesion as potential contributors to disease occurrence. Intriguingly, the present study identified an increase in the expression of 9 miRNAs, contrasting with the results of some previous studies. Additionally, the observed discrepancies between our findings and those of investigations of aortic aneurysms underscore the intricate and multifaceted nature of aneurysms, suggesting avenues for further exploration. This study not only sheds light on the complex landscape of aneurysms but also proposes novel goals and strategies for advancing the understanding and study of SAH.
Our findings highlight the significance of the AMPK signaling pathway in IA, a conclusion consistent with prior research [48,49]. Li et al. conducted a study demonstrating that metformin exerts a protective effect against IA formation and rupture by impeding VSMC phenotype switching, as well as inhibiting VSMC proliferation, migration, and apoptosis in vivo [48]. Notably, the study revealed AMPK pathway activation under various metformin doses, underscoring its role in restraining VSMC activities. In a related study, Sun et al. observed that AMPK activation mitigated the pro-inflammatory effects of miR-323a-3p in the human EC line EA.hy926 [49]. Their findings suggested that targeting the AMPK signaling pathway through miR-323a-3p could hold promise for future anti-inflammatory treatments for IAs. These results not only affirm the importance of the AMPK pathway in IA but also offer potential therapeutic avenues by modulating this pathway to counteract the inflammatory processes associated with IA development.
To gain deeper insights into the regulatory mechanisms of target genes within DEMs, our study identified potential TFs, with SP1, SP4, KLF7, and EGR1 emerging as the most prevalent. In a recent study, SP1 was confirmed to mediate the impact of miR-335-5p on the phenotypic switching of VSMCs in AAD, exhibiting a pro-apoptotic influence by repressing the expression of p21WAF1/Cip1 at the transcriptional, mRNA, and protein levels [50,51]. Additionally, SP1 and SP4 have been recognized as key TFs involved in the regulation of vascular endothelial growth factor (VEGF) production by binding to specific sites in the VEGF promoter [52,53]. KLF7's involvement in various cardiovascular diseases is well documented. It inhibits the proliferation and migration of pulmonary arterial smooth muscle cells (PASMCs) through the p21 pathway in vitro and in vivo [54]. Studies have shown that KLF7 expression decreases in oxidized low-density lipoprotein (ox-LDL)-induced HUVECs and that inhibiting KLF7 reverses the inhibitory effect of miR-301a-3p, promoting inflammation, apoptosis, and oxidative stress in ox-LDL-induced HUVECs [55]. EGR1, identified as a transcription factor activated by vascular injury, has been implicated in the pathogenesis of various vascular diseases, including AAA, TAA, atherosclerosis, myocardial ischemia/reperfusion injury, hypertension, and pathological angiogenesis [56-62]. This underscores the potential of EGR1 regulation as an exploitable target in IAs. Subsequent GO and pathway enrichment analyses of target mRNAs or PPI networks revealed enrichments in cellular characteristics, the establishment of protein localization, the mitochondrion, and enzyme binding. These findings align with prior investigations that have identified apoptosis, EC proliferation, VSMCs, and immune cell infiltration as key contributors to IA [63,64]. Our results further substantiate the regulatory role of miRNAs in these fundamental pathophysiological processes, underscoring the pivotal role of miRNAs in the context of IAs.
Despite these results, this study has some limitations. The lack of complete human tissues reflecting the characteristics of the disease limits bioinformatics research in the field of IA. It is known that the expression of miRNAs or genes can be influenced by many factors, including comorbidities or the types of tissue samples, which are clearly important for obtaining more reliable results. For example, the GSE66240 database contained healthy STA tissue from IA patients as a control group. In addition, blood and cerebrospinal fluid (CSF) samples would more accurately reflect the physiological and pathological processes of IA formation and progression; this is one of the limitations of the present study. There is also a need for studies using other databases in patients with SAH, and the sample size of the chosen datasets is not large. Furthermore, our study did not include an AUC analysis, which would represent the ability of miRNAs to distinguish IA samples from normal samples; an AUC >0.75 could be set as the criterion for screening DEMs. This bioinformatics analysis is primarily intended to explore potential pathogenesis. The identified DEMs and DEGs may be key nodes in the occurrence and development of IA, providing new insights into the pathogenesis of IA. However, these findings still need to be confirmed by further preclinical and clinical experiments to identify new diagnostic and therapeutic tools.
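The AUC-based screening step discussed above can be made concrete with a short sketch. This is a hypothetical illustration rather than the pipeline used in this study: the expression matrix, sample counts, and miRNA names are placeholders, and scikit-learn's roc_auc_score is used only to show how candidate DEMs could be ranked against an AUC > 0.75 threshold.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical expression matrix: rows = samples, columns = candidate miRNAs.
rng = np.random.default_rng(0)
n_ia, n_ctrl = 20, 20
labels = np.array([1] * n_ia + [0] * n_ctrl)             # 1 = IA sample, 0 = control
mirna_names = [f"hsa-miR-candidate-{i}" for i in range(50)]
expression = rng.lognormal(mean=0.0, sigma=1.0, size=(n_ia + n_ctrl, 50))

# Score each miRNA by how well its expression separates IA from control samples.
screened = []
for j, name in enumerate(mirna_names):
    auc = roc_auc_score(labels, expression[:, j])
    auc = max(auc, 1 - auc)        # direction-agnostic: up- or downregulation both count
    if auc > 0.75:                 # screening criterion mentioned in the text
        screened.append((name, auc))

for name, auc in sorted(screened, key=lambda x: -x[1]):
    print(f"{name}: AUC = {auc:.3f}")
```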
Conclusion
In this investigation, we have successfully identified pivotal miRNAs, their target mRNAs, and crucial pathways associated with IA. Particularly noteworthy are hsa-miR-188-5p and hsa-miR-590-5p, among a total of 9 upregulated miRNAs, which exhibited substantial increases in IA. This upregulation correlated with the downregulation of their target genes, including ATP6V1G1, XRCC5, ZFP91, CBFB, KBTBD6, TIMP3, PCBP1, PCBP2, S100A10, and LANCL1. These findings significantly contribute to a more comprehensive understanding of the potential mechanisms underpinning IA formation and progression, thereby offering novel insights for both diagnostic approaches and therapeutic interventions related to IA. It is imperative, however, to recognize the intricate nature of the miRNA-mRNA interaction network, emphasizing the need for further in vitro and in vivo investigations to validate our results and assess their potential clinical applicability in the context of IA.
Table 4
Information of vascular lineage-specific genes.
Cellular unit | Gene | Important finding | References
ECs | ATP6V1G1 | Promotes the proliferation of ECs and angiogenesis | [18]
ECs | XRCC5 | Promotes the proliferation of ECs and angiogenesis | [19]
ECs | ZFP91 | Involved in the inflammatory response of ECs; role in EC migration and tube formation | [20,21]
ECs | CBFB | Promotes the proliferation of ECs and angiogenesis | [22]
Human PBMC | KBTBD6 | Plays a role in innate and adaptive immune responses in vascular inflammation | [23]
ECs, VSMCs and fibroblasts | TIMP3 | Reported to be a potential risk factor for atherosclerosis, aneurysm, and hypertension | -
Fig. 5. Red nodes signify a strong expression level of the 2 top microRNAs (miRNAs), while blue nodes signify low expression levels of their top target genes.
Fig. 6. Depicts the outcomes of Gene Ontology (GO) functional enrichment analysis for the identified differentially expressed microRNAs (DEMs). In panel (A), the Biological Process (BP) category highlights the involvement of DEMs in processes such as the regulation of nucleobase, nucleoside, nucleotide, and nucleic acid metabolism, as well as functions related to transport and signal transduction. Panel (B) illustrates the Cellular Component (CC) category, showcasing enrichments in various cellular locales, including the nucleus, cytoplasm, lysosome, Golgi apparatus, and exosomes. Finally, in panel (C), the Molecular Function (MF) category outlines enrichments in activities such as transcription factor activity, protein serine/threonine kinase activity, ubiquitin-specific protease activity, and transcription regulator activity. These visual representations provide a detailed insight into the diverse functional characteristics associated with the identified DEMs.
Fig. 7. Illustrates the outcomes of Gene Ontology (GO) enrichment analysis for the candidate target genes. Panel (A) focuses on GO-biological process (BP), highlighting enrichments in processes such as the establishment of protein localization, cellular macromolecule localization, positive regulation of biosynthetic processes, and the organization of protein-containing complex subunits. In Panel (B), GO-cellular component (CC) terms reveal enrichments in cellular structures, including mitochondria, catalytic complexes, nuclear protein-containing complexes, and the envelope. Lastly, panel (C) delineates GO-molecular function (MF) terms, showcasing enrichments in activities such as enzyme binding, identical protein binding, ribonucleotide binding, and adenyl nucleotide binding. These visual representations provide a detailed overview of the functional characteristics associated with the candidate target genes.
Fig. 8. Presents the outcomes of the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis conducted for the candidate target genes. In Panel (A), a KEGG Bar plot outlines the distribution of differentially expressed genes (DEGs) across various pathways. Panel (B) presents a KEGG Bubble plot, visually representing the significance and magnitude of enrichment for DEGs within different KEGG pathways. These visualizations collectively provide insights into the pathways associated with the candidate target genes.
Table 1
Key microRNAs (miRNAs) that are differentially expressed in intracranial aneurysm (IA) and their respective target genes.
Table 3
Nine upregulated differentially expressed microRNAs (DEMs) in intracranial aneurysm (IA) and their 50 most significant target genes. | 2024-01-11T16:08:37.847Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "2a4f72452ae346afe3dc48da04d5e180a0371aac",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9fd0b16431ef9ed63fea982ef133dc4f3142cad7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236221380 | pes2o/s2orc | v3-fos-license | Bitcoin’s price efficiency and safe haven properties during the COVID-19 pandemic: A comparison
Graphical abstract
Introduction
Financial markets experience periodic crises because of systemic failures, such as the global financial crisis of 2008, and exogenous shocks such as the COVID-19 pandemic. These recurring crises have serious consequences for economic growth and employment and increase the risk for financial institutions (Horta et al., 2014). They also have implications for financial returns globally because asset prices behave differently during, before, and after financial crises (Cajueiro et al., 2009;Kumar, and Deo, 2013). The unprecedented magnitude of the COVID-19 crisis brought economic activity to a near standstill in many countries, caused enormous social disruption, and inflicted both physical and psychological damage to millions of people (Schell et al., 2020). Given the economic uncertainties related to the trajectory of the pandemic, it is understandable that market reactions have been erratic. Investors have been deluged by a constant flow of information through social media, press and television, and official announcements. This information flow can be confusing, conflicting, and sometimes misleading, exerting a significant influence on the dynamics of the stock markets (Cepoi, 2020) and causing volatility in security prices.
During periods of extreme uncertainty, investors search for "safe havens" to reduce risk, limit losses, and protect the value of their portfolios. Prior research has established distinctions between terms such as "hedge," "diversifier," and "safe haven," although these terms are often used interchangeably in the popular financial press. A diversifier is an asset that is positively but not perfectly correlated with another asset or portfolio, while a hedge is an asset that is uncorrelated or negatively correlated with another asset or portfolio (Baur and Lucey, 2010). A safe haven differs from a hedge in that it is uncorrelated or negatively correlated with another asset or portfolio in times of market stress or turmoil (O'Connor et al., 2015). The main objective of a safe haven asset is to help investors mitigate downside market risk during periods of market turmoil (Ji et al., 2020). Baur and McDermott (2010, p. 1889) distinguish between strong and weak safe havens, stating that "a strong (weak) safe haven is defined as an asset that is negatively correlated (uncorrelated) with another asset or portfolio in certain periods only, e.g., in times of falling stock markets." An important aspect of this distinction is the length of the effect. While a hedge holds generally, the key property of a safe haven is that it is required to hold only in certain periods, e.g., in a financial crisis. While investors need to hedge or diversify their portfolios during all times, safe haven assets retain or increase their value during periods of market crises.
Fig. 1. Prices and returns of Bitcoin, gold, a US dollar index, and the MSCI World index over five-minute intervals for the period from March 11, 2020, to July 10, 2020.
The most extensively researched safe haven in the literature is gold, with several empirical studies finding gold to be both a safe haven and hedge. For example, Baur and McDermott (2010) found that gold was a strong safe haven for several developed markets during the 2008 financial crisis. Bredin et al. (2015) found that gold served as a safe haven during various financial crises but not during the 1980 economic contraction. Reboredo (2013) reported evidence for gold as a safe haven, hedge, and diversifier with respect to movements in the US dollar exchange rate, and Conlon et al. (2018) demonstrated gold's ability to serve as a hedge against inflation. Choudhry et al. (2015) showed that while gold has been a safe haven in pre-crisis periods, they found no evidence that it served that purpose during the subprime mortgage crisis. He et al. (2018) concluded that gold is usually a hedge and, at worst, acts as an excellent diversifier of portfolio risk.
The liberalization and globalization of financial markets in recent years has provided investors with more choices of assets than previously available. The development and subsequent growth in the popularity of digital currencies have made an entirely new asset class available that could be a safe haven during periods of crisis alongside traditional assets such as gold. The COVID-19 pandemic is the most severe health emergency the World Health Organization (WHO) has faced (BBC News website, 2021). As per WHO website (WHO Coronavirus (COVID-19) Dashboard, 2021) as of June 1, 2021, over 170,426,245 million people have been infected, and over 3, 548,628 million people have died, making COVID-19 a public health crisis of unprecedented proportions. The economic disruptions and dislocations caused by the pandemic have affected financial markets globally, creating extreme volatility in asset prices. In this study, we examine whether Bitcoin could be considered a safe haven during this period of market turmoil. We do so by examining the price efficiency and net cross-correlations among Bitcoin, gold, the US dollar, and the Morgan Stanley Capital International World Index (MSCI World), which tracks the returns of 23 equity markets in developed countries around the world.
According to the efficient market hypothesis, price efficiency (in its weak form) exists when market participants cannot realize abnormal future returns on the basis of past information because current market prices fully reflect publicly available information (Fama, 1970). The actual behavior of markets can deviate from this assumption because of collective errors in investors' judgment, especially during periods of high uncertainty. Consequently, price irregularities and even predictable patterns in market returns appear from time to time and may even persist for short periods. In addition to the negative consequences for the allocation of resources in an economy, an inefficient market has great arbitrary power, increasing information asymmetry and market risk (Malkiel, 2003). If markets were perfectly efficient, there would be no incentive for professionals to invest time and resources into discovering information that is not yet reflected in market prices (Grossman and Stiglitz, 1980). Furthermore, the efficiency of a market or asset varies over time (Kim et al., 2011); therefore, an asset's pricing efficiency should be monitored over time. Especially in times of financial crisis, such monitoring should be done before, during, and after the crisis as assets are more susceptible to changing behavior in such periods. Prior research suggests that during financial crises, assets become more volatile and less efficient and regain efficiency over time (Kim et al., 2011). Given the relatively short history of Bitcoin and the severity of the COVID-19 pandemic that has caused substantial turbulence in financial markets, it is important to investigate the safe haven properties of Bitcoin during this period of extreme volatility.
Bitcoin as a safe haven during the COVID-19 crisis
Since its introduction in 2008, Bitcoin has been the subject of considerable controversy and debate both in academia and in the public arena. Bitcoins are exchanged over a peer-to-peer electronic cash system and have "no association with any higher authority, [has] no physical representation and are infinitely divisible" (Corbet et al., 2019). Bitcoin's supply is based mostly on the security of an algorithm and its value is ultimately based on market demand. It is backed neither by any tangible asset nor by a government. While the debate continues as to whether Bitcoin is a currency, a commodity, an investment, or a collectible, its popularity and value have increased dramatically since its early days. Priced at about $ 0.08 when it began trading in July 2010, Bitcoin's price was close to $20, 000 in 2017 and has fluctuated wildly since then, climbing above $60,000 in early 2021 followed by wild gyrations. It has also led to the creation of other cryptocurrencies, such as Ethereum and Ripple. The fact that the vast majority of Bitcoins are held in dormant accounts (Weber, 2014) suggests that it is being used more as a speculative asset than as a functioning currency.
Bitcoin experienced considerable volatility in the months after the WHO declared COVID-19 a global pandemic. The WHO's announcement was made on March 11th, 2020, and Bitcoin's price declined more than 40 % between March 6th and March 13th (from $8,900 to $5,165), reaching its lowest level in a year. This decline was not confined to Bitcoin; other financial assets also experienced a comparable loss in value, as shown in Fig. 1. However, compared to other assets, Bitcoin's price recovery was much faster. Between April 25, 2020, and July 10, 2020, the price of Bitcoin rose from USD 7,542 to USD 9,243, a level well above its price during the months before the outbreak. By March 2021 it was fluctuating between USD 50,000 and USD 60,000.
The economic uncertainty caused by the pandemic has led many researchers to examine whether Bitcoin could serve as a store of value, a source of portfolio diversification, or a safe haven during this period of crisis, with the results varying considerably across studies. Corbet et al. (2019) provided a comprehensive review of the growing body of research on cryptocurrencies, noting that while some studies found that cryptocurrencies can serve as a safe haven, others found the exact opposite. For example, Goodell and Goutte (2021a) found that the price of Bitcoin rose in the days following the outbreak. Corbet et al. (2020b) found that cryptocurrencies acted as a store of value during the crisis period and as a safe haven, similar to precious metals. Before the COVID-19 pandemic, Urquhart and Zhang (2019) had found that Bitcoin acted as a safe haven, a hedge, and a diversifier versus a range of international currencies. Similar results have been reported by Wang et al. (2020); Aysan et al. (2019); Mensi et al. (2020), and Kliber et al. (2019). Shahzad et al. (2019) found weak support for Bitcoin as a safe haven, as revealed by the fact that its behavior as a safe haven was time-varying. Ji et al. (2020) analyzed the period in which the pandemic affected the financial market most severely (between December 2019 and March 2020) and found that Bitcoin's role as a safe haven degenerated during this period. Goodell and Goutte (2021b) found that Bitcoin is not a safe haven for stocks. Corbet et al. (2020a, 2020b) suggested that during periods of market crises, Bitcoin is neither a safe haven nor a hedge but an amplifier of contagion. Smales (2019) argued that Bitcoin cannot be considered a safe haven because it is more volatile, less liquid, and costlier to transact than other assets. These differing results may be attributable to differences in sample composition, inference procedures, and the asset classes against which cryptocurrencies were compared. For example, compared to studies reporting that Bitcoin has the potential to act as a safe haven, studies that found negative results for Bitcoin's hedging and safe haven properties typically cover fairly long time periods, sometimes starting years before the COVID-19 outbreak. We provide a summary of the results of empirical studies examining the role of Bitcoin as a safe haven in Tables 1a and 1b.
This study focuses on Bitcoin's usefulness as a safe haven. Given that the COVID-19 pandemic is the first major global economic crisis since Bitcoin was introduced, it gives us an opportunity to investigate whether Bitcoin behaves as a safe haven compared to other types of assets. We examine Bitcoin's price efficiency along with those of gold, the US dollar index, and the MSCI World index using intraday data, as well as the net cross-correlations among them, during the first four months of the pandemic.
Data
We compared Bitcoin to three other assets, namely, gold, the US dollar, and the MSCI World Index representing a precious metal commodity, a currency, and the global developed country stock markets, respectively. Following Cepoi (2020), the starting point for our data collection was March 11, 2020, the day the WHO declared COVID-19 a global pandemic. The final date of the analysis was July 10, 2020, based on the European Union Council's announcement on June 30th of the gradual lifting of temporary restrictions on non-essential travel to European countries. To capture the full impact of this decision in different markets, we collected data for ten days after that announcement.
Methodology
Following Guedes et al. (2019) our analysis was conducted in two stages. In the first stage, we computed the price efficiency of each asset class under consideration using a multifractal detrended fluctuation analysis (MF-DFA). The correlations among them were then estimated using a detrended partial-cross-correlation analysis (DPCCA). These methodological approaches are described below.
Multifractal detrended fluctuation analysis (MF-DFA)
Financial time series are functionally complex systems that have predominantly nonlinear temporal dynamics (Tsay, 2010). They are especially sensitive to external disturbances, making it very difficult to forecast their future behaviors. In recent years, researchers have started to apply nonlinear methodological approaches to analyze these behaviors. One such approach that has found increasing use is the MF-DFA, which allows researchers to measure the long-term efficiency of price returns. MF-DFA has been widely used to measure the efficiency of stock markets (Zunino et al., 2008;Onali and Goddard, 2009;Wang et al., 2009;Liu et al., 2010;Stavroyiannis et al., 2010;Horta et al., 2014;Jin, 2016;Al-Yahyaee et al., 2018;Tiwari et al., 2019;Maganini et al., 2018).
Multifractals are complicated self-similar objects consisting of differently weighted fractals of various non-integer dimensions. Thus, a multifractal system is a generalization of a fractal system in which a single scaling exponent does not sufficiently describe its dynamics (Dutta et al., 2013). In recent years, several methodological approaches have been developed for the specific purpose of studying multifractality in time series data. The best known of these approaches is MF-DFA (Salat et al., 2017), in particular the detailed six-step process proposed by Kantelhardt et al. (2002). The measurement of multifractality in MF-DFA is based on the Hurst exponent, which ranges from 0 to 1.0. A Hurst exponent between 0 and 0.5 means that a time series has an anti-correlated structure, and a value between 0.5 and 1.0 means that a time series has long-range dependence (i.e., it is correlated). In the special case when the Hurst exponent is equal to 0.5, the time series has an independent or short-range dependent structure (random walk behavior) (Ihlen, 2012). The degree of multifractality is expressed as Δh = h(qmin) − h(qmax), where the exponent h(q) is known as the generalized Hurst exponent; the smaller the Δh coefficient, the closer the range of Hurst exponents is to 0.5. A price series is considered efficient if the time series of those prices follows a random walk, that is, if the Hurst exponent is close to 0.5. The smaller the value of Δh, the closer the Hurst exponents are to 0.5; therefore, small values of Δh indicate efficiency in the time series.
Table 1a and 1b (excerpt). Summary of empirical studies examining the role of Bitcoin as a safe haven:

Study | Data | Method | Main finding
- | - | - | Tether was the only cryptocurrency that was a safe haven during the pandemic.
Le et al. | Twenty-three spot exchange rates versus the US dollar and a US dollar index, 11 commodities (two precious metals, five agriculture, and four energy), 14 international equity indices, UST bonds (30-year maturity), and Bitcoin between January 1, 2019, and April 30, 2020 | Cross-spectral analysis and tail-dependency networks | Bitcoin and US Treasury bonds are both disconnected from tail-dependency networks, suggesting their safe haven characteristics.
Iqbal et al. | - | Quantile-on-Quantile Regression | The changing intensity levels of COVID-19 affect bearish and bullish market scenarios for cryptocurrencies differently (asymmetric impact). During times of stress, panic, and uncertainty, BTC, Ethereum, ADA, and CRO performed better compared to other cryptocurrencies.
Le et al. | Daily closing/spot price of bitcoin (CoinDesk price index), gold (ounce of gold, LBMA), forex rates (EUR-USD and CNY-USD), WTI crude oil (rolling front-month futures contract), and soybeans commodity futures between November 2018 and June 2020 | Vector Autoregression (VAR) | Traditional assets, gold and oil, as well as Bitcoin and green bonds are useful hedges compared to other assets.
Mariana et al. | Bitcoin, Ethereum, S&P500, and gold spot price daily returns between July 1, 2019, and April 6, 2020 | DCC-GARCH | The two largest cryptocurrencies are suitable as short-term safe havens. Their daily returns tend to be negatively correlated with the S&P500 during the pandemic.
An alternative quantifier for the degree of multifractality is the width Δα (intermittency degree) of the singularity spectrum (α). The parameter α is the Hölder exponent or singularity strength, while f(α) is the fractal dimension of the subset of the time series with singularities of strength equal to α. The lower the values of the Δh and Δα coefficients, the higher the efficiency of the time series (Zunino et al., 2008; Wang et al., 2009).
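To make the procedure concrete, the following is a minimal NumPy sketch of MF-DFA following the steps of Kantelhardt et al. (2002). It is an illustrative implementation rather than the one used in this study: the scale range, q range, and the synthetic input series are assumptions, and the Δα estimate uses a simple numerical derivative of τ(q).

```python
import numpy as np

def mfdfa(returns, scales, q_values, poly_order=1):
    """Minimal MF-DFA: returns the generalized Hurst exponents h(q)."""
    x = np.asarray(returns, dtype=float)
    profile = np.cumsum(x - x.mean())           # Step 1: profile of the mean-removed series
    n = len(profile)
    fq = np.zeros((len(q_values), len(scales)))
    for si, s in enumerate(scales):
        n_seg = n // s
        t = np.arange(s)
        variances = []
        # Steps 2-3: detrend non-overlapping segments from both ends of the series.
        for start in list(range(0, n_seg * s, s)) + list(range(n - n_seg * s, n, s)):
            seg = profile[start:start + s]
            trend = np.polyval(np.polyfit(t, seg, poly_order), t)
            variances.append(np.mean((seg - trend) ** 2))
        variances = np.array(variances)
        # Step 4: q-th order fluctuation function (logarithmic average for q = 0).
        for qi, q in enumerate(q_values):
            if q == 0:
                fq[qi, si] = np.exp(0.5 * np.mean(np.log(variances)))
            else:
                fq[qi, si] = np.mean(variances ** (q / 2.0)) ** (1.0 / q)
    # Step 5: h(q) is the slope of log F_q(s) versus log s.
    log_s = np.log(scales)
    return np.array([np.polyfit(log_s, np.log(fq[qi]), 1)[0] for qi in range(len(q_values))])

# Usage sketch (synthetic data stands in for the 5-minute return series).
rng = np.random.default_rng(0)
returns = rng.standard_normal(20000)                         # placeholder intraday log-returns
scales = np.unique(np.logspace(4, 10, 20, base=2).astype(int))
q_values = np.arange(-5, 6)                                  # illustrative symmetric q range
hq = mfdfa(returns, scales, q_values)
delta_h = hq[0] - hq[-1]                                     # Δh = h(qmin) − h(qmax)

# Singularity spectrum: tau(q) = q*h(q) - 1, alpha = d tau / dq, width Δα = α_max − α_min.
tau = q_values * hq - 1
alpha = np.gradient(tau, q_values)
delta_alpha = alpha.max() - alpha.min()
print(delta_h, delta_alpha)
```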
Detrended partial cross-correlation analysis (DPCCA)
Researchers have used different methodological approaches to measure the correlation between two time series. Perhaps the most popular is the Pearson coefficient; however, this coefficient is not always robust (Wilcox, 2005) and can be misleading if outliers are present, as in real-world financial data characterized by a high degree of nonstationarity (Devlin et al., 1975). It has been shown that time series data exhibit complex and dynamic behaviors whose autocorrelation (the cross-correlation of the signal with itself) can be characterized by power laws (Moret et al., 2003). Hurst was one of the first to identify a power law in a time series in the real world, specifically studying the Nile River and problems related to water storage using rescaled (R/S) statistics (Hurst, 1951). An alternative method, DFA, was proposed to detect long-range autocorrelations embedded in the mosaic structure of DNA because it avoids the spurious detection of apparent long-range autocorrelations (Peng et al., 1994). The DFA method performs better than the R/S standard in quantifying the scaling behavior of noisy signals across a wide range of correlations (Hu et al., 2001;Chen et al., 2002).
Several variations of the DFA method have been developed to analyze long-range correlations and identify price efficiency in time series. MF-DFA, as proposed by Kantelhardt et al. (2002), is one such widely used approach in the literature. Using the DFA approach, techniques have been developed to analyze cross-correlations between two time series. One such technique is detrended cross-correlation analysis (DCCA) as proposed by Podobnik and Stanley (2008), which is designed to investigate power law cross-correlations between time series of equal length N recorded simultaneously in the presence of nonstationarity. Podobnik and Stanley (2008) demonstrated DCCA's usefulness by applying it to problems in physics, physiology, and finance. For example, they reported power law cross-correlations in the absolute values of logarithmic changes in price between the Dow Jones and NASDAQ stock market indices. Yuan et al. (2015) proposed the DPCCA, which improves on DCCA by incorporating the partial correlation technique and is therefore useful in quantifying multi-signal correlations in a complex system. DPCCA is useful for extracting long-term intrinsic power law cross-correlations between two non-stationary signals. A key advantage of these methods over traditional approaches is that they can analyze changes in correlations over time. Another advantage is that they are useful for analyzing complex non-stationary real-world systems. During the last few years, DPCCA has been applied in a number of different areas, such as climate studies (Piao et al., 2016;Yuan et al., 2016), medical research (Ide et al., 2017;Chen et al., 2018), biology (Ezenwa et al., 2016), and finance (Lin et al., 2018;Lima et al., 2019;Ferreira et al., 2019;and Guedes et al., 2019).
Following Yuan et al. (2015), we calculated partial cross-correlation coefficients at different temporal scales. Implementing DPCCA for m temporal series of length N, x_i(k), i = 1, …, m; k = 1, …, N, consists of inverting a matrix of DCCA cross-correlation coefficients calculated at scale n,

ρ_DCCA;i,j(n) = F²_i,j(n) / (F_i(n) F_j(n)), i, j = 1, …, m,

where F²_i,j(n) is the detrended covariance between two series obtained by DCCA, and F_i(n) and F_j(n) are the DFA fluctuation functions of the individual series. The inverse matrix C(n) = ρ(n)⁻¹ is then used to calculate the partial cross-correlation coefficients,

ρ_DPCCA;i,j(n) = −C_i,j(n) / √(C_i,i(n) C_j,j(n)),

that quantify intrinsic cross-correlations among series i and j, at each time scale n, when the influence of the other signals is eliminated (Yuan et al., 2015). If the two series are not cross-correlated, ρ_DPCCA(n) oscillates about 0 (bounded by −1.0 and 1.0 for perfect negative and perfect positive cross-correlation, respectively), while for anti-cross-correlated series ρ_DPCCA(n) is strictly negative and for positively cross-correlated series ρ_DPCCA(n) is positive.
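A compact illustration of this calculation is given below. It is a simplified sketch under stated assumptions (non-overlapping boxes, a single scale, synthetic stand-in series), not the implementation used in this study.

```python
import numpy as np

def detrended_covariance(xa, xb, s, poly_order=1):
    """DCCA detrended covariance F^2_{ab}(s) between two equal-length series."""
    pa = np.cumsum(xa - xa.mean())
    pb = np.cumsum(xb - xb.mean())
    n = len(pa)
    t = np.arange(s)
    covs = []
    for start in range(0, n - s + 1, s):          # non-overlapping boxes of size s
        seg_a, seg_b = pa[start:start + s], pb[start:start + s]
        res_a = seg_a - np.polyval(np.polyfit(t, seg_a, poly_order), t)
        res_b = seg_b - np.polyval(np.polyfit(t, seg_b, poly_order), t)
        covs.append(np.mean(res_a * res_b))
    return np.mean(covs)

def dpcca(series, s):
    """Partial cross-correlation matrix rho_DPCCA(s) for a list of m series (Yuan et al., 2015)."""
    m = len(series)
    rho = np.eye(m)
    for i in range(m):
        for j in range(i + 1, m):
            f2_ij = detrended_covariance(series[i], series[j], s)
            f_i = np.sqrt(detrended_covariance(series[i], series[i], s))
            f_j = np.sqrt(detrended_covariance(series[j], series[j], s))
            rho[i, j] = rho[j, i] = f2_ij / (f_i * f_j)      # rho_DCCA(s)
    c = np.linalg.inv(rho)                                    # inverse of the DCCA matrix
    out = np.empty_like(c)
    for i in range(m):
        for j in range(m):
            out[i, j] = -c[i, j] / np.sqrt(c[i, i] * c[j, j])
    np.fill_diagonal(out, 1.0)                                # self-correlation set to 1 by convention
    return out

# Usage sketch with synthetic stand-ins for the four return series.
rng = np.random.default_rng(1)
btc, gold, usd, msci = (rng.standard_normal(21093) for _ in range(4))
print(dpcca([btc, gold, usd, msci], s=500))   # e.g. rho_DPCCA between Bitcoin and MSCI at scale 500
```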
Data and preliminary statistics
Our analysis used intraday price data at 5-minute intervals for Bitcoin, gold, the MSCI World index, and the US dollar index for the period March 11, 2020, through July 10, 2020, available from the Refinitiv Market Data. Table 2 shows descriptive statistics for the analyzed return series. Calculating the DPCCA coefficients requires two time series of equal length. Therefore, we filtered the data to obtain the days and times when all four of our time series had values available. The final sample consisted of 21,093 logarithmic returns for each time series. Table 2 shows that Bitcoin had the highest intraday return, which varied by as much as 13.09 % over a 5-minute interval. Bitcoin also suffered the largest decline in a 5-minute interval, dropping by 12 %. The skewness values show that the asymmetry in returns was highest for the MSCI index (− 7.2047) and lowest for gold (− 0.1371). Fig. 1 shows a large decline in all prices over the first 5000 data points (equal to approximately 1 month, quoted every 5 min) after the WHO declared COVID-19 to be a pandemic; then, there is a gradual process of adjustment in prices. Fig. 1 also shows a greater tendency toward volatility clusters for the Bitcoin and MSCI return series than for gold and the US dollar.
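As a sketch of the data-preparation step described above (aligning the four series on common 5-minute timestamps and computing logarithmic returns), the following pandas example uses hypothetical file and column names; it is not the authors' code, and the descriptive statistics it prints merely parallel those reported in Table 2.

```python
import numpy as np
import pandas as pd

# Hypothetical 5-minute price files; file and column names are placeholders.
frames = {
    name: pd.read_csv(f"{name}_5min.csv", index_col="timestamp", parse_dates=True)["price"]
    for name in ["bitcoin", "gold", "usd_index", "msci_world"]
}

# Keep only timestamps at which all four series have a quote, then take log returns.
prices = pd.concat(frames, axis=1).dropna()
log_returns = np.log(prices).diff().dropna()

# Descriptive statistics comparable to Table 2 (mean, min, max, skewness, kurtosis).
summary = log_returns.agg(["mean", "std", "min", "max", "skew", "kurt"]).T
print(summary)
```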
Price efficiency during the pandemic crisis
The results of our MF-DFA analysis are presented in Table 3. The efficiency of each price series is assessed using two indicators, Δh and Δα. The lower the values of Δh and Δα, the more efficient the time series analyzed (Wang et al., 2009; Sensoy and Tabak, 2016; Silva Filho et al., 2018; Diniz-Maganini et al., 2021). Table 3 shows that price returns for gold have the lowest Δh value (0.3089), followed by Bitcoin (0.3925), the US dollar index (0.4403), and the MSCI World index (0.7609). The values of Δα, which vary between 0.4599 and 0.9299, are also lower for Bitcoin and gold compared to the US dollar and the MSCI World index. Together, these results indicate that the prices of gold and Bitcoin are closer to a random walk than the prices of the US dollar and MSCI World indices over the period we analyzed. Fig. 2 provides a graphical representation of these results.
In Fig. 2, the graph on the left shows the generalized Hurst exponent, and the graph on the right shows the intermittency degree of the multifractal spectrum for each time series. The red, blue, green, and light blue lines, representing the prices of Bitcoin, gold, the MSCI World index, and the US dollar index, respectively, show that the greatest variations in the value of h(q) are generated by the MSCI World (green) and US dollar indices (light blue). The graph on the right shows the maximum and minimum values of the parameter α for each of the time series. The width of the parabola (the difference between the maximum and minimum values of the α parameter) is an indicator of the multifractal behavior of the data and also of the efficiency of the financial time series; that is, the wider the parabola, the less efficient the time series. Fig. 2 shows that the parabolas are widest for the MSCI and US dollar indices and narrowest for Bitcoin and gold. Thus, we conclude that the time series of prices for gold and Bitcoin demonstrate relatively high levels of price efficiency compared to the US dollar and MSCI World indices, which show somewhat lower levels of efficiency. In the next stage of the analysis, we analyze whether the prices of these assets behaved like "safe havens" relative to each other.
Cross-correlations between time series during the pandemic crisis
Given that a safe haven is an asset that is uncorrelated or negatively correlated with another asset or portfolio in times of market stress or turmoil, we examined the strength of the cross-correlations among the four time series using different time scales. These results are presented in Fig. 3, which shows the DPCCA coefficients ρ DPCCA XY (n) (see Yuan et al., 2015) for the correlations between Bitcoin and the US dollar and MSCI World indices as well as for gold and the US dollar and MSCI World indices.
Table 3
Levels of multifractality and intermittency of intraday (5-min) returns for Bitcoin, gold, a US dollar index, and the MSCI World index from March 11, 2020, to July 10, 2020.

According to Lin et al. (2018), when DPCCA coefficients at different time scales are below 0.3 and some values are close to zero, it implies that these pairs of correlations between markets are weak and are strongly influenced by other markets. From the first graph in Panel A, we can see that the net cross-correlation values between Bitcoin and the US dollar index fall in a range between just over −0.2 and approximately 0.3, regardless of the time scale. The fact that they hover around 0 suggests that the net cross-correlation of Bitcoin and the US dollar index is weak and strongly influenced by other markets.
The second graph in Panel A shows the net correlations between Bitcoin and the MSCI World index, and the graphs in Panel B present the net correlations of gold with the US dollar and MSCI World indices, respectively. In all three cases, the DPCCA coefficients appear to behave as monotonically decreasing functions of the time scale, that is, they decrease as the time scale increases. When the time scale is (n) > 15000 (equating to approximately three months of prices quoted every five minutes), Bitcoin can be considered as a safe haven relative to MSCI and when the time scale (n) > 10000 (approximately two months quoted every five minutes), gold can be considered a safe haven relative to both the US dollar and MSCI World indices because the correlations between Bitcoin and the MSCI World index fall to − 0.4 and for gold relative to both the US dollar and MSCI World indices decline to approximately − 0.6 for those longer time scales.
Taken together, the results of our analysis indicate the following. First, both Bitcoin and gold can be considered as safe havens for investors in the MSCI World index, but only gold can be considered a safe haven against the US dollar exposure. Second, the safe haven properties observed for Bitcoin and gold are a function of the time scale considered. Third, gold's safe haven property appears to be stronger than Bitcoin's.
Discussion and conclusions
At the time this study was conducted, the COVID-19 pandemic continued to ravage most countries of the world, with little sign of abating a full year after the outbreak. The pandemic has inflicted enormous economic damage globally, caused more than a million deaths, and brought activities of daily life to a standstill in the most affected areas. We remain apprehensive about the eventual economic impact of the crisis, including the impact on financial markets (Goodell, 2020). We analyzed the price behaviors of Bitcoin, gold, the US dollar index, and the MSCI World index during the most critical period of the pandemic to understand their potential safe haven properties. Our results suggest that gold and Bitcoin have greater price efficiency than the MSCI World and US dollar indices and also appear to act as safe havens relative to the US dollar and global developed equity markets.
During the first four months of the crisis caused by the COVID-19 pandemic, when relatively short time scales are considered, the net cross-correlations between these assets are relatively weak. When longer time scales are considered, the net cross-correlations are negative and substantially higher. Gold exhibits high negative cross-correlations with both the US dollar and MSCI World indices, whereas Bitcoin shows high negative cross-correlations only against the MSCI World index. Thus, it appears that gold has stronger safe haven properties than Bitcoin. However, our results may not warrant generalizations as prior research clearly shows that results can be highly dependent on the time frames of the analyses. While our results offer preliminary evidence that Bitcoin may have safe haven properties, additional studies involving longer time periods and additional asset classes are necessary before we can draw any strong conclusions on this issue. Therefore, expectations of the potential value of Bitcoin and its benefits to financial market participants should be moderated (Corbet et al., 2019). Our results suggest that Bitcoin may serve as safe haven during periods of extreme economic stress and turmoil but not as effectively as gold as per the analysis of both assets' price behaviors during the first four months after the pandemic outbreak. | 2021-06-09T13:15:10.619Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "63f68191f7b0249915c047dac6992f6bcff9cea8",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ribaf.2021.101472",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "df2d25139bbba1fce2232089b504204130002f8b",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
202412873 | pes2o/s2orc | v3-fos-license | G-protein-coupled receptor 40 agonist GW9508 potentiates glucose-stimulated insulin secretion through activation of protein kinase Cα and ε in INS-1 cells
Objective: The mechanism by which G-protein-coupled receptor 40 (GPR40) signaling amplifies glucose-stimulated insulin secretion through activation of protein kinase C (PKC) is unknown. We examined whether a GPR40 agonist, GW9508, could stimulate conventional and novel isoforms of PKC at two glucose concentrations (3 mM and 20 mM) in INS-1D cells.
Methods: Using epifluorescence microscopy, we monitored relative changes in the cytosolic fluorescence intensity of Fura2 as a marker of change in intracellular Ca2+ ([Ca2+]i) and relative increases in green fluorescent protein (GFP)-tagged myristoylated alanine-rich C kinase substrate (MARCKS-GFP) as a marker of PKC activation in response to GW9508 at 3 mM and 20 mM glucose. To assess the activation of the two PKC isoforms, relative increases in membrane fluorescence intensity of PKCα-GFP and PKCε-GFP were measured by total internal reflection fluorescence microscopy. Specific inhibitors of each PKC isotype were constructed and synthesized as peptide fusions with the third α-helix of the homeodomain of Antennapedia.
Results: At 3 mM glucose, GW9508 induced sustained MARCKS-GFP translocation to the cytosol, irrespective of changes in [Ca2+]i. At 20 mM glucose, GW9508 induced sustained MARCKS-GFP translocation but also transient translocation that followed sharp increases in [Ca2+]i. Although PKCα translocation was rarely observed, PKCε translocation to the plasma membrane was sustained by GW9508 at 3 mM glucose. At 20 mM glucose, GW9508 induced transient translocation of PKCα and sustained translocation as well as transient translocation of PKCε. While the inhibitors (75 μM) of each PKC isotype reduced GW9508-potentiated, glucose-stimulated insulin secretion in INS-1D cells, the PKCε inhibitor had a more potent effect.
Conclusion: GW9508 activated PKCε but not PKCα at a substimulatory concentration of glucose. Both PKC isotypes were activated at a stimulatory concentration of glucose and contributed to glucose-stimulated insulin secretion in insulin-producing cells.
We evaluated the role of the PKC pathway in the enhancement of GSIS by GPR40 activation. To analyze this, we chose GW9508, a selective and potent small-molecule agonist of GPR40 [21]. Among the multiple PKC isoforms that are expressed in pancreatic β-cells, PKCα and PKCε are likely to have dominant functions in GSIS [24,25]. The roles of these two proteins in GW9508-potentiated GSIS were also determined. To minimize the interference of glucose with GW9508-induced signal transduction, we conducted this study in INS-1 cells, which secrete less insulin in response to glucose stimulation than primary β-cells.
Cell culture and transfection
INS-1D cells were a gift from Dr. Sekine (Tokyo University) [28]. The cells were grown in 60-mm culture dishes at 37˚C and 5% CO 2 in a humidified atmosphere. The culture medium was RPMI 1640 (Sigma, St. Louis, MO, USA) supplemented with 10 mM glucose, 10% fetal bovine serum, 1 mM sodium pyruvate, 1 mM L-glutamine, and 50 μM 2-mercaptoethanol. For fluorescence imaging, the cells were cultured in a 35-mm glass-bottom dish (AGC Techno Glass Co., Ltd., Shizuoka, Japan) at 50% confluence 2 days before transfection. A plasmid encoding the GFP-tagged proteins was transfected into the cells using Lipofectamine 2000 (Invitrogen, Burlington, ON, Canada). Experiments were performed within 2 days of transient transfection. We established stable transfectants from parental INS-1 cells expressing myristoylated alanine-rich C kinase substrate (MARCKS)-GFP or PKCα-GFP by G418 selection and cloning.
Imaging experiments
Epifluorescence microscopy. Fluorescence images were captured at 5-s intervals using a Nikon inverted microscope (60×/1.45 numerical aperture oil immersion objective) that was equipped with a cooled (−85˚C) charge-coupled-device digital camera, and recorded and analyzed on a NIS-Elements imaging station (Nikon Corporation, Tokyo, Japan). The excitation light source was a 150-watt xenon lamp with a high-speed scanning polychromatic light source. GFP fluorescence was excited at 488 nm, and the emitted light was collected through a 535/45-nm bandpass filter with a 505-nm dichroic mirror. We measured the fluorescence intensity of the GFP-tagged proteins in the cytosol, excluding the nucleus, as markers of translocation. These values (F) were normalized to each initial value (F 0 ), and the relative fluorescence change was referred to as F/F 0 . The cells expressing GFP-tagged proteins were loaded with 2 μM Fura2 for the measurement of intracellular Ca 2+ concentration [Ca 2+ ] i in the standard extracellular solution for 30 min at room temperature. The cells were washed twice and used within 2 h. Fura2 was excited at wavelengths alternating between 340 and 380 nm, and emissions were collected using the same bandpass filter used for the GFP fluorescence. A shortpass filter of 330-495 nm was used to reduce the background fluorescence between the dichroic mirror and the emission filter, which allowed for simultaneous measurements of GFP and Fura2 fluorescence. We previously determined that GFP and Ca 2+ signals were distinguishable under these experimental conditions [26].
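As an illustration of the normalization described above, the following sketch computes F/F0 for a cytosolic MARCKS-GFP trace and the Fura2 340/380 ratio from per-frame ROI intensities. The array values, baseline window, and background handling are assumptions for demonstration; this is not the analysis code used in the study.

```python
import numpy as np

def f_over_f0(trace, baseline_frames=10, background=0.0):
    """Normalize a fluorescence trace to its initial value (F/F0)."""
    f = np.asarray(trace, dtype=float) - background
    f0 = f[:baseline_frames].mean()          # mean of the first frames taken as F0
    return f / f0

def fura2_ratio(f340, f380, bg340=0.0, bg380=0.0):
    """Background-corrected Fura2 340/380 ratio as a relative [Ca2+]i indicator."""
    return (np.asarray(f340, float) - bg340) / (np.asarray(f380, float) - bg380)

# Usage with hypothetical per-frame ROI intensities sampled every 5 s.
marcks_cytosol = np.array([1020, 1015, 1030, 1190, 1350, 1400])
ratio = fura2_ratio(np.array([800, 805, 900, 950]), np.array([600, 598, 540, 520]))
print(f_over_f0(marcks_cytosol, baseline_frames=3))
print(ratio)
```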
Total internal reflection fluorescence microscopy, or evanescent wave microscopy. To obtain a high signal-to-noise ratio compared with conventional epifluorescence microscopy, we installed a total internal reflection fluorescence microscopy (TIRFM) unit (Olympus Corp., Tokyo, Japan) on an Olympus inverted microscope (60×/1.45 numerical aperture oil immersion objective) that was equipped with an automatic focus device (ZDC2) and a digital complementary metal oxide semiconductor (CMOS) camera (ORCA-Flash4.0, C11440, Hamamatsu Photonics, Hamamatsu, Japan). Incidental light was introduced from the objective lens for TIRFM to generate the electromagnetic zone, or the so-called "evanescent field." The evanescent wave selectively excites fluorophores within 100 nm of the glass-water interface, which enabled us to monitor fluorescent proteins at and beneath the plasma membrane of a cell. GFP was excited by a 488-nm laser with a 1.3 neutral density filter (Edmund Optics, Tokyo, Japan), and emissions were collected through a 520/35-nm bandpass filter (Semrock, Rochester, NY, USA). HCImage software (Hamamatsu Photonics) was used to capture fluorescence images. The fluorescence intensity of a region of interest (ROI) in individual cells was measured and analyzed on an Aquacosmos imaging station (Hamamatsu Photonics).
Measurement of insulin secretion
Insulin secretion from INS-1D cells was measured in a static incubation system as described previously [33]. INS-1 cells were subcultured in 35-mm dishes and grown to 80-90% confluence for 3-4 days. INS-1 cells were preincubated in KRB buffer containing 3 mM glucose at 37˚C in a humidified incubator for 1 h. The solution was then replaced with KRB alone or KRB containing various test agents. Antennapedia, antp-PKCα, and antp-PKCε were added 1 h prior to the insulin secretion experiment. The stimulation time was carefully adjusted to standardize the time required for solution changes and sample collection. The experiments were terminated by withdrawing the supernatant solution after 1 h of incubation. The supernatant was then placed in an ice bath. Samples were kept at −20˚C until further analysis. Insulin concentration was measured using an insulin enzyme-linked immunosorbent assay kit (Morinaga Institute of Biological Science, Kanagawa, Japan). All samples were assayed in triplicate.
Statistical analysis
Data are given as means ± standard error. Statistical significance was evaluated using Student's t-test for paired observations. Multiple comparisons were examined by one-way analysis of variance with a post hoc Fisher's LSD test. A p value < 0.05 was considered statistically significant. Data were analyzed using BellCurve for Excel (Social Survey Research Information Co., Ltd., Tokyo, Japan).
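The comparisons described above can also be reproduced outside Excel. The sketch below uses hypothetical triplicate values and SciPy to run a paired t-test and a one-way ANOVA followed by unadjusted pairwise comparisons, which is one common way to approximate Fisher's LSD; the group names and numbers are placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate insulin-secretion measurements (ng/mL) per condition.
glucose_20 = np.array([12.1, 13.0, 12.6])
glucose_20_gw = np.array([18.4, 19.1, 17.8])
gw_plus_antp_pkce = np.array([13.5, 14.2, 13.9])

# Paired t-test for paired observations, as described in the text.
t_stat, p_paired = stats.ttest_rel(glucose_20, glucose_20_gw)

# One-way ANOVA across the three conditions.
f_stat, p_anova = stats.f_oneway(glucose_20, glucose_20_gw, gw_plus_antp_pkce)

# LSD-style follow-up: unadjusted pairwise t-tests, run only if the ANOVA is significant.
if p_anova < 0.05:
    for name, group in [("GW9508", glucose_20_gw), ("GW9508 + antp-PKCe", gw_plus_antp_pkce)]:
        t, p = stats.ttest_ind(glucose_20, group)
        print(f"20 mM glucose vs {name}: p = {p:.4f}")
print(f"paired t-test p = {p_paired:.4f}, ANOVA p = {p_anova:.4f}")
```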
GW9508 enhances glucose-stimulated insulin secretion from INS-1D cells
First, we examined GSIS from INS-1D cells in the presence or absence of GW9508. As expected, 10 μM GW9508 enhanced insulin secretion at a stimulatory concentration (20 mM) of glucose (Fig 1). At the substimulatory concentration (3 mM) of glucose, GW9508 did not significantly increase insulin secretion (Fig 1).
GW9508 translocates MARCKS-GFP from the plasma membrane to the cytosol at a substimulatory concentration of glucose
We used GFP-tagged MARCKS, a putative substrate for PKC [34], as a marker of PKC activation to examine the mechanism of GW9508 activation of PKC in living cells. When activated PKC phosphorylates plasma membrane-anchored MARCKS, the phosphorylated MARCKS is translocated from the plasma membrane to the cytosol [35]. This translocation can be identified by reciprocal changes in the fluorescence intensity of MARCKS-GFP between the cytosol and the plasma membrane [26]. Thus, we measured the relative fluorescence change in MARCKS-GFP in the cytosol. MARCKS translocation and [Ca 2+ ] i levels in INS-1 cells stably expressing MARCKS-GFP were monitored simultaneously. To reduce the effects of glucose on GW9508-induced signal transduction as much as possible, we used a standard extracellular solution containing 3 mM glucose, which is substimulatory in terms of electrical activity and insulin secretion. These conditions were used to evaluate PKC activation by GW9508 in real time; Fig 2 shows a representative experiment (n = 81).
At 10 μM GW9508, we noted sustained translocation of MARCKS-GFP to the cytosol (Fig 2). In contrast to the transient MARCKS-GFP translocation that followed a change in [Ca 2+ ] i induced by a depolarizing concentration of potassium (40 mM KCl), GW9508-induced translocation of MARCKS-GFP was not affected by changes in [Ca 2+ ] i , indicating that GW9508 activated PKC in a Ca 2+ -independent manner (Fig 2).
Stimulatory glucose concentration changes GW9508-induced translocation of MARCKS-GFP from a Ca 2+ -independent to a Ca 2+ -dependent mechanism
We demonstrated that GW9508 increased GSIS and did not amplify insulin secretion at 3 mM glucose (Fig 1). Next, we compared GW9508-induced MARCKS translocation at 3 mM and 20 mM glucose. The application of GW9508 resulted in sustained translocation of MARCKS-GFP to the cytosol, as well as multiple transient translocations of MARCKS-GFP that occurred just following sharp increases in [Ca 2+ ] i , at 20 mM glucose (Fig 3B). We then plotted the [Ca 2+ ] i -related increases in the F/F 0 of MARCKS-GFP in the cytosol against sharp elevations in [Ca 2+ ] i during the 5-min application of GW9508 at 3 mM or 20 mM glucose. The correlation between the increase in MARCKS and [Ca 2+ ] i elevation was weak at 3 mM glucose (r = 0.349; p < 0.01), but stronger at 20 mM glucose (r = 0.752; p < 0.01) (Fig 3C and 3D). A strong correlation between [Ca 2+ ] i elevation and the increase in MARCKS also existed below a 1.5 elevation of the [Ca 2+ ] i ratio at 20 mM glucose (r = 0.572; p < 0.01) (Fig 3E and 3F). These observations suggest that GW9508-evoked Ca 2+ signals induced the activation of PKC more robustly at a stimulatory concentration of glucose.
Profiles of PKCα and PKCε translocation in response to GW9508 at substimulatory and stimulatory concentrations of glucose
The observations above prompted us to investigate whether there were differences in the activation of PKC isoforms between substimulatory and stimulatory concentrations of glucose. We examined the GW9508-evoked translocation of PKCα-GFP and PKCε-GFP in transfected INS-1D cells using TIRFM. Only 17% of experimental cells showed transient translocation of PKCα in response to GW9508 at 3 mM glucose ( Fig 4A, Table 1). At 20 mM glucose, more than twice the number of cells responded to GW9508, i.e. 35% of cells showed transient PKCα translocation (Fig 4B, Table 1). Conversely, 51% of cells showed transient or sustained translocation of PKCε at 3 mM glucose ( Fig 4C, Table 1). Interestingly, in addition to sustained translocation of PKCε, GW9508 also elicited transient translocation of PKCε from a higher percentage of cells at 20 mM (51%) compared with 3 mM (20%) glucose (p < 0.01) (Fig 4D, Table 1). However, the response time for translocation of both PKCα and PKCε did not differ significantly between 3 mM glucose and 20 mM glucose (Table 1). These results suggest that PKCε played a dominant role in GW9508-induced MARCKS activation at substimulatory and stimulatory concentrations of glucose.
Effect of PKC inhibitors on GW9508-potentiated insulin secretion in INS-1 cells
We tested the isoform-specific roles of the two PKCs in GW9508-potentiated insulin secretion in INS-1D cells using antp-PKCα and antp-PKCε. GW9508-induced insulin secretion at 20 mM glucose was significantly reduced by 75 μM antp-PKCα (p < 0.05) and 75 μM antp-PKCε (p < 0.01) (Fig 5). Antp-PKCε inhibited insulin secretion more potently than antp-PKCα at 20 mM glucose (p < 0.05) (Fig 5). Double inhibition with antp-PKCα and ε did not decrease insulin secretion below the level inhibited by antp-PKCε alone (Fig 5). To strengthen these results, we also tested Gö 6976, an inhibitor of conventional PKC, and BIS I, a broad PKC inhibitor. While both Gö 6976 and BIS I significantly reduced GW9508-induced insulin secretion, BIS I had a stronger effect (Fig 5). These results agreed with those using antp-PKCα and antp-PKCε. Taken together, the results of the PKC inhibitor experiments suggest that both PKC isoforms, but PKCε in particular, were responsible for GW9508-potentiated insulin secretion in INS-1D cells.
Discussion
We demonstrated in this study that: 1) GW9508-induced activation of PKC was mainly characterized by sustained PKCε, which was inconsistent with changes in [Ca 2+ ] i at a substimulatory concentration of glucose (Figs 2, 3A, 3C, 4A and 4C); 2) a stimulatory concentration of glucose enabled GW9508-induced transient translocation of PKCα and PKCε that followed changes in [Ca 2+ ] i (Figs 3B, 3D, 4B and 4D); and 3) GW9508-potentiated GSIS was disrupted by two PKC inhibitory peptides, with more marked inhibition of PKCε, in INS-1D cells ( Fig 5). These results indicate that PKCε is directly activated by GW9508, independent of glucose concentration, and suggest that the shift in PKCα and PKCε activation from a stable, sustained mode to a transient mode is involved in the potentiation of GSIS. In a previous report, we demonstrated that Ca 2+ influx via voltage-dependent Ca 2+ channels (VDCCs) can activate PKC [26], and that GLP-1-induced PKC activation is transient and Ca 2+ -dependent [7]. Nevertheless, in this study PKC activation by GW9508 was independent of elevations in [Ca 2+ ] i and sustained during the application of GW9508 at a substimulatory concentration of glucose (Figs 2 and 3A). GPR40 signaling generates IP 3 and DAG via PLC activation [13]. IP 3 induces Ca 2+ release from the ER, and DAG activates PKC directly [14,36]. Thus, GPR40-mediated DAG, but not IP 3 , could play a key role in sustained PKC activation that is induced by GW9508 at a substimulatory concentration of glucose. However, a significant but weak correlation between PKC activation and [Ca 2+ ] i elevation was confirmed, which could represent a population of INS-1 cells that allowed Ca 2+ influx through VDCCs (Fig 3C and 3E). The TIRFM imaging experiments showed that GW9508 induced sustained PKCε activation, but not sustained PKCα activation, at a substimulatory concentration of glucose (Fig 4A and 4C). These results suggest that IP 3induced Ca 2+ release was insufficient for activation of the conventional PKCα isotype in response to GW9508 at a substimulatory concentration of glucose in INS-1D cells, whereas DAG was sufficient for activation of the novel PKCε isotype.
We have shown here that a stimulatory concentration of glucose altered GW9508-induced PKC activation from a Ca2+-independent to a Ca2+-dependent mechanism, despite the amplitudes of induced [Ca2+]i elevations over the entire cell being similar between the stimulatory and substimulatory concentrations of glucose (Fig 3A-3F). On the other hand, nifedipine, a blocking agent of VDCCs, attenuated the transient translocation of MARCKS-GFP that was induced by GW9508 at a stimulatory concentration of glucose (S1 Fig). The TIRFM imaging experiments showed that GW9508 increased the fraction of cells that underwent transient PKCα activation at a stimulatory concentration of glucose (Fig 4B, Table 1). High concentrations of glucose are known to stimulate insulin secretion through an intracellular pathway involving an increase in the intracellular adenosine triphosphate (ATP)/adenosine diphosphate (ADP) ratio and closure of KATP channels, followed by membrane depolarization, which leads to the activation of VDCCs and a rise in [Ca2+]i [37-40]. In our previous report, we demonstrated that Ca2+ influx was a much stronger stimulus of PKCα translocation than Ca2+ mobilization from intracellular stores in INS-1D cells [26]. In light of that result, it could be interpreted that the Ca2+ mobilization that was induced by GW9508-generated IP3 in the current study failed to translocate PKCα at the substimulatory concentration of glucose, and that KATP-induced Ca2+ influx through VDCCs activated PKCα at the stimulatory concentration of glucose. However, INS-1D cells are known to exhibit a strong electrical and insulin response to KCl stimulation and a less potent response to glucose stimulation. This could explain the lack of differentiation in the response time of PKCα between the substimulatory and stimulatory concentrations of glucose (Table 1).
TIRFM imaging also showed that GW9508 induced transient activation of PKCε, in addition to the sustained activation, at a stimulatory concentration of glucose (Fig 4D, Table 1). We demonstrated in a previous report that Ca2+ influx via VDCCs could activate PLC [26]. Among all known PLC isoforms, the PLCδ isoforms are the most sensitive to Ca2+ [41]. Thus, Ca2+ influx via VDCCs would be expected to activate PLC, and PLCδ in particular, leading to the transient activation of PKCε that we observed at the stimulatory concentration of glucose. Another recent report concluded that the rapid activation of PKCε in the plasma membrane is due to exocytotic release of ATP, with autocrine feedback activation of P2Y1 purinoceptors, which in turn induces DAG via PLC activation [42,43]. Thus, glucose-stimulated Ca2+ signaling and autocrine signaling could be sufficient to trigger the rapid activation of PLC, which activates PKCε by generating DAG.
The exact mechanism of GPR40-potentiated GSIS in β-cells remains unclear. Here, we observed that both PKCα and, to a greater degree, PKCε were involved in GW9508-potentiated insulin secretion, but only at a stimulatory concentration of glucose (Figs 1 and 5). However, 12-O-tetradecanoylphorbol 13-acetate (TPA), which binds the diacylglycerol site to potently activate PKC, has been reported to induce insulin secretion at substimulatory as well as stimulatory concentrations of glucose in insulin-producing cells [44-46]. This discrepancy could be explained by a requirement for PKC activation to exceed a threshold value for GPR40-mediated insulin secretion to occur. We also observed a larger amplitude of [Ca2+]i increase in GW9508-treated INS-1D cells compared with vehicle at a stimulatory concentration of glucose (S2 Fig). A recent report showed that fasiglifam, another GPR40 agonist, enhanced GSIS through both IP3-mediated amplification of Ca2+ oscillations and DAG-mediated augmentation of downstream secretory mechanisms independent of Ca2+ oscillations [47]. Thus, IP3 might be involved in insulin secretion only at a stimulatory concentration of glucose, as shown in this study. Recently, it was reported that GPR40 depolarizes the plasma membrane and increases background current via the transient receptor potential canonical 3 (TRPC3) channel at a substimulatory concentration of glucose in pancreatic β-cells [48]. TRPC3 belongs to a class of nonselective cation channels that are activated by PLC/PKC signaling, not by Ca2+ from the ER, resulting in the potentiation of GSIS [48]. It has also been reported that physiological concentrations of GLP-1 stimulate insulin secretion through the PKC-dependent activation of transient receptor potential melastatin 4 (TRPM4) and TRPM5, which are Na+-permeable cation channels [8]. Thus, the GW9508-induced PKCε activation at substimulatory concentrations of glucose that we observed here might have involved TRPC3 and TRPM activation, which potentiate Ca2+ influx at stimulatory concentrations of glucose.
PKCε played a dominant role over that of PKCα in GW9508-induced insulin secretion by INS-1D cells (Fig 5), consistent with evidence of the dominant translocation of PKCε induced by GW9508 (Fig 4A-4D). PKCε is involved in GSIS, and several studies have shown that the inhibition of the function of PKCε is associated with reduced GSIS [24,25,49]. Activated PKCε has been shown to localize to insulin granules, enhance biosynthetic pathways of proinsulin, and induce the processing of proinsulin to mature insulin [24,49]. Another recent study showed that novel PKCs stimulated mitochondrial ATP production via ERK1/2 signaling [50], which increased the cytosolic ATP/ADP ratio [51]. These mechanisms may contribute to the GW9508-enhanced insulin response to a stimulatory concentration of glucose via PKCε activation, as observed in our study. In contrast, the contribution of PKCα activation to GSIS remains a subject of debate. Inconsistencies in the data may be explained in part by the different effects of PKCα on the initial and late phases of secretion [52].
We also investigated the effect of γ-linolenic acid (γ-LA), a natural ligand of GPR40, and found that it elicited insulin secretion not only at the stimulatory but also at the substimulatory concentration of glucose (S5 Fig). At a substimulatory concentration of glucose, γ-LA elicited sustained PKC activation, whereas at the stimulatory concentration of glucose, PKC exhibited not only sustained activation but also transient activation (S3 and S4 Figs, S1 Table). Unlike GW9508, however, γ-LA-stimulated insulin secretion was not affected by antp-PKCα or antp-PKCε (S5 Fig). These results suggest that receptor-independent pathways rather than the GPR40 pathway are also involved in γ-LA-evoked insulin secretion, and could include the malonyl-CoA/long-chain acyl-CoA pathway and triglyceride/free fatty acid cycling via the intracellular metabolism of fatty acids [53]. Our results using GW9508, however, suggest that PKC-dependent pathways are the sole signaling pathways for GPR40-dependent insulin secretion.
In conclusion, the GPR40 agonist GW9508 induced the sustained activation of the novel isoform PKCε at substimulatory concentrations of glucose, and evoked the transient activation of the conventional isoform PKCα as well as PKCε following increases in [Ca2+]i via VDCCs at stimulatory concentrations of glucose. This activation, which was especially potent for PKCε, was involved in GW9508-potentiated GSIS. GPR40 agonists have the potential to be key drugs for increasing insulin levels with minimal risk of iatrogenic hypoglycemia in patients with type 2 diabetes. Table. Effect of γ-linolenic acid on PKC activation at 3 mM and 20 mM glucose. The response to GW9508 was further categorized into cell fractions with a sustained or transient translocation of green fluorescent protein (GFP)-tagged myristoylated alanine-rich C kinase substrate (MARCKS-GFP). Lag time = the response time of MARCKS-GFP, and is shown as mean ± standard error of the mean. * p < 0.05 vs. GW9508 at 3 mM glucose; ** p < 0.01 vs. | 2019-09-11T13:06:31.481Z | 2019-09-09T00:00:00.000 | {
"year": 2019,
"sha1": "59116c440295b757b0d37f491561de286fc0f02a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0222179&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "17d5917e4981082a7a82523ea62d661290679402",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
146212840 | pes2o/s2orc | v3-fos-license | Ultra-High-Performance Concrete Shear Keys in Concrete Bridge Superstructures
Many existing bridges have adjacent girders that utilize grouted shear keys to transfer loads laterally across the superstructure. Cracking and leaking often cause degradation of the shear key and the girder. This work investigates the potential for using non-proprietary ultra-high performance concrete (UHPC) as a grouting material for repair of deteriorated shear keys by testing bond strength between UHPC and substrate concrete surfaces that were either formed or scarified by chipping. Bond strengths were adequate for both surface textures even though texture depth was substantially less than recommended by ACI 546. Scanning electron microscopy has also been used to investigate the bonded area. This microscopic scanning has shown fly ash residue remaining on the substrate after bond failure, indicating that the supplementary cementitious materials produce much of the bond. Ongoing work for this project also includes full-scale testing of UHPC shear keys between pre-stressed channel girders.
Introduction
Precast girders are commonly used in North America due to their ease of construction, affordability, favorable aesthetic appeal, and high flexural stiffness. Some girder shapes require shear keys to ensure monolithic behavior and load transfer across the superstructure. However, shear key durability is a concern to engineers since shear key degradation can jeopardize the integrity of the structure. Transportation agencies have reported that 75% of distress in adjacent girder bridges was due to cracking along shear keys and de-bonding of the interfaces between the girders and the shear keys [1].
The work presented in this paper investigates the potential for using locally produced ultra-high performance concrete (UHPC) as a grouting material to rehabilitate shear keys in adjacent prestressed girder bridges in New Mexico, USA. Previous research at New Mexico State University has shown that locally produced UHPC has excellent mechanical and durability properties [2]. UHPC has also been shown to have good bonding characteristics [3,4] that are desirable in a potential grouting material. Additionally, the use of nonproprietary UHPC mixtures provides an economical and sustainable alternative to proprietary products. In comparison to other proprietary mixtures, locally produced UHPC has a decrease in cost of anywhere from 10 to 70% [5]. To evaluate UHPC for potential use as a grout in shear keys, bond between UHPC and substrate concrete was studied. Ongoing work also includes shrinkage testing and full-scale testing of UHPC grouted shear keys.
UHPC
UHPC typically consists of cement, ground quartz, fine sand, an accelerating admixture, a high-range water reducing admixture (HRWRA), and approximately 2% by volume of steel fibers [1]. According to ASTM C1856, UHPC has a minimum compressive strength of 120 MPa (17,000 psi). Additionally, UHPC has low porosity, high density, and excellent durability and resistance to chemical and physical attack [6][7][8][9]. Another important characteristic of UHPC is that it exhibits better bond to different surface roughnesses compared to normal grout materials [1].
Shear keys
Shear keys are primary structural components of a bridge superstructure. They are located between adjacent girders and are designed to restrain lateral displacement and facilitate load transfer between the girders. Shear key degradation may include debonding, cracking, and leaking, which can compromise the strength and serviceability of the bridge. As a result, individual beams may have to carry entire wheel loads. As damage progresses, differential displacements between girders can cause cracking in overlays and in the shear keys [10].
Several geometric configurations have been studied to improve shear key performance. Some configurations include the design of a shear key at partial, medium, and full depth. Grouting materials, such as non-shrink grout, epoxy grout, engineered cementitious composites, and UHPC have also been investigated [11].
Transverse post-tensioning improves load transfer across shear keys by creating a more monolithic behavior [10]. Post-tensioning also protects the bond between the shear key grout and a girder by keeping the bonded interface compressed. Other studies have shown that the post-tensioning force required to ensure monolithic behavior is dependent on the width of the bridge and the depth of the girder [12,13].
UHPC and substrate mixtures
Proportions for the UHPC and substrate concrete mixtures are presented in Tables 1 and 2, respectively. The UHPC mixture consisted of Type I/II portland cement, silica fume, fly ash, HRWRA, water, and 1.5% steel fibers by volume. The sand, cement, and fly ash were obtained from local sources, and the silica fume, steel fibers, and HRWRA were obtained from regional suppliers.
Bonded specimens
Specimens were produced by casting the substrate, moist curing the substrate to an age of seven days, casting the UHPC portion of the specimen, and then moist curing for another seven days. The composite specimens were then tested for seven and 28 day strengths (measured from the time that the UHPC portion of the specimen was cast).
Bond strength testing
Two different tests are being performed to assess the bond strength between the substrate and the UHPC grout. These tests are the slant shear and direct tension tests. These tests are being performed on composite specimens with substrate textures that included a formed surface and chipped textures with depths of 0.100 mm (0.004 in), 0.128 mm (0.00503 in), and 1.126 mm (0.0496 in). Texture depth was determined in accordance with ASTM E965. Surface preparation varies from formed to medium roughness. The formed surface was prepared by using a stiff wire brush to dust off loose material and the chipped surfaces were prepared by using an air hammer to chip off the formed surface. Figure 1 illustrates the two substrate textures.
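ASTM E965 estimates texture depth with the volumetric (sand-patch) method, in which a known volume of fine material is spread into a circular patch and the mean texture depth (MTD) is taken as that volume divided by the patch area, MTD = 4V/(πD²). The minimal Python sketch below applies this standard relation; the sand volume and patch diameter in the example are hypothetical values, not measurements from this study.
import math

def mean_texture_depth(sand_volume_mm3, patch_diameter_mm):
    # ASTM E965 sand-patch estimate: MTD = 4V / (pi * D^2)
    return 4.0 * sand_volume_mm3 / (math.pi * patch_diameter_mm ** 2)

# Hypothetical measurement: 25,000 mm^3 of sand spread into a 150 mm diameter circle
mtd = mean_texture_depth(25_000, 150)
print(f"mean texture depth = {mtd:.3f} mm")  # ~1.415 mm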
Slant-shear testing
The slant-shear strength of the composite bonded specimens was determined in accordance with ASTM C882. Normal strength concrete was cast in a 152 by 304 mm (6 by 12 in) cylindrical mold. The mold uses a plastic dummy insert to produce a 60° incline. The specimens are moist cured for seven days prior to texturing (if needed) and placement of UHPC. After demolding, the composite specimens are cured at ambient conditions (20°C [68°F] and 30% relative humidity).
After seven more days of curing, the specimens are tested in compression to determine the shear strength of the bond. The slant-shear specimens were tested in compression, as shown in Figure 2, until failure. Three possible failure modes can occur: failure in the UHPC, failure of the bond, and failure in the substrate. Shear strength of the bond was calculated using the following equation: τ = (P sin α cos α) / A, where τ = shear stress, P = ultimate load, A = cross-sectional area, and α = angle of the bonded interface from horizontal (60°).
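As a quick numerical check, the short Python sketch below evaluates the shear stress on the inclined interface. The sin α · cos α resolution of the axial load is the standard slant-shear relation implied by the variable definitions above, and is assumed here since the paper's printed equation is not reproduced; the specimen diameter and failure load in the example are likewise hypothetical.
import math

def slant_shear_stress(load_n, diameter_mm, alpha_deg=60.0):
    # Shear stress on a plane inclined alpha degrees from the horizontal cross-section,
    # assuming tau = P * sin(alpha) * cos(alpha) / A, with A the cross-sectional area.
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    alpha = math.radians(alpha_deg)
    return load_n * math.sin(alpha) * math.cos(alpha) / area_mm2  # N/mm^2 = MPa

# Hypothetical example: a 152 mm (6 in) diameter specimen failing at 400 kN
tau = slant_shear_stress(load_n=400e3, diameter_mm=152.0)
print(f"bond shear stress = {tau:.1f} MPa")  # ~9.5 MPa; ACI 546 minimums cited: 7 MPa (7 d), 14 MPa (28 d)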
Direct tension testing
Direct tensile strength of bonded specimens is being evaluated using a modified version of ASTM C1583. The standard test uses a pull-off device to determine the strength from cored slabs. For this research, that device was replaced with the tension test apparatus shown in Figure 3 to test cylindrical cores taken from prismatic specimens.
Scanning electron microscopy
Scanning electron microscopy was performed to provide a detailed characterization of the fractured surfaces.
Images were collected for both formed and chipped surfaces to provide a visual explanation of the microstructure of each fractured specimen.
Table 3 presents the results obtained from slant-shear testing. According to ACI 546 [14], the recommended minimum slant-shear strength is 7 MPa (1000 psi) at seven days and 14 MPa (2000 psi) at 28 days. Table 3 shows that all of the strengths were adequate at seven days and that the average bond strengths were adequate at 28 days. It is important to note that ACI 546 [14] recommends a minimum texture depth of 6.25 mm (0.25 in), which is much greater than the depth of even the deepest chipped textures studied in this work. However, it is difficult to produce adequate texture in the field during shear key repairs, so the texture depths investigated in this work are probably representative of what should be expected in the field.
Table 4 presents the results obtained from the direct tension tests. According to ACI 546 [14], the recommended bond strength from direct tension tests should be 1 MPa (150 psi). Table 4 shows that the average strengths for both textures meet the ACI recommendation. Again, adequate bond strength was achieved even with inadequate texture.
Scanning electron microscopy
A sample of the scanning electron microscopy images obtained in this work is presented in Figure 4. In the left image, the arrow indicates fly ash residue that remained bonded to the substrate surface after fracture. This provides an indication that the supplementary cementitious materials are primarily responsible for the bond strength. The second image in Figure 4 shows steel fibers protruding from the UHPC after it debonded from the substrate. This observation could indicate that the steel fibers might obstruct some of the bond or that some of the fibers are not participating in the behavior of the UHPC.
Ongoing work
Ongoing work associated with this project includes:
1. Assessment of early-age and longer-term shrinkage of the UHPC mixture, since shrinkage of the UHPC grout can cause both shear and tensile stresses on the bond between the grout and a girder.
2. Additional scanning electron microscopy imaging to study the quality of the bond between the UHPC grout and various substrate surfaces.
3. Full-scale structural testing of UHPC shear keys between adjacent pre-stressed channel girders.
Conclusions
The following conclusions have been drawn from this work:
1. Adequate bond strength can be achieved between UHPC and substrate concrete with the proper surface preparation, even when texture depth is inadequate.
2. Quality of the bond seems to be largely dependent upon supplementary cementitious material particles penetrating into the substrate concrete. | 2019-05-07T14:15:59.189Z | 2020-10-22T00:00:00.000 | {
"year": 2019,
"sha1": "a2fb359946c76883d67e5c7c77f09a90eee14f0c",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/20/matecconf_tran-set2019_07006.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "18f04e76e2a2303c22f83dcfd41aeac6d10e6c8a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
247379520 | pes2o/s2orc | v3-fos-license | Depression among Korean men during COVID-19: social media and physical activity
Background: Due to the COVID-19 pandemic, Korean men are withdrawing from social interactions and feeling depressed due to financial difficulties. This depression can be reduced through physical activities and use of social media for communication. Therefore, this study aims to examine activities effective in reducing depression by analyzing the differences in physical activities and levels of social media addiction, depending on the level of depression. Methods: A total of 591 Korean males affected by the COVID-19 pandemic participated in the study. Results were extracted by frequency analysis, descriptive statistical analysis, chi-square test, and t-test using SPSS 25.0 (IBM Corp., Armonk, NY, USA). Results: Chi-square test, which analyzed differences in demographic characteristics based on the level of depression, revealed significant differences in monthly average family incomes, levels of participation in physical leisure activities, extent of social media usage and kinds of social media used. Significant differences existed in high-intensity physical activities as well as social media non-addiction and addiction depending on the level of depression. Conclusions: Since COVID-19 is increasing depression in males, government or health-related institutes need to provide spacious areas to engage in physical activities, which may help reduce suicidal ideation and restore mental health in Korean males. Moreover, there is a need to develop diverse health-related social media contents, which can help reduce depression. Thus, it has been thought that places to perform physical activities and sufficient communication with others on social media can prove helpful in managing depression.
Introduction
Social distancing measures enforced in response to the prolonged COVID-19 pandemic have greatly impacted people's daily lives. In Korea, indoor facilities such as restaurants, gyms, indoor golf ranges, billiard halls, and badminton courts were closed after 9:00 PM to minimize social contact [1]. Such restrictions have increased feelings of isolation and psychological anxiety among the population [2].
Bueno-Notivol et al. [3] suggested that feelings of depression were seven times greater following the COVID-19 pandemic. In this regard, such feelings of depression have been named the "Covid blues" or "Covid depression" in Korea, and the Korean government has been making various efforts to reduce this phenomenon [4]. Such feelings of depression can have negative psychological effects on anyone; however, men are particularly vulnerable as they tend not to admit that they are feeling depressed and hence do not receive timely treatment. This can worsen depression [5]. According to Jeon [6], physical activity among Korean people has decreased and obesity has increased while working from home due to COVID-19. This increase in obesity could further lead to depression.
Chang [7] stated that depression due to COVID-19 has become severe enough that the suicide rate may increase among Korean men in their 30s and 50s who are experiencing financial hardship. According to research on "The Present State of National Mental Health During the COVID-19" conducted by the Ministry of Health and Welfare in 2021 [8], the risk of suicide among females increased from 9.2% in 2020 to 15.1% in 2021, and among males it increased from 10.1% in 2020 to 17.4% in 2021. In 2020, only 17.5% of males belonged to a high-risk group for depression; however, this increased to 24.2% in 2021, which means COVID-19 has affected the depression rate of males significantly. Joshua [9] suggested that symptoms of depression can manifest in men through anger, irritability, and aggression, and that emotional stability can be improved through the provision of regular physical activity. In particular, Kim et al. [10] explained that Korean males are 44% more likely to be exposed to social isolation than females, because males have weaker relationships with friends or families and their financial difficulties could cause disharmony with families or acquaintances. This suggests that the psychological wellbeing of Korean men in the wake of the COVID-19 pandemic should be highly prioritized and that there is an urgent need to identify the measures that may reduce their depression.
Meanwhile, physical activity serves to provide emotional stability by improving interpersonal relationships and strengthening social communication during the COVID-19 pandemic [11]. Accessible physical activities that can be performed daily, such as walking and jogging, can improve mood and promote physical and mental recovery, which helps to actively cope with the challenges of the COVID-19 pandemic [12]. Physical activity can increase the quality of life by positively impacting psychological health, wellbeing, and cognitive function, leading to the reduction of depressive symptoms [13]. Thus, moderate physical activity may help prevent symptoms of depression and anxiety brought on by the COVID-19 pandemic [14].
Oh [15] stated that the use of smartphones and social media among Korean people is increasing due to the COVID-19 pandemic. This excessive use is leading to social media addiction, which can worsen depression and impulsivity and harm physical health. Specifically, Yeon [16] revealed that 25% of people who excessively use social media, such as Facebook, Instagram, and Twitter, are 2.7 times more likely to feel depressed than average users.
In this regard, the Korea Health Promotion Institute [17] suggested physical activity and interpersonal communication through social media as a strategy to mitigate depression and anxiety due to social isolation during the COVID-19 pandemic, considering that 40.7% of Koreans had reported feelings of depression and anxiety in response to restrictions on social gatherings. Singh et al. [18] conducted a study among elderly people with mild depression to understand the effect of high and low-intensity resistance training on combating depression. They demonstrated that high-intensity training was more effective in improving depression than low-intensity training. This result was supported by Kanamori et al. [19], who also found high-intensity physical activities to be more effective in reducing depression among the Japanese elderly than low-intensity physical activities. Accordingly, this study examined if high-intensity physical activities could reduce depression among Korean adults.
Consequently, social media usage has increased due to social distancing measures as people seek connection and social stimulation through the only means available [20]. A general increase in social media usage was observed throughout the COVID-19 pandemic, with text communication through Facebook, Instagram, and WhatsApp increasing by 70% [21]. Recent studies have exhibited that sharing of positive health information through social media to reduce negative emotions and feelings of social disconnection, may lead to increase in healthy behaviors [22].
Correa et al. [23] revealed that while people tended to prefer receiving news through traditional outlets, they accepted information more meaningfully when it came from social media. Similarly, Houston and Franken [24] reported that following the psychological trauma from the 2011 tornado in Joplin, Missouri, USA, increased rates of communication and information sharing through social media helped improve coping behavior. Similarly, connecting with friends and family can lower anxiety in the face of difficult situations [25]. Furthermore, Galea et al. [26] revealed that acquiring information through audio or video content accessible through social media was a better alternative compared to the typically used delivery systems by the government such as e-mail or text message. As such, communication through social media may help restore mental health in a post-COVID-19 world [27]. Therefore, it is necessary to investigate whether interpersonal communication using social media is indeed effective for Korean men who are experiencing depressive symptoms due to social isolation or financial strain in the wake of COVID-19.
As mentioned earlier, physical activity and interpersonal communication through social media can reduce depression. It is not clear, however, what specific kinds of activities may effectively reduce depression among Korean men facing financial and social challenges due to the COVID-19 pandemic. This study aims to provide preliminary data for improving mental health in men by identifying the kinds of physical activity and social media engagement that is most effective for alleviating depression.
The following hypotheses were established: Hypothesis 1: There are differences in demographic characteristics according to the level of depression experienced by Korean men during the COVID-19 pandemic.
Hypothesis 2: There will be differences in physical activity levels according to the level of depression experienced by Korean men during the COVID-19 pandemic.
Hypothesis 3: There will be differences in extent of social media usage or behaviors of social media addiction according to the level of depression experienced by Korean men during the COVID-19 pandemic.
Materials and methods
This study aimed to identify activities that are effective in alleviating depression among Korean men by examining differences in demographic information, physical activity levels, and social media usage corresponding to the differing levels of depressive symptoms. Data was collected over two periods: for seven days between November 13, 2020, and November 19, 2020, and for 11 days between September 3, 2021, and September 14, 2021.
Study participants
The survey targeted Korean adult males aged 19 to 65 years. For the first survey, 275 participants were recruited during the seven-day period by Embrain (EMBRAIN Corp., Gangnam, SEOUL, KOREA), a specialized survey firm. For the next survey, conducted during the 11-day period, 316 participants were recruited by Panelnow (PanelNow Corp., jung-gu, SEOUL, KOREA). A total of 591 participants joined this study. Informed consent was received from the participants before the survey. If the participants chose their gender as female, the survey was terminated immediately in order to include only male participants in the survey.
The sample size was based on standards by Comrey and Lee [28], which scaled sample size with the potential validity of a research study (50 = very poor, 100 = poor, 200 = fair, 300 = good, 500 = very good, 1000 or more = excellent). The sampling method used was convenience sampling, a type of non-probability sampling. The questionnaires were self-evaluative.
Demographic characteristics
Five demographic characteristics were measured through the questionnaire: age, average monthly household income, highest educational degree, physical activity level, and whether participants had a physical activity companion and the type of companion. The data were collected using the 2019 National Leisure Activity Survey conducted by the Ministry of Culture, Sports, and Tourism [29].
Depression diagnostic scale
To measure the level of depression experienced by Korean men, the Patient Health Questionnaire-9 (PHQ-9), which has been used in previous studies [30,31], was revised for this study upon expert consultation with a professor and a doctoral researcher in sports sociology. This scale is a nine-item depression diagnostic tool that outlines the criteria for diagnosing mental disorders, including major depressive disorder [32]. Items on the scale range from 0 ("never") to 3 ("almost daily"). The maximum total score for the PHQ-9 scale is 27, with the standard scores classifying symptoms of depression as minimal (0-4), mild (5-9), moderate (10-14), and severe (≥15). A score above 10 is suggestive of depression [26]. The Cronbach's α of the PHQ-9 was 0.915, indicating the reliability of the measurement tool.
Social media addiction scale
The Bergen Facebook Addiction Scale (BFAS), revised by Andreassen et al. [33], was revised upon expert consultation with a professor and a doctoral researcher in sports sociology for use as a social media addiction scale. The types of social media platforms included in the questionnaire involved those that were used widely in Korea, such as YouTube, Facebook, Twitter, Instagram, and KakaoTalk.
The questionnaire consisted of a total of six questions, for which a total BFAS score above 19 indicated a high degree of social media addiction [34]. The measurement tool comprised a five-point Likert scale ranging from 1 ("very rarely") to 5 ("very often"). The Cronbach's α of BFAS was 0.917, indicating the reliability of the measurement tool.
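For concreteness, the scoring rules described above for the two scales can be expressed as a minimal Python sketch. The cut-offs (PHQ-9 bands of 0-4, 5-9, 10-14, and ≥15, and a BFAS total above 19) come from the text, while the respondent values in the example are hypothetical.
def classify_phq9(total):
    # PHQ-9 total ranges from 0 to 27; bands follow the cut-offs cited in the text.
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    return "severe"

def is_social_media_addicted(bfas_items):
    # Six BFAS items, each scored 1-5; a total above 19 is treated as a high degree of addiction.
    assert len(bfas_items) == 6 and all(1 <= x <= 5 for x in bfas_items)
    return sum(bfas_items) > 19

# Hypothetical respondent
print(classify_phq9(11))                             # "moderate" (a score above 10 suggests depression)
print(is_social_media_addicted([4, 3, 4, 3, 3, 4]))  # total 21 -> True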
Based on the Topline Questionnaire used in a study by Zhao & Zhou [35], the extent of social media usage was measured with two questions: (1) the extent of social media use and (2) the type of social media platform used most frequently.
Measurement of physical activity levels
A shortened version of the International Physical Activity Questionnaire (IPAQ) was used to measure the participants' physical activity levels. Furthermore, following the IPAQ guidelines, the durations of high-intensity and moderate-intensity physical activity were converted to metabolic equivalent of task (MET) values for analysis; a MET expresses energy expenditure as a multiple of energy consumption at rest.
Weekly MET scores were calculated as follows: intensities of 8.0 METs and 4.0 METs for high-intensity and moderate-intensity physical activity, respectively, were multiplied by the frequency and duration of each activity to derive and score the weekly physical activity levels [36]. That is, MET-hours/week = frequency of physical activity (sessions/week) × duration of each session (hours/session) × intensity of the activity (METs).
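The same calculation can be written as a small Python sketch. The 8.0 and 4.0 MET intensities come from the text, whereas the 3.3 MET value for walking follows the usual IPAQ scoring convention and is an assumption here, as is the example respondent's activity pattern.
# MET intensities: 8.0 (high) and 4.0 (moderate) as stated above; 3.3 for walking is an assumption
# based on the usual IPAQ scoring convention.
MET_VALUES = {"high": 8.0, "moderate": 4.0, "walking": 3.3}

def weekly_met_hours(activity, sessions_per_week, hours_per_session):
    # MET-hours/week = frequency x duration x intensity, as described above.
    return sessions_per_week * hours_per_session * MET_VALUES[activity]

# Hypothetical respondent: three 30-minute runs plus two 1-hour moderate sessions per week
total = weekly_met_hours("high", 3, 0.5) + weekly_met_hours("moderate", 2, 1.0)
print(f"{total:.1f} MET-hours/week")  # 12.0 + 8.0 = 20.0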
Participants were asked to provide the frequency and duration of physical activity classified into high-intensity, moderate-intensity, and walking (suggesting low-intensity) for each session lasting for a minimum of 10 minutes in the previous week. High-intensity physical activity included not only sports but also daily activities such as running, hiking, biking at a high speed, and carrying heavy objects. Moderate-intensity physical activity referred to activities such as swimming, tennis, and hiking. The World Health Organization (WHO) has recommended at least 150 minutes of moderate-intensity, or 75 minutes of high-intensity physical activity per week for adults 18 years or older [37].
Data processing
The data were processed through coding and data cleaning and analyzed using SPSS version 25.0 (IBM Corp., Armonk, NY, USA). The procedure for data processing was as follows. First, frequency analysis and descriptive statistical analysis were performed on the participants' demographic characteristics. Second, the skewness and kurtosis values for the PHQ-9 and BFAS scales were calculated following West et al. [38]. The skewness and kurtosis values for the PHQ-9 and BFAS scales were -0.190 to 1.282 and -1.016 to 0.749, respectively, forming a normal distribution. Third, the reliability of the measurement tool was examined using Cronbach's α. Fourth, a chi-square test was performed to identify differences in demographic characteristics according to the level of depression (Hypothesis 1). Fifth, a t-test was performed to identify differences in the levels of physical activity and social media addiction according to the level of depression (Hypotheses 2 and 3). The effect size was analyzed with reference to Cohen [39], who defined a d-value of 0.2 as a small effect, 0.5 as a medium effect, and 0.8 as a large effect.
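The analyses were run in SPSS, but the equivalent chi-square test, independent-samples t-test, and Cohen's d can be reproduced with pandas and SciPy, as in the minimal Python sketch below; the data frame, column names, and values are hypothetical stand-ins for the survey variables, not the study's data.
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical data: 'depressed' flags PHQ-9 scores above the cut-off described earlier.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "depressed": rng.integers(0, 2, 200).astype(bool),
    "income_bracket": rng.integers(1, 5, 200),
    "high_intensity_met": rng.gamma(2.0, 8.0, 200),
})

# Chi-square test of independence (demographic characteristic vs. depression level).
chi2, p_chi, dof, _ = stats.chi2_contingency(pd.crosstab(df["depressed"], df["income_bracket"]))

# Independent-samples t-test on weekly high-intensity MET scores, plus Cohen's d.
a = df.loc[df["depressed"], "high_intensity_met"]
b = df.loc[~df["depressed"], "high_intensity_met"]
t, p_t = stats.ttest_ind(a, b)
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
cohens_d = (a.mean() - b.mean()) / pooled_sd
print(f"chi2={chi2:.2f} (p={p_chi:.3f}); t={t:.2f} (p={p_t:.3f}); d={cohens_d:.2f}")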
Results
The participants' demographic characteristics are exhibited in Table 1. A total of 591 men participated in this study, and the average age of the participants was 44.03 years (SD = 13.458). Specifically, 118 participants were in their 20s (20.0%), 119 were in their 30s (20.1%), 118 were in their 40s (20.0%), 117 were in their 50s (19.8%), and 119 were aged 60-65 (20.1%). The most frequently reported average monthly household income was ₩2,010,000-₩4,000,000, observed in 206 participants (34.9%). The most frequent highest educational degree was a four-year university degree, observed in 337 participants (57.0%).
First, to verify Hypothesis 1, a chi-square test was carried out. Significant differences were identified in monthly average family income (χ² = 14.428, p = 0.006), level of participation in physical leisure activities (χ² = 12.014, p = 0.014), extent of social media usage (χ² = 11.583, p = 0.021), and kind of social media used (χ² = 11.276, p = 0.024). Table 2 summarizes the results of a t-test analysis regarding the differences in levels of physical activity and extent of social media addiction according to the PHQ-9 level of depression, used to verify Hypotheses 2 and 3. Specifically, the results demonstrate significant differences in high-intensity physical activity (t = -2.436, p = 0.015), social media non-addiction (t = -2.829, p = 0.005), and social media addiction (t = -5.757, p = 0.000). No significant difference was observed in low-intensity physical activity.
Discussion
This study aimed to examine whether there is a difference in demographic characteristics, physical activity levels, and social media use according to the reported level of depression among Korean men in the wake of the COVID-19 pandemic. The findings are summarized as follows.
First, the demographic characteristics of Korean men were analyzed using a chi-square test, based on their reported depression levels. Our results indicate significant differences among average monthly household income and social media use. Difficulties in school, work, and personal finances brought about by the prolonged social restrictions placed due to COVID-19 have led to high levels of depression [40]. Additionally, Pierce et al. [41] reported that young people tend to suffer higher levels of psychological distress due to unemployment, reduced income, and layoffs compared to older people. Furthermore, a study by Lee [42] revealed that Korea saw a dramatic increase in suicide rates. This study also revealed that men suffer from depression severe enough to contemplate suicide, whereas women are more likely to report feeling depressed and express this more openly compared to men. Such states of depression can be attributed to a decline in mental health due to job loss and debt, exacerbated by the prolonged COVID-19 pandemic [43]. In this regard, a new term, "n-po generation" has appeared in Korea, similar to the term, "Ikea generation" [44]. The "n-po generation" refers to a generation of people who have given up on dating, marriage, becoming a homeowner, owning property, building a career, fostering relationships, and the like. As such, the prolongation of the COVID-19 pandemic has led people to give up on basic aspects of life, which in turn, negatively affects individuals' mental states. Therefore, the government should play an active role in establishing measures to reduce depression through fundamental interventions that can alleviate the economic struggles of young adults.
Second, an analysis of the difference in physical activity levels according to levels of depression among Korean men revealed a significant difference in levels of high-intensity physical activity. This indicates a high frequency of high-intensity physical activity among men who may be feeling depressed. Many studies have suggested that participation in moderate physical activity can effectively reduce symptoms of depression [45-47]. Nevertheless, the findings of this study demonstrate no significant difference for moderate-intensity physical activity, revealing a significant difference only for high-intensity physical activity.
Characteristically, Koreans are conscious of the opinions of others [48]. In particular, taking pictures following a vigorous workout session and posting them on Instagram has become a trend among Korean men in their 20s and 30s [49]. Regarding this phenomenon, Chung [50] describes the recent attribution of value to external appearances, such as a fit body, which has led to those who do not fit these bodily standards feeling depressed. In this way, an increase in the number of people wishing to gain muscle or change their appearance through dieting may have led to high levels of depression in those engaging in high-intensity physical activity.
It is worth noting that men who participate in highintensity physical activity regularly are most likely athletic or sporting enthusiasts. Due to restrictions on the use of facilities such as gyms, participation in regular physical activity became increasingly difficult [51]. Thus, Korean men have been purchasing equipment in order to engage in physical activity at home [52]. Despite this, the amount of physical activity at home may be insufficient compared to what participants may have been used to at the gym [53].
According to Dumitru et al. [54], the desire to engage in physical activity is more commonly observed in men than women, and men become stressed if they are not able to exercise. Moreover, Alcaraz-Ibáñez et al. [55] suggested that one may feel depressed when their physical activity needs are not fulfilled. In a similar manner, Korean men have reported feeling higher levels of depression as their physical activity levels decreased due to restricted access to spaces in which they could engage in physical activity. As such, it is necessary to establish a strategy to enable participation in physical activity in a spacious location, while maintaining social distancing [56].
Lin et al. [57] reported that men who engaged in high-intensity physical activity demonstrated lower levels of depression. In this study, however, there were contradictory findings, as higher levels of depression were observed in those who engaged in high-intensity physical activity. There were no significant differences in moderate-intensity physical activities depending on the level of depression. This may be because males have consistently tended to maintain moderate-intensity physical activities more than females [57]. Particularly, according to Jeon [58], men in their 40s have menopausal symptoms and experience a fall in masculinity and an increase in weight because of a decrease in testosterone. Therefore, they may feel depressed because of this physical inconvenience, regardless of their physical activities. The absence of significant differences in moderate-intensity physical activities may result from these personal difficulties faced by Korean males. As such, various social problems may lead to a reduction in physical activity levels in men. Therefore, establishing measures for men to participate in physical activities with COVID-19 restrictions in place can allow Korean men to overcome feelings of depression.
Third, an analysis of the extent of social media addiction according to the levels of depression among Korean men indicated feelings of depression for both "social media non-addiction" and "social media addiction" groups. According to the "2021 Social Media Market and Current Status Analysis Report" published by DMCREPORT [59], 89% of the Korean population uses social media. The usage rate in Korea is high compared to the global population. Most Korean citizens use social media platforms such as YouTube, which is supported by the findings of this study, indicating that YouTube has the greatest influence among the various social media platforms.
A study by Son and Heo [60], which explored feelings of helplessness and loneliness arising from social media use in Koreans during the COVID-19 pandemic, demonstrated that feelings of helplessness and loneliness were felt more strongly when an increased amount of time was spent on social media. On the other hand, the use of social media platforms made for social communication, such as KakaoTalk and Instagram, reduced loneliness. Additionally, more time spent on YouTube during the COVID-19 lockdown reduced viewers' sleep time [61] and subsequently increased depressive symptoms [62]. Thus, the appropriate use of social media, which includes the right kinds of applications and platforms as well as implementing the right degree of use, may be an effective method to improve depression in response to the COVID-19 pandemic. Examples include promoting personal happiness through YouTube and social communication through KakaoTalk and Instagram.
Further, social media is not simply used to maintain a social network with acquaintances. It can enable the provision of emotional support for those who are struggling due to the pandemic and provide support and encouragement for particular groups. For example, medical staff and frontline workers, who experienced high levels of stress, trauma, and isolation, received support through social media groups, which helped them overcome the psychological stress of COVID-19 [63]. Ongoing social communication, including through social media, can lead to reduced stress, positive mood, and the promotion of social interactions [64]. Such active use of social media can sustain communication with friends and family and is effective in reducing isolation and boredom, and ultimately in tackling anxiety and other prolonged mental issues due to COVID-19 [65]. Therefore, the development of various social media content that aims to enhance communication among users at the national and local level may increase mental stability and reduce depression. This can be compared to social media use that simply connects close acquaintances. Our findings suggest that wider and broader social connections may address issues of depression in this way.
The limitations of this study are as follows: first, this is a cross-sectional study, which made it difficult to control for individual psychological and social circumstances unrelated to COVID-19 prior to the survey. Therefore, in the future, a longitudinal study controlling for individual situations may facilitate higher validity and reliability of findings. Second, this study was conducted online using a professional survey company, which may have led to differences in the completion of the questionnaire depending on the participants' computer skills. Therefore, a brief training on questionnaire completion may lead to more consistent results. Third, as this study used a questionnaire using the IPAQ scale to measure physical activity levels, the results may not have reflected the participants' usual physical activity levels as individual social, physical, and psychological states were not controlled for. Therefore, more detailed results may be obtained if the study was conducted among participants with similar lifestyles.
Conclusions
In this study, frequency analysis, descriptive statistical analysis, chi-square tests, and t-tests were performed to understand the differences in demographic characteristics, physical activities, and extent of social media addiction depending on the level of depression among Korean males in the COVID-19 situation.
When analyzing differences in demographic characteristics depending on the level of depression, significant differences were found in monthly average family incomes, levels of participation in physical leisure activities, levels of social media usage, and kinds of social media used. There were also significant differences in high-intensity physical activities, social media non-addiction, and social media addiction depending on the level of depression. As the continuation of COVID-19 increases depression in males, the government or health-related institutions need to provide spacious areas to engage in physical activities to help reduce suicidal ideation and restore mental health in Korean men. Moreover, there is a need to develop diverse health-related social media content to reduce depression by actively using the social media platforms most often used by males. Therefore, it is suggested that the provision of adequate spaces for physical activity and sufficient communication with others on social media can be helpful in combating depression.
Ethics approval and consent to participate
This study obtained approval of the Chung-Ang University Research Ethics Board, in accordance with the 'Helsinki Declaration' enacted in 1964 (041078-202009-HRSB-259-01). Informed consent was received from the participants before the survey. | 2022-03-11T16:12:23.608Z | 2022-03-02T00:00:00.000 | {
"year": 2022,
"sha1": "9fbb3f45cd237ae4d2c9c91c8c4f2f831f0feebb",
"oa_license": "CCBY",
"oa_url": "https://www.imrpress.com/journal/JOMH/18/3/10.31083/j.jomh1803068/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9a951f592f8bf7d14c9be1b952e5e2d3c4a8decf",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216364346 | pes2o/s2orc | v3-fos-license | Junior high school mathematics teachers’ knowledge in calculators
Helping junior high school students to use calculators and computers for problem solving and investigating real-life situations is an objective of the junior high school mathematics curriculum in Ghana. Ironically, there is a technological drought in junior high school mathematics instruction in Ghana, with a suspicion that mathematics teachers' competency in the use of calculators for teaching may be the source of this lack of use. This study sought to establish a correlation between junior high school mathematics teachers' competence and the motivation supporting the use of calculators in teaching. A descriptive survey comprising a test and a questionnaire was used to collect data from junior high school mathematics teachers in an educational district in Ghana. Teacher characteristics such as educational attainment, age, and gender in relation to teachers' competency in the use of calculators were discussed in the study. The results showed that about 70% of the teachers exhibited a low level of calculator competence. Moreover, novice teachers outperformed expert teachers in the calculator competency-based test. Additionally, mathematics teachers' enthusiasm for using calculators in teaching was directly associated with the teachers' level of competency. The findings may send a signal to stakeholders in their efforts to revise the Ghana JHS curriculum in order to actualize the curriculum's desire for the integration of technology in the teaching and learning of JHS mathematics.
Introduction
Calculators, just as any technological tool, have become an integral part of teaching and learning mathematics in Ghana (Ministry of Education, 2007) and elsewhere in Africa (Ochanda & Indoshi, 2011). The technology principle of the National Council of Teachers of Mathematics asserts that "Technology is essential in teaching and learning of mathematics; it influences the mathematics that is taught and enhances students' learning" (NCTM, 2000, p. 24). The document further promotes calculators and other technologies as essential mathematical tools, to the extent that using calculators has become both an innovation and an aid to entering the technological world. However, a teacher's ability to engage appropriately with this technological tool depends on his/her knowledge and skill with the tool (Ely, 1999; Pierce & Ball, 2009). (Note: hereafter, and unless otherwise stated in this paper, calculator refers to scientific calculator).
Whereas some opponents argue that the use of calculators threatens students' ability to develop basic computation skills and breeds laziness among students (Mason, 2010; Satianov, 2015), its usefulness in mathematics instruction cannot be underestimated. Miles (2008), for instance, suggested that calculators aid in the development of mathematical concepts. Miles further asserted that calculators could be used as self-drill apparatus that serve to enhance the problem-solving abilities of learners. Additionally, Miles held the view that the use of calculators reduced the amount of time spent in computation. In a similar measure, technological affordances such as calculators exert an enormous impact on the teaching and learning of mathematics. Arguably, calculator users are able to solve problems that hitherto would have been very difficult, if not impossible. Additionally, greater opportunities for more exploration, better representation, and effective communication of results are demonstrated through the power of calculators (NCTM, 2000).
Given the widespread agreement among mathematics educationists on the necessity to equip teachers with the needed knowledge of technology in the use of calculators and other technological tools, it follows that teachers are trained in how to incorporate calculators in mathematical instruction and learning (Salani, 2013). In this light, recommendations have been made in Ghana for a modification of the mathematics curriculum for teacher training, to equip trainees in the use of calculators in mathematics instruction (Asare-Inkoom, Apau Gyamerah, & Najimudeen, 2008; Mereku et al., 2007). Colleges of Education (CoE) in Ghana have been accredited to provide content, pedagogic, and technological training of basic school mathematics teachers (Government of Ghana, 2012). The training of pre-service mathematics teachers in CoEs is to equip them with the knowledge of how to use, apply, and integrate technology in the teaching of mathematics. Yet, mathematics teachers in junior high schools (JHS), similar to secondary school mathematics teachers in Ghana (Agyei & Voogt, 2011), seldom integrate technology in their instruction. Evidence of this is the report by Amanyi, Sigme, & Lloyd (2016) that, in Ghana, JHS mathematics lessons are primarily characterized by paper-pencil calculations. It is not as if the education system inhibits the use of calculators in teaching mathematics at the JHS, although the use of calculators in the Basic Education Certificate Examination - BECE (the external examination taken by students at the completion of JHS) has been a subject of debate (Mereku et al., 2007); its use in teaching is not only encouraged but mandatory. This is because the JHS mathematics curriculum intends to help students use calculators and computers for problem solving and investigation of real-life situations (Ministry of Education, 2007).
The limited use of calculators in junior high school mathematics lessons raises questions about teachers' ability to use calculators in Ghana. Whilst junior high school mathematics teachers' competence in the use of scientific calculators remains uncertain, there is ample evidence that Ghana's basic school mathematics teachers are challenged in the use of other types of calculators. For instance, Wilmot (2015) found that Ghanaian teachers lag behind their USA counterparts on a KAT item which was easily answerable using the graphical calculator, and a deficiency on the part of the Ghanaian teachers might have caused this gap. Despite a plethora of research on teachers' attitudes, beliefs, and perceptions toward the use of calculators (Adabor, 2008), it seems there is less research on teachers' level of competency in the use of calculators.
The purpose of this study was to examine Ghanaian junior high school mathematics teachers' competence to use calculators for mathematical computation and their ability to use the calculators in teaching. Subsequent to this result, the researchers also sought to correlate the teacher's competency with their motivation for using calculators in their teaching.
To achieve the aims of this study, the researchers sought to answer the overarching research question of how well Ghanaian junior high school mathematics teachers' competency in calculators correlated with their perceived motivation for using calculators in teaching. To this end, two sub-questions were asked. The first question was: "What is the current calculator competency level of JHS mathematics teachers in Ghana?" Here, the researchers sought to examine how well mathematics teachers performed on a test that was answered using the calculator. In doing so, the study determined whether teacher characteristics such as age, teaching experience, and category of the teacher might reveal differences between and among the mathematics teachers. The second question for this study was: "What perceived factors influenced JHS mathematics teachers in Ghana to use calculators during teaching?" On the basis of this question, the mathematics teachers' competency in calculators was correlated with their perceived motivation for using calculators in teaching.
Though a myriad of factors has been found to impact the use of technology in the teaching and learning of mathematics in Ghana, the focus has largely been tilted towards ICT and computers, leaving handheld calculators largely unexplored. Technology competency has been established by research as one of the significant factors that influence technology use in teaching and learning interactions (Agyei & Voogt, 2011; Pelgrum, 2001; Salani, 2013). Pelgrum (2001) bemoaned the impact of teachers' competency gap in technology on the success of educational innovations, claiming that success in vitalizing education through technological innovations depended on the technological competence of teachers. Apart from competency, other teacher characteristics such as age, teaching experience, gender, and educational qualification also directly influence the use and adoption of technology in general (Afshari, Bakar, Luan, Samah, & Fooi, 2009).
The Adopter-based theories (Surry, 1997) provided a theoretical anchor to this study. The adopter based (Instrumentalist) theory focuses on social conditions, human, and interpersonal dimensions necessary for innovation, diffusion, and use. The ultimate implementer of innovation in this theoretical framework is seen as a primary agent of change who implements the innovation in a practical setting. Tessmer (1990), for example, held the view that several factors accounted for the adoption or rejection of innovation in a technological context such as in instruction. According to Tessmer, the factors responsible for the adoption or rejection of innovations were hinged on the user of the innovation. Consequently, Tessmer argued that the level of the technical superiority of an innovation/product is not the only factor that influences its use. But, personal and interpersonal factors can play a more substantial role in the use of technology innovations than technological superiority (Surry & Farquhar, 1997).
Ghana's JHS mathematics teachers' use of calculators might be affected not only by the power and efficiency of the calculators themselves but also by human conditions classified as teacher characteristics. Schools, and for that matter government agencies overseeing the implementation of the JHS syllabus, could influence some of these characteristics, but not all. Conceptually, factors both related and unrelated to calculator technology influence teachers' use of calculators (Figure 1). Operationally, factors related to calculator technology are the combination of calculator knowledge and skills of JHS mathematics teachers, whereas unrelated factors are the non-manipulative teacher characteristics such as gender, age, teaching experience and ownership of calculators (Afshari, Bakar, Luan, Samah, & Fooi, 2009).
Research methods
A descriptive survey design was used to examine the competency level of Ghana's JHS mathematics teachers in the use of calculators. Kothari (2004) held the view that descriptive survey studies aim to describe the characteristics of a particular individual or of a group of individuals. Additionally, Ochanda and Indoshi (2011) explained that a descriptive survey is a fact-finding research design in educational research through which accurate and primary information regarding teachers' use of calculators can be obtained. Though different types of descriptive research methods exist (Hale, 2011; Jackson, 2015), in this study a survey method was undertaken in which mathematics teachers responded to a set of questionnaires and test items. In this way, and within a relatively short period of time, test scores on teachers' competence in using calculators and responses on their motivation for using calculators in teaching mathematics were collected concurrently from the junior high school mathematics teachers. Creswell and Creswell (2017) described the survey design as a non-experimental quantitative approach suitable for gathering a numeric description of the state of affairs of a population by studying the desired characteristics in a sample of that population. It therefore answers the question of 'what is?', which sits well with the research questions in this study.
A total of 97 JHS mathematics teachers drawn from an educational district in Ghana participated in the study. The participants included 46 student-teachers who had just completed a yearlong practicum at the JHS and 51 practicing mathematics teachers. The educational district had 59 JHS distributed in eight circuits. The number of mathematics teachers in each school varied from one to three depending on the number of streams. A stratified sampling (Alvi, 2016) was used to select respondents from five circuits since the teachers were already in their natural strata. Some practicing teachers declined to respond to the questionnaires because of the test component. Whereas some wanted to take the questionnaire home, others were frank to say that they could not use the calculators. Within a week of an intensive visit to schools, only 51 responded positively. Through snowballing, 46 student-teachers who taught JHS 1 mathematics for the entire practicum period also participated in this study.
The main instrument for the data collection comprised a set of questionnaires and a competency-based test developed and administered by the researchers. Since we could not access an existing validated instrument, the researchers, in consultation with a mathematics teacher educator in a College of Education in Ghana, designed a calculator competency-based test. The test items demanded two things from respondents. Firstly, the teachers were to provide answers to the questions with the aid of a calculator without doing any form of paper-and-pencil computation; secondly, they were to describe the steps to be followed in using a calculator to compute a task. In this way, the junior high school mathematics teachers could demonstrate both their skill and their understanding of the calculator operation and function keys. The choice of these two tests of competence in using the calculator is premised on the notion that a person's technological competence is not just about theoretical and factual knowledge of using calculators, but also about the ability to select and press the right keys and functions at the right step. The questionnaire component (Table 4) of the instrument was an 11-item four-point Likert scale questionnaire that sought to explore teachers' perceptions of what motivated them to use calculators in the teaching of mathematics. Responses on the four-point Likert scale ranged from one to four, representing 'strongly disagree' to 'strongly agree' respectively. A pilot test of the research instrument was carried out on 18 teachers who were pursuing degree programs (mathematics major), over a 4-week interval. The calculator use questionnaire yielded a Cronbach alpha reliability coefficient of 0.719, which was good for implementation (Tavakol & Dennick, 2011). With regard to the competency-based test, an internal consistency reliability test yielded a 0.806 Pearson correlation at the 0.01 (2-tailed) significance level (Liebe, Meyerhoff, & Hartje, 2012; Tavakol & Dennick, 2011). However, using the Escudero, Reyna, and Morales (2000) discrimination index and the marking scheme drawn up for the test, some items were modified and/or replaced based on the pilot test.
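As a concrete illustration of the reliability check described above, the sketch below shows how a Cronbach alpha coefficient of the kind reported (0.719) can be computed. The response matrix is hypothetical, standing in for the 18 pilot teachers; only the formula reflects the analysis described here.

```python
# Minimal sketch (not the authors' analysis code): Cronbach's alpha for an
# 11-item, four-point Likert questionnaire. Each row would be one pilot teacher.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, n_items) matrix of Likert responses."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
pilot = rng.integers(1, 5, size=(18, 11)).astype(float)  # hypothetical 18 x 11 responses
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")
```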
For instance, the problem "For the standard deviation, a student enters a given key sequence and gets the answer 6.63. Which other approaches could s/he have used?" was deleted because it had a discrimination index of zero; none of the teachers who participated in the pilot test attempted to answer this question. Whereas some claimed they had forgotten the algorithm, others were not sure of any other approach. Other questions, such as 1. Solve for x and y simultaneously using the calculator: 2x − y = 5 and 2x + y = 7; 2. Increase 250 by 25%; 3. How do you access the decimal functions on the calculator?, did not get the needed responses and had to be rephrased. In questions (a) and (b), the teachers were using the calculator only to verify their answers, as in Figure 2. In question (c), the teachers had misconstrued the question to mean the location of the decimal function key. However, during the interaction phase after the pilot test, the teachers suggested a re-framing of these questions which the researchers found relevant. Subsequently, questions (a), (b) and (c) were rephrased respectively as 1. Indicate systematically the keys that would lead to solving simultaneously: 2x − y = 5 and 2x + y = 7.
2. Indicate systematically the keys that would lead to increasing 250 by 25%; 3. After computation, how do you access a "decimal answer" on the calculator? In order to maintain the 16 items as before the pilot testing, the deleted question with a discrimination index of zero was replaced with a similarly framed question: 'What did a student do wrong in inputting 2^2 × 5 when computing 2^(3×5)?' The modified research instrument (that is, the instrument after pilot testing) was distributed to the mathematics teachers at their various schools, where they responded in the presence of the researcher. Even though three brands of calculators (CASIO FX-115ES Plus, HP 35s, CASIO fx-991ES PLUS) were made available, the teachers preferred the CASIO fx-991ES PLUS. Responses from the 11-item four-point Likert scale questionnaire were included in an exploratory factor analysis (principal axis factoring with varimax rotation) to identify possible perceived factors that motivated the mathematics teachers to use calculators in teaching. The sample size of 97 fell within Tabachnick, Fidell, and Ullman's (2007) 1-to-10 criterion (about eight persons per variable).
Permission to conduct the study was sought from head teachers. Once permission was granted by the head teacher, the purpose and procedure for the study were explained to the mathematics teacher(s) who either agreed or declined to participate without coaxing. Data assembled was analyzed descriptively using means and standard deviations. Analysis of variances for equality of means within and between groups was performed. Effect size statistics were determined based on Cohen's d benchmark (as cited in Agyei & Voogt, 2015).
Results and discussion
In the following analysis, we defined the independent variables as follows. To begin with, the gender of a teacher was recorded as either male or female. Teachers' teaching experience, with respect to the number of years of teaching basic school mathematics, was classified into three categories: novice teachers had at most three years of teaching, intermediate teachers had taught for between 4 and 10 years, and expert teachers had at least 11 years of teaching JHS mathematics.
Furthermore, the maturity of the teachers with respect to their age was defined as either a young adult (at most 24 years old) or an older adult (at least 25 years old). This classification is in agreement with standards acceptable for admission into institutions of higher studies in Ghana such as GIMPA (2019). In addition, the teacher type/category referred to the status of the teacher as either a student-teacher or a practicing teacher. During a familiarisation visit to some of the junior high schools in the study area prior to data collection, we found that some students from a college of education were undertaking their practicum, hence the inclusion of teacher-trainees in the study. Lastly, teachers' highest professional qualification for teaching mathematics at the JHS was described by their academic attainment: secondary school certificate holders, diploma holders, and degree holders.
Research has shown conflicting results on the relationship between these independent variables and mathematics teachers' perceived competence in the use of technological tools such as the calculator. For instance, Salani (2013) observed that mathematics teachers differed in their perceived competence in the use of calculators by gender and teaching experience. With the calculator as a technological tool, Agyemang and Mereku (2015) and Salani (2013) also held that, apart from gender, a mathematics teacher's use of technology in Ghana was not significantly influenced by age/maturity or teaching experience. Along the same lines, Agyei and Voogt (2011) claimed that practicing teachers were superior to pre-service/student teachers in technology use competency.
Each item on the competency test was scored on a scale of 0 to 2: a correctly answered question attracted 2 marks, a partially correct answer 1 mark, and a wrong solution or an unanswered question 0 marks. Thus, the maximum possible score was 32, since there were 16 questions. Using the median score of 16, we decided, without recourse to any literature, to categorize the score obtained by a mathematics teacher on the competency-based test as low or high. On that basis, a teacher was classified as either a low scorer or a high scorer: low scorers obtained a score of 16 or lower and high scorers obtained a score of 17 or better on the test. Approximately 70% (M = 10.16; SD = 4.363; N = 68) of the teachers obtained a low score, and approximately 30% (M = 19.38; SD = 4.248; N = 29) obtained a high score. The scores were also normally distributed among the teachers (Shapiro-Wilk test of significance = .145). Considering the descriptive statistics, it could be inferred that the teachers in the study underperformed on the test. An independent-samples t-test on the difference in performance was statistically significant, t(87.11) = -13.131, p < .001, two-tailed, with a very large effect size (eta squared = .544). A 99% confidence interval on the difference was (-11.066 to -7.369). These statistics suggest that, in general and with respect to the study area, junior high school mathematics teachers' knowledge in the use of calculators is low. Above all, the test was sufficient to disaggregate junior high school mathematics teachers according to their competency in calculator technology.
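The sketch below illustrates the kind of comparison reported above: an independent-samples (Welch) t-test between low and high scorers with an eta-squared effect size. The two score vectors are simulated from the reported means and standard deviations and are not the study data; the eta-squared formula used is the common approximation based on the t statistic.

```python
# Illustrative sketch only: Welch t-test plus eta-squared effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
low = rng.normal(10.16, 4.363, size=68)   # hypothetical low-scorer group
high = rng.normal(19.38, 4.248, size=29)  # hypothetical high-scorer group

t, p = stats.ttest_ind(low, high, equal_var=False)  # Welch correction

# Common approximation: eta^2 = t^2 / (t^2 + df), with df = n1 + n2 - 2.
df = len(low) + len(high) - 2
eta_sq = t**2 / (t**2 + df)
print(f"t = {t:.3f}, p = {p:.4g}, eta^2 = {eta_sq:.3f}")
```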
A Pearson product-moment correlation was computed to test for possible associations between teachers' calculator competency scores and the independent variables identified. Further associations were tested among the independent variables for which linear relationships (Pallant & Manual, 2007) were established. For example, a weak negative correlation was found between a teacher's calculator competency score and academic attainment, r = -.28, n = 97, p < .005. Among the predictor variables, teaching experience correlated: (1) strongly and positively with academic qualification (r = .68, n = 97, p < .005); (2) moderately strongly and positively with age (r = .42, n = 97, p < .005); (3) but negatively with teacher type (r = -.54, n = 97, p < .005). Age showed a large positive association with academic qualification (r = .57, n = 97, p < .005) and a large negative association with teacher type (r = -.56, n = 97, p < .005). A very high negative association between academic qualification and teacher type (r = -.84, n = 97, p < .005) was recorded. The correlations among the variables ranged from negative to positive and from small to large, but some correlations were not significant (Taylor, 1990). Table 1 shows how the mathematics teachers differed in their calculator competence. The mathematics teachers were categorized according to five teacher characteristics with respect to their level of calculator competence. Expert teachers were not statistically different (F(1, 98.75) = 3.355, p = .164) in their mean score on the competency-based test. That is, apart from the sub-category of experts, all other ways of categorizing the teachers depicted a sharp difference in their performance on the competency test. The gap between high competency and low competency was glaringly obvious.
A further analysis (Table 2) of competence showed significant differences within teacher type (F(1, 95) = 8.355, p = .005) and academic qualification (F(1, 94) = 4.135, p = .019). The difference in teacher type was in favour of student teachers (M = 14.63, SD = 4.668), with a relatively small effect (8.1%) on competency. Unlike Agyei and Voogt (2011), who found that practicing teachers were more competent technologically than student teachers, this study found that student teachers were more competent in the use of calculators than practicing teachers. On academic qualification, a Tukey HSD post hoc test at the 95% confidence level showed a significant difference (p = 0.019) between secondary school certificate holders and degree holders, in favour of the former. The effect size was, however, low (.081). It was envisaged that holders of higher education certificates in mathematics would have been more competent, but that was not the case. Examining the data revealed that of the 20 mathematics teachers with degree certificates, 45% (9) had pursued a Bachelor of Education, B.Ed (Mathematics), and the remaining 55% (11) held bachelor's degrees in other subjects such as social studies. What the researchers could not confirm was the mode of the degree program (distance or regular) and the content of study therein. Among the 28 diploma holders, 10 held a Diploma in Basic Education, DBE (mathematics major), while the remaining 18 had a DBE in subjects other than mathematics. The secondary school certificate holders were all 2019 final-year students pursuing a DBE with mathematics as a major subject of study. These student teachers had gone through a methods-of-teaching basic school mathematics course barely a year earlier, during which they were trained in how to teach with calculators. The high competency level of these student teachers in the use of calculators could be because their calculator knowledge was still fresh in memory and/or they did indeed use the calculators during their teaching practice. All the student teachers did their teaching practice in JHS one (1) classes (grade 7), where introduction to calculators is a topic in the JHS mathematics syllabus in Ghana (Ministry of Education, 2007).
On the issue of maturity, which was simplified as age, Table 2 reveals that there was no significant difference in competency level by age. Furthermore, a series of 10 three-way analyses of variance (ANOVA) at the 99% confidence level were computed to explore how the JHS mathematics teachers from an educational district in Ghana differed in their calculator competency. No statistically significant differences were observed among the interactions and main effects (Table 3). The interactions were sufficient to control the univariate main effect observed for teacher type.
Additional analysis of variance on the impact of maturity and academic attainment on calculator competency yielded no statistically significant interaction (F(1, 92) = .143, p = .706). The interaction effect between gender and academic attainment on the calculator competence of teachers was not statistically significant (F(2, 91) = 1.260, p = .265), and there was equally no statistically significant main effect for gender (F(1, 92) = 1.004, p = .319). Another analysis of variance on the impact of gender and teaching experience on the calculator competence of teachers yielded no statistically significant interaction (F(1, 92) = .589, p = .445) at .01 (Pallant & Manual, 2007), and again no statistically significant main effect for gender (F(1, 92) = 2.844, p = .095). Descriptive data analysis revealed that young adult mathematics teachers were neither intermediate nor expert teachers; all young adults were novices in the teaching of JHS mathematics. In addition, a between-groups analysis of variance indicated that the mean difference in teachers' calculator competency was not statistically significant given the interaction effect of academic attainment and teaching experience of JHS mathematics teachers (F(1, 97) = .056, p = .814). In relation to finding possible perceived factors that motivated the mathematics teachers to use calculators in teaching, the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.705, above the recommended value of 0.6; Bartlett's test of sphericity was significant (χ²(55) = 275.832, p < .05); and communalities were beyond 0.03. The Kaiser-Guttman eigenvalue rule (at least 1.0) yielded three factors that explained approximately 57% of the variance in mathematics teachers' perceptions. These three factors were also observed on inspection of the scree plot. After some reliability analysis, nine items that met the factor loading criteria were retained (see Table 4). The alphas ranged from a low of 0.559 for enthusiasm, through a moderate 0.676 for lack of anxiety, to a high of 0.809 for the instructional tool.
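The exploratory factor analysis pipeline described above can be reproduced along the following lines. This is an illustrative sketch, not the study's script: it assumes the third-party factor_analyzer package, and the 97 x 11 response matrix is simulated.

```python
# Sketch of KMO, Bartlett's sphericity, and principal axis factoring with
# varimax rotation over an 11-item Likert questionnaire (synthetic data).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(2)
responses = pd.DataFrame(rng.integers(1, 5, size=(97, 11)),
                         columns=[f"item_{i+1}" for i in range(11)])

chi2, p = calculate_bartlett_sphericity(responses)   # sphericity check
_, kmo_total = calculate_kmo(responses)              # sampling adequacy
print(f"Bartlett chi2 = {chi2:.2f} (p = {p:.3f}), KMO = {kmo_total:.3f}")

# Retain three factors, as suggested by the eigenvalue rule / scree plot.
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(responses)
print(pd.DataFrame(fa.loadings_, index=responses.columns))   # factor loadings
print("Cumulative variance explained:", fa.get_factor_variance()[2])
```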
Descriptive statistics on the composite scores for each of the three factors were based on the mean (Table 5). There was no statistically significant difference within each of the subscales with respect to the mathematics teachers' level of calculator competence. Thus, irrespective of mathematics teachers' perceptions of the motivation behind their use of calculators in teaching, their competency did not differ.
Investigating the association between mathematics teachers' level of competence and these three subscales revealed a direct and significant relationship with enthusiasm, albeit a small one. However, the associations between calculator competence and teachers' use of calculators in teaching arising from their lack of anxiety on the one hand, and from their view of calculators as instructional tools on the other, were not significant.
Conclusion
From the results, the competency level of JHS mathematics teachers in this educational district of Ghana is low. The relatively high competence among the student teachers could be attributed to their continued use of calculators for examinations; discontinuing the use of calculators, as with any other technological tool, would cause knowledge to lapse and interest to decline. Practicing teachers' low competence is consistent with this hypothesis, since they hardly use calculators in their teaching. Barring JHS students from using calculators at examinations, particularly at the BECE, discourages teachers from encouraging their students to use calculators in learning mathematics despite the merits described in the literature. JHS mathematics teachers' low calculator competency is hidden by this official position. But how long can teachers continue to hide? The time is coming, and indeed is already here, when technology will drive classroom instruction. Teachers' low competency can, in turn, lead to low confidence, discomfort and low enthusiasm in using calculators for teaching, and this was evident during data collection. The concern is that, given their discomfort with calculators, teachers will most likely impede students' use of calculators, which according to Seeley (2006) is unacceptable in a technological learning environment. If stakeholders are really bent on getting students to use calculators and computers in solving mathematical problems, mathematics teachers should be supported through in-service and professional training in order to build up their knowledge, because teachers set classroom interaction agendas.
The study was limited to one educational district in Ghana and captured the views and abilities of only 97 JHS mathematics teachers. The observations here may not support nationwide generalization; however, districts and regions with similar characteristics may exhibit similar patterns. It is expected that subsequent studies will conduct follow-up inquiries to validate these findings on a larger sample. Additionally, the researchers recommend an in-depth study of why student teachers could outperform practicing teachers in the use of calculators. | 2020-03-05T11:10:59.035Z | 2020-02-28T00:00:00.000 | {
"year": 2020,
"sha1": "22ee0984fa43823319726e58d6ef02f3ab62352f",
"oa_license": "CCBY",
"oa_url": "https://journals.ums.ac.id/index.php/jramathedu/article/download/9523/5377",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6d3447747e170c73b74d50bf66a450e652bd1f86",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
213437614 | pes2o/s2orc | v3-fos-license | Laparoscopic Heller Esophagomyotomy is Safe and Effective in Rural East Africa
Background: The incidence of achalasia in sub-Saharan Africa is not known. Experience in our region suggests the disorder affects mainly younger patients. Esophagomyotomy is the gold standard treatment for achalasia. The benefits of laparoscopic treatment are well documented in western populations, but African data are insufficient. Methods: A retrospective review of patients over 16 years of age who underwent esophagomyotomy at Tenwek Hospital (2008–2017). The primary outcome was improvement in symptoms before and after surgery as measured by the Eckardt score—lower scores for dysphagia, regurgitation, pain, and weight loss indicate less severe symptoms. The secondary outcomes were duration of surgery, length of myotomy, length of hospital stay, and complication rate. Data analysis used ANOVA. Results: 54 patients were included: 28 with laparoscopic Heller myotomy (LHM) and 26 with open Heller myotomy (OHM). LHM patients were younger than OHM patients (p<0.05). Patients who had LHM had lower postoperative Eckardt scores (p<0.05). Duration of surgery for LHM was longer (p<0.001) than for OHM, the conversion rate was 10.71%, and the length of the myotomy did not differ. No difference was seen in hospital stay or complication rate between the two groups. Conclusion: LHM is effective and safe in a rural East African setting, with excellent functional outcomes compared with open techniques. Thus, laparoscopy can be feasible worldwide.
Introduction
Esophagomyotomy (Heller myotomy) is described as the gold standard treatment for esophageal achalasia (1,2). Achalasia is a neurodegenerative motility disorder of the esophagus, resulting in deranged peristalsis and loss of lower esophageal sphincter function. The etiology is unclear but likely to be multifactorial (3). Treatment of achalasia involves myotomy and has been commonly managed with laparoscopic Heller myotomy (LHM). Achalasia is a rare primary esophageal motility disorder that occurs with equal distribution, irrespective of gender and race. The incidence of achalasia in western populations is as low as 0.5-1/100,000 people and accounts for 3-20% of cases among esophageal diseases (4). The incidence of achalasia in sub-Saharan Africa has not been established; a few studies exist, mainly within the northern and western Africa regions (5,6), but studies are lacking in East and Central Africa. Therefore, we sought to evaluate the efficacy and safety of LHM in our population. We undertook this 9-year retrospective review to compare LHM with open Heller myotomy (OHM).
Patients, setting, and surgical technique
The study was conducted at Tenwek Hospital, a rural hospital in western Kenya in the south rift region. The research and ethics review committee at Tenwek Hospital gave approval to proceed with the study, after which all patients included in the study were informed and gave written consent for use of any data and photos. We retrospectively reviewed the records of all patients who had a diagnosis of achalasia and who underwent LHM or OHM between 2008 and 2017. The age of the patients in the series was 16-74 years. For each patient included, the diagnosis had been established by barium swallow esophagram; an endoscopic evaluation excluded other pathology. Exclusion criteria were: 1) age <16 years - the focus was on adult patients, and in the data collected no paediatric patient files were found with a diagnosis of achalasia; 2) endoscopic dilation as therapy for achalasia before myotomy. For the procedure, the patient is under general anaesthesia in the low lithotomy position, with the surgeon positioned caudally and the assistants on the right and left of the patient. The patient's abdomen is insufflated with carbon dioxide, and laparoscopic instruments are introduced via five keyhole incisions (1-2 cm) made on the abdomen. The lower oesophagus is approached and a lengthwise cut is made in the muscle layer. Care is taken to cut only the muscle layer, leaving the inner lining of the oesophagus intact. As this procedure is often complicated by acid reflux after the operation, a partial or complete fundoplication is also performed at the same time to minimize this. The Dor partial fundoplication was the anti-reflux procedure performed for all patients. Participating surgeons prefer this method to avoid angulation of the distal esophagus and because it requires less extensive dissection than total fundoplication (7,8). Most participating surgeons routinely use intraoperative esophagogastroduodenoscopy (EGD) during LHM, after completion of the myotomy.
Outcome measures
The primary outcome was improvement from pre- to postoperative symptoms, measured by the Eckardt Symptom Score (ESS). The ESS is the grading system most frequently used to evaluate the symptoms and stages of achalasia and the efficacy of treatment. It attributes points from 0 to 3 to each of the symptoms of the disease (dysphagia, regurgitation, chest pain, and weight loss), giving a cumulative score that ranges from 0 to 12 (Table 1). Scores of 0-1 correspond to clinical stage 0, 2-3 to stage I, 4-6 to stage II, and a score >6 to stage III (Table 2) (9,10). Secondary outcomes included duration of surgery, length of myotomy, hospital length of stay, and complication rate. Possible complications included mucosal perforation, inadvertent breach of the pleural cavity, and unplanned takeback. Data on intraoperative revision of the myotomy after EGD were not available for analysis.
Data collection
Patient charts from 2008-2014 were reviewed via paper medical records, while those from 2014-2017 were accessed via the hospital electronic medical record system. Files of patients who underwent esophagomyotomy for a diagnosis of achalasia between 2008 and 2017 were retrieved and reviewed. Patients younger than 16 years were excluded, as were patients who had undergone endoscopic dilation as therapy for achalasia before myotomy. Patients had been evaluated in the surgical outpatient clinic; they completed a questionnaire based on the ESS to evaluate the current symptoms they were experiencing. In our setting, data were collected with the help of clinical translators who had the questionnaire explained to them beforehand, and data were entered in real time at the time of the interviews. The surgeon selected the patients; the selection criteria were based on the availability and safe functionality of the laparoscopy equipment. Patients who did not undergo the laparoscopic procedure were scheduled for open Heller esophagomyotomy. Preoperative measurements consisted of identifying common symptoms associated with the disorder and then quantifying them as scores, using a self-designed questionnaire. Symptoms were scored according to the frequency of dysphagia, regurgitation and retrosternal pain, and the quantity of weight lost, each on a scale of 0 to 3, giving a possible cumulative total of between 0 and 12 (Tables 1 and 2). The result was the Eckardt score. As an example, a patient with occasional dysphagia, daily regurgitation, occasional retrosternal pain and 6 kg of weight loss would have an Eckardt score of 6. Postoperative symptomatology was assessed using the same parameters to calculate the post-treatment Eckardt score and correlate this score with the clinical stage. Standardized questionnaires were completed during clinic visits or through a telephone interview with the patients, between 6 and 24 months postoperatively. The questionnaires also contained information about preoperative investigations, i.e. CXR, barium swallow and CT scans.
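To make the scoring concrete, the sketch below computes an Eckardt score and the corresponding clinical stage. Tables 1 and 2 are not reproduced in this text, so the per-symptom categories are taken from the standard published Eckardt system and should be checked against the study's own tables; the worked example from the paragraph above (occasional dysphagia, daily regurgitation, occasional retrosternal pain, 6 kg weight loss) gives a score of 6.

```python
# Minimal sketch of Eckardt scoring; category cut-offs assume the standard
# Eckardt definitions (0 = none ... 3 = with each meal; weight loss: 0 = none,
# 1 = <5 kg, 2 = 5-10 kg, 3 = >10 kg), not the study's unpublished tables.
FREQ = {"none": 0, "occasional": 1, "daily": 2, "each meal": 3}

def weight_loss_points(kg: float) -> int:
    if kg <= 0: return 0
    if kg < 5: return 1
    if kg <= 10: return 2
    return 3

def eckardt(dysphagia: str, regurgitation: str, chest_pain: str, weight_loss_kg: float) -> int:
    return FREQ[dysphagia] + FREQ[regurgitation] + FREQ[chest_pain] + weight_loss_points(weight_loss_kg)

def clinical_stage(score: int) -> str:
    if score <= 1: return "0"
    if score <= 3: return "I"
    if score <= 6: return "II"
    return "III"

s = eckardt("occasional", "daily", "occasional", 6)
print(s, clinical_stage(s))  # 6 II
```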
Statistical analysis
Groups were compared using chi-square for categorical data, and Student t-test and Mann-Whitney test for continuous data. ANOVA was used to analyse the interaction between use of intraoperative EGD and postoperative ESS.
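The kinds of group comparisons listed above can be run with standard library calls; the sketch below is illustrative only and is not the study's analysis script. The LHM/OHM score vectors and the categorical table are synthetic placeholders.

```python
# Illustrative sketch: chi-square for categorical data, t-test and
# Mann-Whitney U for continuous data, using SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
lhm_scores = rng.integers(0, 5, size=28)   # hypothetical postoperative ESS, LHM
ohm_scores = rng.integers(0, 7, size=26)   # hypothetical postoperative ESS, OHM

print(stats.ttest_ind(lhm_scores, ohm_scores))      # Student t-test
print(stats.mannwhitneyu(lhm_scores, ohm_scores))   # Mann-Whitney U

# Categorical data, e.g. complication vs. no complication by group.
table = np.array([[2, 26],    # hypothetical [events, non-events] for LHM
                  [2, 24]])   # and for OHM
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```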
Results
Fifty-four patients met the inclusion criteria: 28 had undergone LHM and 26 OHM.
Secondary outcomes
Duration of surgery for LHM was longer than for OHM (p<0.001), with a 10.71% conversion rate, but the length of the myotomy did not differ. No statistically significant difference was observed in hospital length of stay, with LHM being slightly shorter than OHM, or in complication rate, which was 7.14% overall. No mortalities or re-admissions were reported. Table 4 shows these results. Complications included two mucosal perforations, an inadvertent breach of the pleural cavity, and an unplanned takeback. Blood loss for the procedure was approximately 10 cc.
Discussion
In treating achalasia, the focus is on alleviating symptoms rather than on addressing the etiology, a neurodegenerative pathology that is irreversible. The most common primary symptom on presentation remains dysphagia, followed by pain and regurgitation (9). The frequency of symptoms in our patient population was consistent with that reported in north and west Africa, where similar studies have been conducted (5,6), and was also comparable with data from industrialized countries (2,3,11,15). The ESS is a fair measure of achalasia symptom severity; its strengths lie in assessing dysphagia and regurgitation, factors that consistently performed well when assessed for reliability and validity. The apparent weaknesses of the ESS lie in chest pain, due to its heterogeneous nature, and weight loss, which may be multifactorial in origin (9). Our study shows that Heller myotomy affords significant relief of the symptoms of dysphagia, regurgitation and chest pain, and total symptom scores improved (7). With the significant improvement in patient symptoms after LHM, we can infer a better quality of life, as described by Decker et al. (11). This single-centre retrospective review provides a unique analysis of surgical treatment of achalasia in a rural African setting with resource limitations. The patient population of this study also differs from that presented in western data: a higher likelihood of advanced pathology such as sigmoid esophagus than in western populations, coupled with limitations in diagnostic capabilities (lack of manometry), introduces the potential for poor outcomes.
The challenges of carrying out laparoscopy in a rural setting extend beyond just having the right equipment. Issues encountered in our setting include unreliable electricity, a lack of operative support personnel to assist in troubleshooting problems that occur intraoperatively, supply chain issues, and patients who have limited resources to fund the procedure and to allow appropriate follow-up. While some of these factors may not affect the primary outcome substantially, they certainly could be reflected in the complication rate and the need to convert to open surgery. A combination of these factors may even render the procedure unfeasible in our setting. These factors can also have a significant impact on cost and on further implementation of the procedure. While patients undergoing LHM were significantly younger than those undergoing OHM, the mean age for both cohorts is dramatically younger than in previously published data on western populations (12). This younger age reflects a possible difference in disease aetiology, population characteristics or risk factors and warrants further study. With the advent of minimally invasive procedures, reduced complication rates and decreased hospital length of stay have been demonstrated among primarily western patient populations (7). In this study, we aimed to demonstrate that LHM does not have inferior outcomes to OHM, which has been performed for longer in most parts of Africa and indeed in Kenya. Symptom relief may also be greater with LHM, which showed a trend toward shorter hospital length of stay. It is not surprising that the duration of surgery was longer for the laparoscopic cases, which we attribute to the challenges faced in a resource-limited environment coupled with the procedure's steep learning curve. The surgeons who performed the procedures had previous experience in a western setting, so the true learning curve that accompanies a de novo introduction of LHM would not be reflected in this study. Laparoscopy was introduced at the facility in 2009 and its use gradually increased to become the main treatment modality for achalasia.
Previously published work notes that this learning curve typically plateaus after 20 esophagomyotomies, at which point the surgeon has accumulated sufficient experience and confidence (13). Anecdotally, technical issues that may have contributed to the duration of the procedure included frequent malfunction of the equipment with difficulty in troubleshooting, thus prolonging operative time. Our conversion rate of 10.71% and low complication rate agree with other literature, indicating the feasibility of LHM in our setting (1,14). The limitations of this study include its retrospective nature, particularly in deriving preoperative Eckardt scores from data in patient charts. In addition, variability in the level of proficiency of operating surgeons was not evaluated in this study. Within a teaching institution such as a rural hospital in Kenya, the case could be performed by an attending surgeon or by a surgical resident under the supervision of the attending surgeon; however, which individual performed critical portions of the case is not routinely recorded in the medical record. Patient follow-up was poor, as most of the patients did not attend follow-up clinics within the recommended time post-procedure. Some did not provide a telephone contact, which in turn prolonged the time to complete the questionnaire. An additional limitation is the lack of documentation on the choice of laparoscopy versus an open procedure. The initial thought was that this decision was based on the availability of laparoscopy services during the study period (i.e. availability of working equipment and CO2). However, a notable difference in baseline characteristics was that older patients were more likely to undergo an open procedure, owing to patient or surgeon preference, and this warrants further investigation. Our analyses were limited by our small sample size, so while the study provides an initial snapshot of LHM within an East African population, further studies with larger sample sizes would be beneficial. A prospective arm of the study is being considered to evaluate the changes, if any, brought about by introducing laparoscopy in our setting.
Conclusion
We demonstrate that laparoscopic Heller esophagomyotomy is effective and safe in a rural East African setting, with excellent functional outcomes compared with open techniques, despite several limitations ranging from limited availability of advanced diagnostic modalities (manometry), advanced pathology at presentation and lack of patient funds to sub-optimal patient follow-up. Our findings contribute to growing evidence that laparoscopy can be feasible worldwide. | 2020-02-13T09:20:15.135Z | 2020-02-07T00:00:00.000 | {
"year": 2020,
"sha1": "2d46c7f7762d8bc6f7281e31bb26399a2f739f10",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/aas/article/download/192992/182117",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "13d112894122a2373eb92b2867a0187db1a7c811",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257365130 | pes2o/s2orc | v3-fos-license | A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single/Multi-Labeled Text Classification
Deep neural networks based on layer-stacking architectures have historically suffered from poor inherent interpretability. Meanwhile, symbolic probabilistic models function with clear interpretability, but how to combine them with neural networks to enhance their performance remains to be explored. In this paper, we try to marry these two systems for text classification via a structured language model. We propose a Symbolic-Neural model that can learn to explicitly predict class labels of text spans from a constituency tree without requiring any access to span-level gold labels. As the structured language model learns to predict constituency trees in a self-supervised manner, only raw texts and sentence-level labels are required as training data, which makes it essentially a general constituent-level self-interpretable classification model. Our experiments demonstrate that our approach could achieve good prediction accuracy in downstream tasks. Meanwhile, the predicted span labels are consistent with human rationales to a certain degree.
INTRODUCTION
Lack of interpretability is an intrinsic problem in deep neural networks based on layer-stacking for text classification. Many methods have been proposed to provide posthoc explanations for neural networks (Lipton, 2018;Lundberg & Lee, 2017;Sundararajan et al., 2017). However, these methods have multiple drawbacks. First, there is only word-level attribution but no high-level attribution such as those over phrases and clauses. Take sentiment analysis as an example, in addition to the ability to recognize the sentiment of sentences, an ideal interpretable model should be able to identify the sentiment and polarity reversal at the levels of words, phrases, and clauses. Secondly, as argued by Rudin (2019), models should be inherently interpretable rather than explained by a posthoc model.
A widely accepted property of natural languages is that "the meaning of a whole is a function of the meanings of the parts and of the way they are syntactically combined" (Partee, 1995). Compared with the sequential outputs of layer-stacked model architectures, syntactic tree structures naturally capture features at various levels because each node in a tree represents a constituent span. Such a characteristic motivates us to ask whether the representations of these internal nodes could be leveraged to design an inherently constituent-level interpretable model. One challenge faced by this idea is that traditional syntactic parsers require supervised training and have degraded performance on out-of-domain data. Fortunately, with the development of structured language models (Tu et al., 2013; Maillard et al., 2017; Choi et al., 2018; Kim et al., 2019), we are now able to learn hierarchical syntactic structures in an unsupervised manner from any raw text. In this paper, we propose a general self-interpretable text classification model that can learn to predict span-level labels unsupervisedly, as shown in Figure 1. Specifically, we propose a novel label extraction framework based on a simple inductive bias for inference. During training, we maximize the summed probability of all potential trees whose extracted labels are consistent with a gold label set via dynamic programming with linear complexity. By using a structured language model as the backbone, we are able to leverage the internal representations of constituent spans as symbolic interfaces, based on which we build transition functions for the dynamic programming algorithm.
The main contribution of this work is that we propose a Symbolic-Neural model, a simple but general model architecture for text classification, which has three advantages: 1. Our model has both competitive prediction accuracy and self-interpretability, whose rationales are explicitly reflected on the label probabilities of each constituent. 2. Our model can learn to predict span-level labels without requiring any access to span-level gold labels. 3. It handles both single-label and multi-label text classification tasks in a unified way instead of transferring the latter ones into binary classification problems (Read et al., 2011) in conventional methods.
To the best of our knowledge, we are the first to propose a general constituent-level self-interpretable classification model with good performance on downstream task performance. Our experiment shows that the span-level attribution is consistent with human rationales to a certain extent. We argue such characteristics of our model could be valuable in various application scenarios like data mining, NLU systems, prediction explanation, etc, and we discuss some of them in our experiments.
ESSENTIAL PROPERTIES OF STRUCTURED LANGUAGE MODELS
Structured language models combine the powerful representations of neural networks with syntactic structures. Though many structured language models have been proposed (Kim et al., 2019; Drozdov et al., 2019; Shen et al., 2021), three prerequisites need to be met before a model is selected as the backbone of our method. Firstly, it should be able to learn reasonable syntactic structures in an unsupervised manner. Secondly, it should compute an intermediate representation for each constituency node. Thirdly, it should have a pretraining mechanism to improve representation quality. Since Fast-R2D2 (Hu et al., 2022) satisfies all of the above conditions and also has good inference speed, we choose Fast-R2D2 as our backbone.
FAST-R2D2
Overall, Fast-R2D2 is a type of structured language model that takes raw texts as input and outputs corresponding binary parsing trees along with node representations, as shown in Figure 3(a). The representation $e_{i,j}$ of a text span from the $i$-th to the $j$-th word is computed recursively from its child node representations via a shared composition function, i.e., $e_{i,j} = f(e_{i,k}, e_{k+1,j})$, where $k$ is the split point given by the parser and $f(\cdot)$ is an $n$-layered Transformer encoder. When $i = j$, $e_{i,j}$ is initialized as the embedding of the corresponding input token.
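To make the recursive composition concrete, the following is a minimal sketch rather than the released Fast-R2D2 code: the Node class, the tiny dimensions, and the `compose` module are placeholders standing in for the shared n-layer Transformer composition function.

```python
# Sketch of the bottom-up composition e_{i,j} = f(e_{i,k}, e_{k+1,j}) over a
# given binary parse tree.
import torch
import torch.nn as nn

class Node:
    def __init__(self, i, j, left=None, right=None):
        self.i, self.j, self.left, self.right = i, j, left, right
        self.e = None  # span representation e_{i,j}

dim = 8
compose = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
embed = nn.Embedding(100, dim)  # token embeddings for the leaves

def encode(node: Node, token_ids: torch.Tensor) -> torch.Tensor:
    if node.left is None:                     # leaf: e_{i,i} = token embedding
        node.e = embed(token_ids[node.i])
    else:                                     # internal node: compose child spans
        left = encode(node.left, token_ids)
        right = encode(node.right, token_ids)
        node.e = compose(torch.cat([left, right], dim=-1))
    return node.e

# Example: the tree ((w0 w1) w2) over a 3-token sentence.
tree = Node(0, 2, Node(0, 1, Node(0, 0), Node(1, 1)), Node(2, 2))
root = encode(tree, torch.tensor([5, 17, 42]))
print(root.shape)  # torch.Size([8])
```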
MODEL
There are two basic components in the Symbolic-Neural model: 1. A Structured LM backbone which is used to parse a sentence to a binary tree with node representations. 2. An MLP which is used to estimate the label distribution from the node representation.
For Structured LMs that follow a bottom-up hierarchical encoding process (such as our default LM Fast-R2D2), context outside a span is invisible to the span, which may make low-level short spans unable to predict correct labels because of a lack of information. So we introduce an optional module to allow information to flow in parse trees from top to down.
The overall idea is to construct a top-down process to fuse information from both inside and outside of spans. For a given span $(i, j)$, we denote the top-down representation as $\bar{e}_{i,j}$. We use a Transformer as the top-down encoder function $\bar{f}$. The top-down encoding process starts from the root and is applied recursively to the child nodes. For the root node, we have $[\cdot, \bar{e}_{1,n}] = \bar{f}([e_{root}, e_{1,n}])$, where $e_{root}$ is the embedding of the special token [ROOT] and $n$ is the sentence length. Once the top-down representation $\bar{e}_{i,j}$ is ready, we compute its child representations recursively via $[\cdot, \bar{e}_{i,k}, \bar{e}_{k+1,j}] = \bar{f}([\bar{e}_{i,j}, e_{i,k}, e_{k+1,j}])$, as illustrated in Figure 2. Role embeddings are added to the corresponding inputs.
We denote the parameters of the model as Ψ, the parameters used in the Structured LM as Φ and the parameters used in the MLP layer and the top-down encoder as Θ. Thus Ψ = {Φ, Θ}.
LABEL EXTRACTION FRAMEWORK FOR INFERENCE
During inference, we first use Fast-R2D2 to produce a parsing tree, then predict the label of each node in the parse tree and output a final label set by the yield function introduced below.
Inductive bias. Through observing cases in single/multi-label classification tasks, we propose an inductive bias that a constituent in a text corresponds to at most one label. As constituents can be seen as nodes in a binary parsing tree, we can associate the nodes with labels. Nodes with multiple labels can be handled by assigning labels to non-overlapping child nodes. Please note such an inductive bias is not applicable for special cases in which a minimal semantic constituent of a text is associated with multiple labels, e.g., the movie "Titanic" could be labeled with both 'disaster' and 'love'. However, we argue that such cases are rare because our inductive bias works well on most single/multi-label tasks, as demonstrated in our experiments. Label Tree. A label tree is obtained from a parsing tree by associating each node with a label. A label tree example is illustrated in Figure 3(b). During inference, we predict a probability distribution over labels for each node and pick the label with the highest probability. To estimate the label distribution, we have $P_\Psi(\cdot|n_{i,j}) = \mathrm{softmax}(\mathrm{MLP}(e_{i,j}))$. Please note that if the top-down encoder is enabled, we replace $e_{i,j}$ with the top-down representation $\bar{e}_{i,j}$.
Algorithm 1 Definition of Yield function
1: function YIELD(t̃)
2:   R ← ∅                          ▷ The set of gathered task labels
3:   q ← [t̃.root]                   ▷ The list of nodes to visit
4:   while len(q) > 0 do
5:     n_ij ← q.pop(0)
6:     if n_ij.label == φ_NT then
7:       if not n_ij.is_leaf then
8:         q.append(n_ij.left)
9:         q.append(n_ij.right)
10:    else if n_ij.label ≠ φ_T then
11:      R ← R ∪ {n_ij.label}       ▷ Stop here and gather the terminal label
12:  return R
Yield function. We design a yield function that traverses a label tree in a top-down manner and extracts labels. For brevity, we write Y for the yield function. We divide the labels into two categories, terminal labels and non-terminal labels, which indicate whether Y should stop or continue, respectively, when it reaches a node. Considering that some nodes may not be associated with any task-defined label, we introduce empty labels denoted as φ_T and φ_NT for terminal and non-terminal ones respectively. For simplicity, we do not discuss nesting cases in this paper, so there is only one non-terminal label, φ_NT, and all task-defined labels are terminal labels. However, our method can be naturally extended to handle nesting cases by allowing non-terminal labels to be associated with task labels. As defined by the pseudo-code in Algorithm 1, Y traverses a label tree from top to down, starting at the root; when it sees φ_NT, it continues to traverse the node's children; otherwise, when it sees a terminal label, it stops and gathers the task-defined terminal label of the node (empty terminal labels φ_T are discarded). Figure 3 illustrates how Y traverses the label tree and gathers task-defined labels.
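For readers who prefer running code over pseudo-code, a direct Python rendering of Algorithm 1 follows. It is a sketch, not the released implementation; the class and label names (LabelNode, PHI_NT, PHI_T) are illustrative.

```python
# Runnable rendering of the yield function described in Algorithm 1.
from dataclasses import dataclass
from typing import Optional, Set

PHI_NT, PHI_T = "<phi_NT>", "<phi_T>"

@dataclass
class LabelNode:
    label: str
    left: Optional["LabelNode"] = None
    right: Optional["LabelNode"] = None

    @property
    def is_leaf(self) -> bool:
        return self.left is None and self.right is None

def yield_labels(root: LabelNode) -> Set[str]:
    result, queue = set(), [root]
    while queue:
        node = queue.pop(0)
        if node.label == PHI_NT:          # non-terminal: keep descending
            if not node.is_leaf:
                queue.extend([node.left, node.right])
        elif node.label != PHI_T:         # terminal task label: stop and gather
            result.add(node.label)
    return result

# Example label tree: (phi_NT (positive) (phi_T)) yields {"positive"}.
tree = LabelNode(PHI_NT, LabelNode("positive"), LabelNode(PHI_T))
print(yield_labels(tree))  # {'positive'}
```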
TRAINING OBJECTIVE.
During the training stage, though the Structured LM can predict tree structures, the difficulty here is how to associate each node with a single label without span-level gold labels. We define our training objective as follows. Training objective: Given a sentence $S$ whose length is $|S|$ and its gold label set $T = \{l_1, ..., l_m\}$, let $t$ be its best parsing tree given by the unsupervised parser of Fast-R2D2 and $\tilde{t}$ be a label tree transferred from $t$; $\tilde{t}_{[C]}$ denotes $\tilde{t}$ satisfying condition $C$. The training objective is to maximize the probability of a given tree transferring to a label tree yielding labels that are consistent with the ground-truth labels, which can be formalized as minimizing $-\log P_\Psi(\tilde{t}_{[\mathcal{Y}(\tilde{t})=T]}|t)$.
Before we get into the specifics, several key aspects are defined as follows: (1) Denotations: $t_{i,j}$ denotes the subtree spanning from $i$ to $j$ (both indices inclusive), whose root, left and right subtrees are $n_{i,j}$, $t_{i,k}$ and $t_{k+1,j}$ respectively, in which $k$ is the split point.
(2) Symbolic Interface: $P_\Psi(l|n_{i,j})$ is the probability of a single node $n_{i,j}$ being associated with the specified label $l$. Thus, the probability of $t$ transferring to a specific label tree $\tilde{t}$ is the product of the probabilities of all nodes being associated with their corresponding labels in $\tilde{t}$.
Figure 4: To ensure that the yield result of $\tilde{t}_{i,j}$ contains label $l$, node $n_{i,j}$ needs to be associated with either $\phi_{NT}$ or $l$, whose probabilities are $P_\Psi(\phi_{NT}|n_{i,j})$ and $P_\Psi(l|n_{i,j})$ respectively. If associated with $l$, it satisfies the condition. If associated with $\phi_{NT}$, at least one of its children's yield results should contain $l$. Here we use "\l" to denote that the yield result does not contain label $l$. In conclusion, $\mathcal{Y}^l_{i,j}$ can be estimated recursively by Equation 1.
Obviously, it is intractable to exhaust all potential $\tilde{t}$ to estimate $P_\Psi(\tilde{t}_{[\mathcal{Y}(\tilde{t})=T]}|t)$. Our core idea is to leverage the symbolic interfaces to estimate $P_\Psi(\tilde{t}_{[C]}|t)$ via dynamic programming. We start with an elementary case: estimating the probability that the yield result of $t_{i,j}$ contains a given label $l$, i.e., $P_\Psi(\tilde{t}_{[l \in \mathcal{Y}(\tilde{t}_{i,j})]}|t_{i,j})$. For brevity, we denote it as $\mathcal{Y}^l_{i,j}$. Following the recursive formulation illustrated in Figure 4, we have
$\mathcal{Y}^l_{i,j} = P_\Psi(l|n_{i,j}) + P_\Psi(\phi_{NT}|n_{i,j})\big[1 - (1-\mathcal{Y}^l_{i,k})(1-\mathcal{Y}^l_{k+1,j})\big], \qquad \mathcal{Y}^l_{i,i} = P_\Psi(l|n_{i,i}). \qquad (1)$
However, for a given label set $M$, if we try to estimate $P_\Psi(\tilde{t}_{[\mathcal{Y}(\tilde{t})=M]}|t)$ in the same way, we will inevitably exhaust all potential combinations, as illustrated in Figure 5(a), which leads to exponential complexity. To tackle this problem, we divide the estimation of $P_\Psi(\tilde{t}_{[\mathcal{Y}(\tilde{t})=T]}|t)$ into sub-problems over individual labels. Let $F$ denote the union of all the task labels and $\{\phi_T, \phi_{NT}\}$, and let $O$ denote $F \setminus T$. By assuming that the states of labels are independent of each other, where the state of a label indicates whether the label is contained in the yield result, we have
$P_\Psi(\tilde{t}_{[\mathcal{Y}(\tilde{t})=T]}|t) \approx P_\Psi(\tilde{t}_{[O \cap \mathcal{Y}(\tilde{t}) = \emptyset]}|t) \prod_{l \in T} \mathcal{Y}^l_{1,|S|}.$
We do not approximate $P_\Psi(\tilde{t}_{[O \cap \mathcal{Y}(\tilde{t}) = \emptyset]}|t)$, as it can be computed directly. The above function assumes that multiple non-overlapping spans may be associated with the same label. In some cases, if there is a mutual-exclusiveness constraint that two non-overlapping spans are not allowed to be associated with the same task label, as shown in Figure 5(b), the recursion for $\mathcal{Y}^l_{i,j}$ is modified so that a task label may be yielded by at most one of the two child subtrees. The objective function given a parsing tree is $\mathcal{L}_{label}(\Psi) = -\log P_\Psi(\tilde{t}_{[\mathcal{Y}(\tilde{t})=T]}|t)$. Because it has been verified in prior work (Hu et al., 2022) that models achieve better downstream performance and domain adaptivity when trained along with the self-supervised objective $\mathcal{L}_{self}(\Phi)$, we design the final loss as $\mathcal{L}(\Psi) = \mathcal{L}_{label}(\Psi) + \mathcal{L}_{self}(\Phi)$.
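The recursion for $\mathcal{Y}^l$ can be implemented in a few lines. The sketch below uses plain floats and hand-set node distributions (in the actual model they would come from softmax(MLP(e_{i,j})) and the computation would stay differentiable); it only illustrates the containment probability, not the full loss, and the class and label names are illustrative.

```python
# Sketch of the dynamic program for Y^l_{i,j}: the probability that the
# subtree spanning (i, j) yields label l, following Equation 1 above.
from dataclasses import dataclass
from typing import Dict, Optional

PHI_NT = "<phi_NT>"

@dataclass
class SpanNode:
    probs: Dict[str, float]            # P(label | n_{i,j}), including PHI_NT
    left: Optional["SpanNode"] = None
    right: Optional["SpanNode"] = None

def contains_prob(node: SpanNode, label: str) -> float:
    """Y^l for this subtree."""
    p_l = node.probs.get(label, 0.0)
    if node.left is None:              # leaf base case: Y^l = P(l | n)
        return p_l
    y_left = contains_prob(node.left, label)
    y_right = contains_prob(node.right, label)
    at_least_one_child = 1.0 - (1.0 - y_left) * (1.0 - y_right)
    return p_l + node.probs.get(PHI_NT, 0.0) * at_least_one_child

# Toy example: the root tends to split (PHI_NT); the left child is likely
# "food", the right child is likely "service"; both labels are then yielded
# with high probability.
left = SpanNode({"food": 0.9, "service": 0.05, PHI_NT: 0.05})
right = SpanNode({"food": 0.05, "service": 0.9, PHI_NT: 0.05})
root = SpanNode({"food": 0.02, "service": 0.02, PHI_NT: 0.9}, left, right)
for lab in ("food", "service"):
    print(lab, round(contains_prob(root, lab), 3))
```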
DOWNSTREAM TASKS
In this section, we compare our interpretable symbolic-Neural model with models based on dense sentence representation to verify our model works as well as conventional models. All systems are trained on raw texts and sentence-level labels only.
Baselines. To fairly compare our method with other systems, all backbones such as Fast-R2D2 and BERT (Devlin et al., 2019) are pretrained on the same corpus with the same vocabulary and number of epochs. We run with 4 different random seeds, record the best results, and report their mean. Because of GPU resource limits and energy saving, we pretrain all models on Wiki-103 (Merity et al., 2017), which contains 110 million tokens. To compare our model with systems using only whole-sentence representations, we include BERT and Fast-R2D2 using the root representation in our baselines. To study the reliability of the unsupervised parser, we include systems with a supervised parser (Zhang et al., 2020) that use BERT or a tree encoder as the backbone. For the former, we take the average pooling of the representations of the words in span (i,j) as the representation of the span. For the latter, we use the pretrained R2D2 tree encoder as the backbone. To compare with methods for multi-instance learning (MIL) but without structure constraints, we extend the multi-instance learning framework proposed by Angelidis & Lapata (2018) to the multi-instance multi-label learning (MIMLL) scenario. Please find the details about MIL and MIMLL in Appendix A.7. We also conduct ablation studies on systems with or without the top-down encoder and the mutual-exclusiveness constraint. For the systems using root or [CLS] representations on multi-label tasks, outputs are followed by a sigmoid layer and filtered by a threshold that is tuned on the training set.
Hyperparameters. Our BERT follows the setting in Devlin et al. (2019), using 12-layer Transformers with 768-dimensional embeddings, 3,072-dimensional hidden-layer representations, and 12 attention heads. The setting of Fast-R2D2 follows Hu et al. (2022). Specifically, the tree encoder uses 4-layer Transformers with the other hyper-parameters the same as BERT, and the top-down encoder uses 2-layer ones. The top-down parser uses a 4-layer bidirectional LSTM with 128-dimensional embeddings and 256-dimensional hidden layers. We train all the systems across the seven datasets for 20 epochs with a learning rate of $5 \times 10^{-5}$ for the encoder and $1 \times 10^{-2}$ for the unsupervised parser, and batch size 64 on 8 A100 GPUs. Table 1: We report mean accuracy for SST-2, Matthews correlation for CoLA, and F1 scores for the rest. We use "S.N." to denote the systems based on the Symbolic-Neural architecture, and "Sent." to denote those using only whole-sentence representations. We use the subscript "fp" for the models based on full permutation, and the subscripts "topdown" and "exclusive" for those with the top-down encoder and the mutual-exclusiveness constraint. Please find the details of S.N._fp in Appendix A.2.
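The two learning rates mentioned above can be realized with optimizer parameter groups; the sketch below is an assumed PyTorch setup (the AdamW choice and the module shapes are placeholders echoing the description), not the released training script.

```python
# Sketch: one optimizer with 5e-5 for the encoder and 1e-2 for the parser.
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, dim_feedforward=3072, batch_first=True),
    num_layers=4)
parser = nn.LSTM(input_size=128, hidden_size=256, num_layers=4,
                 bidirectional=True, batch_first=True)

optimizer = torch.optim.AdamW([
    {"params": encoder.parameters(), "lr": 5e-5},
    {"params": parser.parameters(), "lr": 1e-2},
])
print([group["lr"] for group in optimizer.param_groups])  # [5e-05, 0.01]
```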
Results and discussion. We make several observations from Table 1. Firstly, we find that our models overall achieve competitive prediction accuracy compared with strong baselines including BERT, especially on multi-label tasks. This result validates the rationality of our label-constituent association inductive bias. The significant gap compared to MIMLL fully demonstrates the superiority of building hierarchical relationships between spans in the model. Secondly, when using sentence representations, the models with the unsupervised parser achieve similar results to those with the supervised parser on most tasks but significantly outperform the latter on CoLA. A possible reason for the poor performance of the latter systems on CoLA is that the dataset contains many sentences with grammar errors which are not covered by the training set of the supervised parser, while the unsupervised parser can adapt to those sentences because $\mathcal{L}_{bilm}$ and $\mathcal{L}_{KL}$ are included in the final loss. This result reflects the flexibility and adaptability of unsupervised parsers. Thirdly, 'parser+TreeEnc.' in the Symbolic-Neural architecture does not perform as well as 'parser+TreeEnc.' using sentence representations, while the systems using the unsupervised parser show the opposite pattern. Considering that the Symbolic-Neural model relies heavily on the representations of inner constituents, we attribute such results to the tree encoder having adapted to the trees given by the unsupervised parser during the pretraining stage of Fast-R2D2, which leads to self-consistent intermediate representations. This result also verifies that the structured language model that learns latent tree structures unsupervisedly is mature enough to be the backbone of our method.
4.2 ANALYSIS OF INTERPRETABILITY

Bastings et al. (2022) propose a method that "poisons" a classification dataset with synthetic shortcuts, trains classifiers on the poisoned data, and then tests whether a given interpretability method can pick up on the shortcut.
Setup. Following that work, we define two shortcuts, each consisting of four consecutive tokens, to assess the faithfulness of predicted span labels: #0#1#2#3 and #4#5#6#7 indicate labels 1 and 0, respectively. We select SST-2 and CoLA as the training sets, with an additional 20% of synthetic data. We create a synthetic example by (1) randomly sampling an instance from the source data, (2) inserting the consecutive shortcut tokens at a random position, and (3) setting the label as the shortcut prescribes.
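For concreteness, a minimal sketch of this poisoning procedure could look as follows. The shortcut token strings and the three numbered steps follow the description above; the dataset representation (a list of (tokens, label) pairs), the handling of the 20% ratio, and all function names are illustrative assumptions rather than the authors' implementation.

```python
import random

# Hypothetical shortcut definitions following the description above:
# "#0 #1 #2 #3" prescribes label 1 and "#4 #5 #6 #7" prescribes label 0.
SHORTCUTS = {1: ["#0", "#1", "#2", "#3"], 0: ["#4", "#5", "#6", "#7"]}

def make_poisoned_example(source_data, rng=random):
    """Create one synthetic example: sample an instance, insert a shortcut
    at a random position, and set the label as the shortcut prescribes."""
    tokens, _ = rng.choice(source_data)       # (1) sample an instance
    label = rng.choice([0, 1])                # pick which shortcut to plant
    pos = rng.randint(0, len(tokens))         # (2) random insertion position
    poisoned = tokens[:pos] + SHORTCUTS[label] + tokens[pos:]
    return poisoned, label                    # (3) label set by the shortcut

def poison_dataset(train_data, ratio=0.2, rng=random):
    """Extend a training set with roughly `ratio` additional synthetic examples."""
    n_extra = int(len(train_data) * ratio)
    return train_data + [make_poisoned_example(train_data, rng) for _ in range(n_extra)]
```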
Verification steps. A model trained on the synthetic data achieves 100% accuracy on the synthetic test set, whereas a model trained on the original dataset achieves only around 50% on the same synthetic test set, which verifies that the classifiers have indeed learned the planted shortcut.
Sorting tokens. Since our model does not produce a heatmap over input tokens, it lacks an intuitive way to obtain the top-K tokens required by the shortcut method. We therefore propose a simple heuristic tree-based ranking algorithm. Specifically, for a given label l, we start from the root, denoted as n, and compare P(l | n_left) and P(l | n_right), where n_left and n_right are its left and right children. If P(l | n_left) > P(l | n_right), all descendants of the left child are ordered before the descendants of the right child, and vice versa. By recursively applying this rule, we obtain a ranking over all tokens. We additionally report the precision of shortcut span labels in the predicted label trees. A shortcut span label is correct only if the consecutive shortcut tokens are covered by the same span and the predicted label is consistent with the shortcut label.

Results. From Table 2, we make the interesting observation that the precision of the top-ranked tokens declines as the number of training epochs increases. We think the reason for this phenomenon is that shortcut spans are the easiest to learn in early epochs, so almost all top tokens are shortcut tokens. With continued training, the model gradually learns the semantics of the texts from the original data. Although the label of a sentence in the synthetic data is random, there is still around a 50% probability that it is semantically consistent with the text, and hence the label probability of some span may exceed the probability of the shortcut span. Please note that the precision of shortcut span labels predicted by our model is 100%. Such results demonstrate again that our model is self-interpretable and can reflect the model's rationales through span labels. Samples of label trees with shortcut tokens are shown in Appendix A.5.
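A compact recursive implementation of this ranking rule could look like the sketch below. The node interface (left/right children, a token index at leaves, and a per-label probability accessor) is a hypothetical assumption used only to illustrate the ordering logic described above.

```python
def rank_tokens(node, label):
    """Return token indices ordered by the heuristic tree-based ranking:
    at each internal node, all descendants of the child with the higher
    P(label | child) are ordered before those of the other child."""
    if node.left is None and node.right is None:   # leaf node -> a single token
        return [node.token_index]
    left_first = node.left.label_prob(label) > node.right.label_prob(label)
    first, second = (node.left, node.right) if left_first else (node.right, node.left)
    return rank_tokens(first, label) + rank_tokens(second, label)

# Usage (hypothetical): take the top-K tokens for the shortcut test.
# top_k = rank_tokens(root, label=1)[:K]
```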
CONSISTENCY WITH HUMAN RATIONALES
To evaluate the consistency of the span labels learned by our model with human rationales, we design a constituent-level attribution task. Specifically, we hide the gold span positions in NER and slot-filling datasets to see whether our model is able to recover the gold spans and labels, so only raw text and sentence-level gold labels are visible to the models. We then train the models on multi-label classification tasks and evaluate the span positions they learn without span-level supervision.

Figure 7: A sample of our method on semi-supervised slot filling. The ground truths are Denver, Oakland, afternoon, 5 pm, and nonstop for each slot, respectively. However, the last three are reasonable even though they differ from the ground truths.
Data set. We report F1 scores on the following data sets: ATIS (Hakkani-Tur et al., 2016), MITRestaurant (Liu et al., 2013a) and MITMovie (Liu et al., 2013b). ATIS is a slot-filling task and the others are NER tasks. Baselines. We include two baselines with attribution ability on multi-label tasks: integrated gradients (IG) (Sundararajan et al., 2017) and multi-instance learning (Angelidis & Lapata, 2018). We follow the setup in Sec. 4.1 and report the results of the last epoch. For IG, we set the number of interpolation steps to 200, use the same BERT as in the last section as the encoder, filter the attribution of each token by a threshold, and select the filtered positions as outputs. We use zero vectors and [MASK] embeddings as the baselines for IG, as Bastings et al. (2022) find that the latter can significantly improve its performance. Since IG scores have no explicit meaning, we allow IG to adjust thresholds according to the test datasets. We report the best results of both baselines and the corresponding thresholds. Please find the full version of the table in Appendix A.4. For MIMLL, we select the span with the maximum attention score for a specified label. Please find details in Appendix A.7. Metrics. We denote the predicted span set as P, the gold span set as G, and the overlap of P and G with the same labels as O. Then we have precision = |O| / |P|, recall = |O| / |G|, and F1 as the harmonic mean of precision and recall.

Table 2: F1 scores for semi-supervised slot filling and NER whose gold span positions are hidden. "Thres." is short for threshold.
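The metric definition above can be computed directly over sets of labeled spans; the small sketch below does so, with the (start, end, label) tuple representation being an assumption made for illustration.

```python
def span_f1(predicted, gold):
    """predicted, gold: sets of (start, end, label) tuples.
    Precision = |O|/|P|, recall = |O|/|G|, F1 = their harmonic mean,
    where O is the overlap of predicted and gold spans with identical labels."""
    overlap = predicted & gold
    precision = len(overlap) / len(predicted) if predicted else 0.0
    recall = len(overlap) / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: span_f1({(0, 1, "fromloc"), (3, 4, "toloc")}, {(0, 1, "fromloc")})
```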
Results and discussion. From Table 2, one observation is that models with the mutual-exclusiveness constraint achieve better F1 scores. Such results illustrate that a stronger inductive bias is more helpful for models to learn constituent-label alignments. Besides, we find that the Symbolic-Neural models significantly outperform the MIMLL and IG baselines on the NER datasets but trail IG on the slot-filling task. By studying the outputs of our method, with a sample shown in Figure 7, we find that our model tends to recall long spans while the ground truths in ATIS tend to be short spans. We also find that on sls-movie-trivial, MIMLL significantly outperforms IG. We therefore hypothesize that the distribution of gold span lengths may affect results. We divide sentences into buckets according to the average gold span length and compute F1 scores for each bucket, as shown in Table 2. Interestingly, we find that the scores of IG decline significantly with increasing span lengths, while our method performs well on all buckets. In addition, we argue that the F1 scores on the NER datasets reflect interpretability more objectively, because the boundaries of proper nouns are clear and objective, while the choice of slots is relatively ambiguous regarding whether to include prepositions, modal verbs, etc. We output the label trees generated by our model trained on the Navigator, SST-2, and CoLA to observe whether the model has sufficient interpretability. From Figure 8 we can see that our method is able to learn potential alignments of intents and texts and show them explicitly. This could be used in multi-intent NLU systems to help determine the attribution of slots to the corresponding intents. We also study the differences between the label trees generated by the vanilla Symbolic-Neural model and by Symbolic-Neural topdown; the cases can be found in Appendix A.12. We find that the vanilla Symbolic-Neural model fails to deal with multi-intent cases, which verifies the necessity of introducing the top-down encoder. For SST-2, as there are no neutral samples, we randomly sample sentences from Wiki-103 as neutral texts and force all their nodes to be φ_NT via a mean squared error loss. Figure 9(a) shows the sentiment polarity of each constituent and the polarity reversal caused by "never". Such a characteristic could be used for text mining by gathering the minimal spans of a specified label. We also study the generated label trees on CoLA, a linguistic acceptability data set. We transform the task into a grammar error detection problem by converting the label "1" to φ, as "1" means no error is found in a sentence. Figure 9(b) shows that the model is able to detect incomplete constituents, which may help in applications like grammar error localization. More cases can be found in the Appendix.
CONCLUSION & LIMITATION
In this paper, we propose a novel label extraction framework based on a simple inductive bias and model single/multi-label text classification in a unified way. We discuss how to build a probabilistic model that maximizes the probability of valid potential label trees by leveraging the internal representations of a structured language model as symbolic interfaces. Our experimental results show that our method achieves inherent interpretability at various granularities. The generated label trees could have potential value in various unsupervised tasks requiring constituent-level outputs.
Regarding the limitations of our work, we require that the labels corresponding to the texts in a dataset have a certain degree of diversity, which forces the model to learn self-consistent constituent-label alignments. For example, in ATIS, almost all training samples share the same labels, such as "fromloc.city name" and "toloc.city name". This is why our model fails to accurately associate these two labels with the correct spans in Figure 7.
REPRODUCIBILITY STATEMENT
In the supplementary material, we include a zip file containing our code and links for downloading the datasets. We have also included the scripts used to run all baselines and the Symbolic-Neural models.
ACKNOWLEDGEMENT
This work was supported by Ant Group through the CCF-Ant Research Fund. We thank the Aliyun EFLOPS team for their substantial support in designing and providing a cutting-edge training platform to facilitate fast experimentation in this work. We also thank Jing Zheng for his help with paper revision and code review.
A APPENDIX
A.1 RELATED WORKS

Structured language models. Many attempts have been made to develop structured language models. Pollack (1990) proposed using RvNNs as a recursive architecture to encode text hierarchically, and Socher et al. (2013) showed the effectiveness of RvNNs with gold trees for sentiment analysis. However, both approaches require annotated trees. Gumbel-Tree-LSTMs (Choi et al., 2018) construct trees by recursively selecting two terminal nodes to merge and learning composition probabilities via downstream tasks. CRvNN (Chowdhury & Caragea, 2021) makes the entire process end-to-end differentiable and parallel by introducing a continuous relaxation. However, neither Gumbel-Tree-LSTMs nor CRvNN mention a pretraining mechanism in their work. URNNG (Kim et al., 2019) proposed the first architecture to jointly pretrain a parser and an encoder based on RNNG (Dyer et al., 2016); however, its O(n³) time and space complexity makes it hard to pretrain on large-scale corpora. ON-LSTM and StructFormer (Shen et al., 2019) propose a series of methods to integrate structures into LSTMs or Transformers by masking information in differentiable ways. As the encoding process is still performed in layer-stacking models, there are no intermediate representations for tree nodes. Maillard et al. (2017) propose an alternative approach based on a differentiable CKY encoding. The algorithm is made differentiable by a soft-gating approach, which approximates discrete candidate selection by a probabilistic mixture of the constituents available in a given cell of the chart. While their work relies on annotated downstream tasks to learn structures, Drozdov et al. (2019) propose a novel auto-encoder-like pretraining objective based on the inside-outside algorithm (Baker, 1979; Casacuberta, 1994), but it is still of cubic complexity. To tackle the O(n³) limitation of CKY encoding, Hu et al. (2021) propose an MLM-like pretraining objective and a pruning strategy, which reduces the encoding complexity to linear and makes it possible to pretrain the model on large-scale corpora.
Multi-Instance Learning. Multi-instance learning (MIL) deals with problems where labels are associated with groups of instances or bags (spans in our case), while instance labels are unobserved. The goal is either to label bags (Keeler et al., 1990; Dietterich et al., 1997; Maron & Ratan, 1998) or to simultaneously infer bag and instance labels (Zhou et al., 2009; Kotzias et al., 2015). Angelidis & Lapata (2018) apply MIL to segment-level sentiment analysis based on an attention-based scoring method. In our work, we refine instances to different semantic granularities and consider hierarchical relationships between instances.
Model Interpretability. In the line of work on model interpretability, many approaches have been proposed (Ribeiro et al., 2016; Lundberg & Lee, 2017).

If |M| > 1, let C(M) be the set of all potential pairs where Y(t_i,k) ∪ Y(t_k+1,j) = M. If |M| = 1, the case is similar to the one described in Figure 4. If M = φ, n_i,j can only be associated with φ_T or φ_NT, with M_l = φ and M_r = φ.
Finally, the transition function for t_i,j, where i < j, combines these cases. The transition function works in a bottom-up manner and iterates over all possible M ⊆ T; X^T_{1,|S|} is the final probability. However, iterating over C(M) and all M ⊆ T is of exponential complexity, so this only works when |T| is small.
A.3 PRETRAIN BERT AND FAST-R2D2 FROM SCRATCH
The dataset WikiBooks originally used to train BERT (Devlin et al., 2019) is a combination of English Wikipedia and BooksCorpus (Zhu et al., 2015). However, BooksCorpus is no longer publicly available, so it is hard to pretrain Fast-R2D2 on the same corpus, which makes a fair comparison with the publicly released BERT model impossible. Considering the limited GPU resources, we pretrain both BERT and Fast-R2D2 from scratch on Wiki-103. We train BERT from scratch following the tutorial by Huggingface, with the masking rate set to 15%. The vocabulary of BERT and Fast-R2D2 is kept the same as that of the original BERT. As demonstrated in RoBERTa (Liu et al., 2019), the NSP task is harmful and longer sequences are helpful for downstream tasks, so we remove the NSP task and use the original corpus, not split into sentences, as input. For Fast-R2D2, WikiText-103 is split at the sentence level, and sentences longer than 200 tokens after tokenization are discarded (about 0.04‰ of the original data). BERT is pretrained for 60 epochs with a learning rate of 5 × 10⁻⁵ and a batch size of 50 per GPU on 8 A100 GPUs. Fast-R2D2 is pretrained with a learning rate of 5 × 10⁻⁵ for the Transformer encoder and 1 × 10⁻³ for the parser. Please note that the batch size of Fast-R2D2 is dynamically adjusted so that the total length of sentences in a batch does not exceed a maximum threshold; to keep the batch size similar to that of BERT, the threshold is set to 1,536. Because the average sentence length is around 30 for Wiki-103, the average batch size of Fast-R2D2 is around 50, which is similar to that of BERT.

We adopt a canonical multi-instance learning framework used in text classification proposed by Angelidis & Lapata (2018), in which each instance has a representation and all instances are fused by attention. The original work produces hidden vectors h_i for each segment with GRU modules and computes attention weights a_i as the normalized similarity of each h_i with h_a.
Here C is the total number of classes, p_i is the label prediction for an individual segment, and p_d is the document-level prediction. They use the negative log-likelihood of the prediction as the objective function. In our work, we simply replace segment representations with span representations as the experimental baseline. Specifically, we use the top-down representation e_i,j as the tensor to be attended to and predict the label from e_i,j, where D is the span set of a parsing tree. Please note that the MIL model in our baselines is trained together with L_bilm and L_KL, so the final loss is L_cls + L_self.
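A minimal sketch of such an attention-based MIL baseline over span representations is given below. The dimensionality, parameter names, and the use of a dot-product similarity with a learned query vector are assumptions made for illustration; they are not taken from the original implementations.

```python
import torch
import torch.nn as nn

class MILAttentionBaseline(nn.Module):
    """Attention-based multi-instance learning over span representations:
    spans are scored against a learned query, fused by softmax attention,
    and the fused vector is mapped to class log-probabilities."""
    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden_dim))   # analogue of h_a
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, span_reprs):                 # span_reprs: (num_spans, hidden_dim)
        scores = span_reprs @ self.query           # similarity of each span with the query
        attn = torch.softmax(scores, dim=0)        # normalized attention weights a_i
        doc_repr = attn @ span_reprs               # fused document-level representation
        return torch.log_softmax(self.classifier(doc_repr), dim=-1)  # for an NLL objective
```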
A.7 MULTI-LABEL MULTI-INSTANCE LEARNING BASED ON FAST-R2D2
To support multi-label multi-instance learning, we refactor the above equations so that attention can be computed separately for each label: for each label c there is a separate attention query vector h^(c). The final objective function is L = − Σ_{c∈T} log p^(c) − Σ_{c∈F\T} log(1 − p^(c)), where T is the set of gold labels and F is the full label set. In the semi-supervised slot-filling and NER tasks, we let the model predict labels first and then pick the span with the maximum attention weight for each predicted label.
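The sketch below illustrates one way such per-label attention could look, extending the MIL baseline above with one query vector per label and a per-label binary objective. All names and tensor shapes are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class MIMLLBaseline(nn.Module):
    """Per-label attention over span representations with a per-label
    binary objective: L = -sum_{c in T} log p^(c) - sum_{c not in T} log(1 - p^(c))."""
    def __init__(self, hidden_dim, num_labels):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_labels, hidden_dim))  # one h^(c) per label
        self.scorers = nn.Linear(hidden_dim, num_labels)

    def forward(self, span_reprs):                    # (num_spans, hidden_dim)
        scores = self.queries @ span_reprs.T          # (num_labels, num_spans)
        attn = torch.softmax(scores, dim=-1)          # per-label attention over spans
        label_reprs = attn @ span_reprs               # (num_labels, hidden_dim)
        logits = (self.scorers(label_reprs)           # (num_labels, num_labels)
                  .diagonal())                        # score label c from its own representation
        return torch.sigmoid(logits), attn            # p^(c) and per-label span attention

def mimll_loss(probs, gold_mask):
    """gold_mask: 1.0 for labels in T, 0.0 otherwise (same length as probs)."""
    return -(gold_mask * torch.log(probs) + (1 - gold_mask) * torch.log(1 - probs)).sum()
```

The per-label attention weights returned here are what would be used to pick the highest-weighted span for each predicted label, as described above.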
A.8 ABOUT THE CONDITIONAL INDEPENDENCE ASSUMPTION
We argue that the independence assumption used in our objective is actually weaker than the one used in conventional multi-label classification tasks. Formally, conventional multi-label classification is the problem of finding a model that maps inputs x to binary vectors y; that is, it assigns a value of 0 or 1 to each element (label) of y. So the objective of multi-label classification is to minimize −log P(∧_{i∈T} y_i = 1, ∧_{j∈O} y_j = 0 | x), where T denotes the indices of the gold labels and O denotes the indices not in T. It is impossible to estimate this tractably without introducing some conditional independence assumption. By assuming that the states of the labels are independent of each other, we have:
P(∧_{i∈T} y_i = 1, ∧_{j∈O} y_j = 0 | x) ≈ P(∧_{i∈T} y_i = 1 | x) · P(∧_{j∈O} y_j = 0 | x),
log P(∧_{i∈T} y_i = 1 | x) ≈ log ∏_{i∈T} P(y_i = 1 | x) = Σ_{i∈T} log P(y_i = 1 | x),
log P(∧_{j∈O} y_j = 0 | x) ≈ log ∏_{j∈O} P(y_j = 0 | x) = Σ_{j∈O} log P(y_j = 0 | x),
which can finally be reformulated as the well-known binary cross-entropy loss −Σ_i [ŷ_i log y_i + (1 − ŷ_i) log(1 − y_i)], where ŷ is the ground truth and y is the output probability of the model.
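For reference, the reformulated objective is exactly the standard multi-label binary cross-entropy; a minimal PyTorch check of the equivalence, using made-up probabilities and targets, could look like this:

```python
import torch

y_hat = torch.tensor([1.0, 0.0, 1.0])   # ground-truth label indicators
y = torch.tensor([0.9, 0.2, 0.7])        # model output probabilities

# Explicit form: -sum_i [ y_hat_i * log y_i + (1 - y_hat_i) * log(1 - y_i) ]
explicit = -(y_hat * torch.log(y) + (1 - y_hat) * torch.log(1 - y)).sum()

# Matches PyTorch's built-in BCELoss with sum reduction.
builtin = torch.nn.BCELoss(reduction="sum")(y, y_hat)
assert torch.allclose(explicit, builtin)
```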
The logic of Equation 2 is similar to the above equations: P(t_[T ⊆ Y(t)] | t) is equivalent to P(∧_{i∈T} y_i = 1 | x), and P(t_[O ∩ Y(t) = φ] | t) is equivalent to P(∧_{j∈O} y_j = 0 | x). However, we do not require the independence assumption to estimate the latter. | 2023-03-07T06:42:11.116Z | 2023-03-06T00:00:00.000 | {
"year": 2023,
"sha1": "1143ed5a664e0fc2443e0a24fbe0bf274a89b48c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1143ed5a664e0fc2443e0a24fbe0bf274a89b48c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
259439674 | pes2o/s2orc | v3-fos-license | The relationship between secondary education outcomes and academic achievement: a study of Finnish educational sciences students
ABSTRACT A multitude of research shows that the secondary education (SE) outcomes relate with university academic achievement (AA). More research is needed on SE outcomes subject-specifically. We examined the relationship between the Finnish matriculation examination (ME) results and AA, study success (grades) and study progress (credits), of educational sciences students (N = 1,214) who were pursuing a three-year bachelor’s degree in 2007–2017. The overall ME results explained 15% of the variance in study success and less than 1% in study progress. Success in mother tongue, mathematics, psychology and health education in the ME along with 1st- and 2nd-year study success related positively with higher study success throughout the three years. For the 3rd-year study progress, the results were more mixed, and progress in the 1st and 2nd year was statistically significant compared to the ME results. In conclusion, it is recommended to consider subject-specific ME outcomes regarding student admissions and curricula.
Introduction
The secondary education (SE) outcomes, such as high school grade point average (HSGPA), have been shown to predict well university students' academic achievement (AA) (see e.g. Richardson, Abraham, and Bond 2012). Field-specific prior knowledge is known to relate to high performance within the field in question (see e.g. Pinxten, Van Soom, and Peeters et al. 2019). A closer look at the literature on merit-based approaches to explaining AA reveals several gaps and shortcomings. First, the validity of a HSGPA has been questioned for not making visible subject-specific skills and knowledge (see e.g. Office of Qualifications and Examinations Regulation 2012). Second, the educational sciences are underrepresented in research settings compared to, for example, STEM (Science, Technology, Engineering and Mathematics) subjects and fields (see e.g. Corcoran and O'flaherty 2017). Third, there is less evidence on the relationship between SE outcomes and AA in the years between the 1st year of university study and graduation (see e.g. Westrick, Lee, and Robbins et al. 2015).
Therefore, we examined students who studied educational sciences in 2007-2017 at a Finnish university (henceforth called 'students').We studied AA, both study success and study progress, in the three-year bachelor's degrees (BA) overall and year-specifically (the 1 st , 2 nd and 3 rd years) based on both overall and subject-specific results of the matriculation examination (ME) which is the national congruent test taken among SE students.To gain a longitudinal perspective, we examined the relationship between AA in the 1 st and 2 nd year with subsequent years of BA.
SE outcomes as one indicator at the gate to a university study
In some countries, youth study a wide range of different subjects (for Finland, see the Context section), while in others, youth study field-specific tracks already in SE (European Education and Culture Executive Agency [Eurydice] 2018).Hence, each applicant and enrolled student has an individual combination of SE outcomes, which can be considered their entry characteristics (Noyens, Donche, and Coertjens et al. 2017).
The admission criteria applied for admission to a university are different.Some criteria emphasise a background in a certain track (Vulperhorst, Lutz, and De Kleijn et al. 2017), a certain overall level of accomplishment in SE (e.g.HSGPA) (Steenman, Bakker, and Van Tartwijk 2016), or a certain level of success in standardised tests, e.g. the Scholastic Assessment Test (SAT) or American College Testing (ACT) (Noble and Sawyer 2002).Other admission systems open the doors for all applicants but grant study rights based on success in the first year of study (Brouwer, Jansen, and Hofman et al. 2016).Regardless of admission criteria, an admission process aims to identify and admit applicants who are most likely to complete the education (Noble and Sawyer 2002).
According to Noyens, Donche, and Coertjens et al. (2017), research on the transition to universities can be categorised in terms of phase, time, level or outcome.While the relationship of the SE outcomes with AA has been studied in other contexts, the Finnish educational sciences offer insight into an education system known for its highly selective student admission (OECD 2022), but which has considered the tests taken and grades attained in the national matriculation examination (ME) as an admission criterion in heterogeneous ways, if at all (Kunnari, Pursiainen, and Läärä et al. 2022).Students have broad possibilities for choosing subjects to include in the ME (Finnish National Agency for Education 2003); however, students in educational sciences have typical profiles (Kunnari, Pursiainen, and Läärä et al. 2022).
Evaluation of AA based on SE outcomes
The research field does not have a shared definition of AA (Andersson 2003).AA is usually measured with indicators for study success in terms of grades (study results in particular courses or a period of time) and study progress in terms of credits (e.g.number of European Credit Transfer and Accumulation System (ECTS) that a student has earned in a particular period of time) (see, e.g.Andersson 2003).However, Hattie (2009) identified 130 significant variables to explain AA in the context of schools, colleges, and universities.In this study, we concentrated solely on merit-based outcomes, i.e. grades in SE as well as grades and credits in universities.
A large number of studies in the broader literature have examined AA based on SE outcomes. Students with high grades in SE usually succeed better in university (see e.g. Corcoran and O'flaherty 2017). Also, the higher the success in SE, the better students perform during the 1st year of study (Brouwer, Jansen, and Hofman et al. 2016). De Koning, Loyens, and Rikers et al. (2012) found that SE outcomes relate to higher achievement in the 1st year at a high level and in the 2nd and 3rd years at a moderate level in bachelor's studies. The 1st year has also been identified as important for later study success (Allen, Mattern, and Ndum 2019).
Literature on the relationship of subject-specific knowledge with AA in university is less consistent.Prior knowledge (Dochy 1992) of STEM in SE was associated with higher success in the 1 st year of STEM studies (Pinxten, Van Soom, and Peeters et al. 2019), and psychology was associated with more successful completion of the 1 st year of psychology study (Diseth, Pallesen, and Brunborg et al. 2010).High success in a native language and mathematics in SE has been identified as related to high success in university studies across disciplines (De Koning, Loyens, and Rikers et al. 2012).
In Finland, discipline-specific studies on the relationship between the ME and study success of medical and law students (Kleemola and Hyytinen 2019;Lindblom-Ylänne, Lonka, and Leskinen 1999) have identified similar relationships between the ME and study success as international research on AA.Among students in educational sciences, success in the ME has been found to relate to study success in universities more than entrance tests (Kupiainen, Marjanen, and Ouakrim-Soivo 2018;Mankki, Räihä, and Joutsenlahti 2018).
The SE outcomes have been identified as relating less to study progress (De Koning, Loyens, and Rikers et al. 2012).Subject-specific ME results have been connected to field-specific study progress among law students (Kleemola and Hyytinen 2019) and students of social sciences and humanities (Kallio et al., 2018).Also, the 1 st -year study progress has been found to affect progress in later years across disciplines (Koster and Verhoeven 2017).
In this study, we approach the relationship of SE outcomes with AA in universities from a temporal comparison perspective (Wigfield, Eccles, and Möller 2020), in which AA is examined in light of previous achievement-related experiences (SE outcomes) (see Wigfield, Eccles, and Möller 2020, 658). Conversely, this also relates to expectancy-value theory (1983) and, more precisely, to the utility value of educational choices: SE students might have chosen their ME tests based on known subject matter in the educational sciences at university (Wigfield and Cambria 2010). In this empirical study with registry data (Wolff, Helm, and Zimmermann et al. 2018), the temporal comparison aspect was utilised by investigating AA longitudinally at multiple waves (time and phase: Noyens, Donche, and Coertjens et al. 2017). Also, the educational outcomes were examined on two levels: the ME results overall and subject-specifically, and AA year-specifically and over the three years overall (Office of Qualifications and Examinations Regulation 2012).
Research questions and objectives
Overall, there is scant research on educational sciences students' AA.This study investigated the relationship between SE outcomes and educational sciences students' AA.Students' ME results are interpreted as prior knowledge (see e.g.Dochy 1992) and is one entry characteristic (Noyens, Donche, and Coertjens et al. 2017) upon enrolment to university studies.We addressed the following research questions: (1) What kinds of entry characteristics did students possess based on the ME results?(2) How did entry characteristics relate to AA year-specifically (the 1 st , 2 nd and 3 rd years) of BA studies?(3) How did the 1 st -and 2 nd -year AA relate with AA in subsequent years of BA studies?
The Finnish general upper SE and the ME
The academically emphasised general upper SE culminates in the national ME and is the main path to Finnish universities (The Matriculation Examination Board 2020).The national curriculum (Finnish National Agency for Education 2003) includes 75 compulsory and optional courses (approximately 38 hours/course) in mother tongue, advanced or basic syllabus mathematics (henceforth called 'advanced mathematics' and 'basic mathematics'), advanced and basic syllabus languages, and subjects in humanities and natural sciences (Kunnari, Pursiainen, and Läärä et al. 2022).
Participants in the current study needed to complete, within three consecutive examination rounds (e.g. spring–autumn–spring), the obligatory mother tongue test and at least three other subjects in the ME. Previously, they had had to complete the compulsory and a number of optional courses in these subjects to receive the ME certificate (Finnish National Agency for Education 2003). The ME assesses whether students have reached an adequate level of maturity and the targeted learning outcomes defined in the national curriculum (Britschgi 2014). The assessment is based on seven grades, in descending order: laudatur (L), eximia cum laude approbatur (E), magna cum laude approbatur (M), cum laude approbatur (C), lubenter approbatur (B), approbatur (A), and improbatur (I) (failed test) (The Matriculation Examination Board 2017).
Research-based educational sciences
The research-based Finnish educational sciences (Toom 2010) consist of theoretically emphasised courses and practical training in an everyday context (Tirri 2014).To be an eligible teacher, students need to, according to the European Bologna process, finish bachelor's and master's degrees in programmes designed to educate teachers.An exception is early childhood education, where BA is adequate for eligibility.The students at Finnish universities attain grades ranging from 1 (low) to 5 (high) or passed/failed, and study credits based on the ECTS for accomplished courses.A BA is composed of basic and intermediate studies.
The study was conducted at a faculty of education which offers study programmes that educate future teachers for kindergarten and primary schools (classes 1-9, ages 6-15), as well as professionals in education outside the teaching profession.The study programmes examined are intercultural teacher education, educational sciences, primary school teaching, artistic and practical subjects (including study programmes for music, handicrafts, and technology) and early childhood education (bachelor and master).
Data sources and participants
Originally, the registry data consisted of 1,579 students from 2006 to 2015.Data on BA achievements were obtained from the studied university and the ME results from the Matriculation Examination Board.The two datasets were combined by this university and analysed anonymously.
First, the data was limited to include three consecutive years of BA studies among enrolled and beginning students between 2007 and 2014; the data coverage extended until 2017 (N = 1,316). Enrolled students without any registered achievements during the three years (16 students) and students with no cumulative change in credits from the 1st year onwards (37 students) were excluded from the data. No cumulative change in credits might indicate a drop-out and did not provide a coherent overall view for the analysis of the three-year BA studies. Students who had not received assessments on a scale of 1-5 (11 students) were excluded from the data. One student with no grade in mother tongue in the ME and one student with incoherent information on the year of enrolment (−3 gap years, most probably a registration error) were excluded. Extreme outliers at both the lower and upper ends of the 1st, 2nd and 3rd study years (36 students) were not examined because their credits were very different from the rest of the data (Field 2018), which may violate the assumptions (Tabachnick and Fidell 2007). After these restrictions and exclusions, the study population included 1,214 students, 83% of whom are women.
Dependent variables
Study progress was defined based on study credits gained year-specifically and over the three-year period of the BA. Students' study success in the courses included in the curriculum was assessed based on grades 1-5 during the same timeframe.
Independent variables
The ME results were considered based on overall success (similarly to the high school grade point average, HSGPA, representing students' average success in a particular period of time). The tests selected for closer examination were the obligatory mother tongue, advanced syllabus English (henceforth called 'advanced English'), mathematics (advanced, basic, or none), psychology and health education. Advanced English was chosen because most SE students take the test, and psychology and health education because they are the most popular tests in the humanities and natural sciences group (Kunnari, Pursiainen, and Läärä et al. 2022). Mathematics and mother tongue were considered important to examine due to their central role in explaining AA (see e.g. Allen, Mattern, and Ndum 2019). We transformed the grades into numerical values according to the following standard practice: I = 0, A = 2, B = 3, C = 4, M = 5, E = 6, and L = 7 (Lindblom-Ylänne, Lonka, and Leskinen 1996; The Matriculation Examination Board 2017). See Table 1 for descriptive statistics on the dependent and independent variables.
The selected ME tests were coded into four categories.The first category included grades I, A and B (low achievers), category two grades C and M (average achievers) and category three grades E and L (high achievers).The fourth category included those not taking a test on the subject in question (except for their mother tongue).For study success and progress, the first category included those who succeeded or progressed with one standard deviation below the mean (low success/progress), while the third category included those who succeeded or progressed with one standard deviation above the mean (high success/progress).Students between categories one and three were placed in the second category (average achievers).The fourth category covered those not gaining any credits (ECTS) or not attaining any grades.Mathematics was coded into seven categories.The low achievers in basic mathematics were included in category one, the average achievers in category two and the high achievers in category three.The low achievers in advanced mathematics were included in category four, the average achievers in category five and the high achievers in category six.The applicants who took neither advanced nor basic mathematics were included in category seven.The reference category was average achievers, and for mathematics average achievers in basic mathematics.
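To make the coding scheme concrete, the sketch below recodes the ME grades into the categories described above. The numeric grade values follow the scoring given earlier (I = 0 … L = 7); the function names and the use of None for a test not taken are illustrative assumptions, not part of the original analysis.

```python
def code_me_test(grade):
    """Recode an ME test into the four categories described above:
    1 = low (I, A, B), 2 = average (C, M), 3 = high (E, L), 4 = test not taken."""
    if grade is None:
        return 4
    if grade in (0, 2, 3):      # I, A, B
        return 1
    if grade in (4, 5):         # C, M
        return 2
    return 3                    # E, L

def code_mathematics(grade, syllabus):
    """Seven categories: 1-3 for basic-syllabus low/average/high achievers,
    4-6 for advanced-syllabus low/average/high achievers, 7 = neither taken."""
    if grade is None:
        return 7
    base = code_me_test(grade)                  # 1, 2 or 3
    return base if syllabus == "basic" else base + 3
```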
The overall ME results and the ME tests selected for closer examination correlated with each other (see Table 2).Also, the year-specific AA each correlated with the variable that considered three-year AA.These variables were considered to refer to the same constructs and were applied separately in different analyses (Field 2018).The correlations could not be computed for mathematics because the students had mostly taken either advanced or basic mathematics or neither of them.
Statistical analysis
Before conducting the statistical analysis, the relevant assumptions were tested. The large sample size was suitable for regression analysis and, after the exclusion of outliers, the values for skewness and homoscedasticity were within reference values (Tabachnick and Fidell 2007).
The distributions of the variables were summarised with descriptive statistics (RQ1). A simple regression analysis was applied to reveal the relationship between average success in the ME (grades 0-7) and three-year AA (RQ2). Zero stands for failed tests (N = 11). Not taking a test in the ME was not considered in this analysis. To answer the second and third research questions, multiple linear regressions with categorical predictors were applied (Field 2018) using the general linear model in SPSS. This allowed us to test the effects of several independent variables, individually and in combination, on a dependent variable. For RQ2, we examined the relationship of the categorical independent variables (mother tongue, advanced English, basic and advanced mathematics, psychology and health education) with year-specific AA in university. Similarly, for RQ3 we examined the relationship of the 1st- and 2nd-year study success and study progress with AA in the 3rd year.
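The authors ran these models in SPSS; purely as an illustration of the analysis structure, an equivalent specification in Python (statsmodels) could look like the sketch below. The data file and all column names are hypothetical, and the reference level 2 corresponds to the average-achiever category described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per student; column names are hypothetical placeholders.
df = pd.read_csv("students.csv")

# Simple regression: three-year study success on the overall ME grade (0-7).
simple = smf.ols("success_3yr ~ me_overall", data=df).fit()

# Multiple linear regression with categorical predictors (GLM-style),
# using the average-achiever groups as the reference categories.
model = smf.ols(
    "success_year1 ~ C(mother_tongue, Treatment(reference=2))"
    " + C(adv_english, Treatment(reference=2))"
    " + C(mathematics, Treatment(reference=2))"
    " + C(psychology, Treatment(reference=2))"
    " + C(health_education, Treatment(reference=2))",
    data=df,
).fit()
print(model.summary())
```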
Tests taken and grades attained in the ME
The first research question, on students' ME results, is answered with descriptive results. On average, students' ME backgrounds (see Table 3) were relatively similar. Other tests in the humanities and natural sciences were taken by at most 17%, intermediate Finnish or Swedish by 78%, advanced syllabus languages by 0-6% and basic syllabus languages by 0-10%. The average overall success in the ME was 4.6. The standard deviations (1.0 to 1.4) indicate that there was considerable variation for each examined ME test.
Relationship between prior knowledge and year-specific AA
We answered the second research question to understand the relationship between overall and subject-specific ME results and AA. A significant regression equation was found for three-year study success (F(1, 1212) = 212.412, p < .001), with an R² of .149. Students' predicted three-year study success was equal to 2.849 + 0.198 × (overall ME result), with the ME results measured in grades 0-7. Students' three-year study success thus increased by 0.198 with each one-grade increase in the average grade of the overall ME results.
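For illustration, a hypothetical student with the sample's average overall ME grade of 4.6 (reported above) would have a predicted three-year study success of roughly

$$\hat{y} = 2.849 + 0.198 \times 4.6 \approx 3.76$$

on the 1-5 course grading scale.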
The relationship of subject-specific ME results with year-specific AA is reported in Table 4. The ME results explained 15% of the variation in 1st-year study success, 12% of 2nd-year study success and 7% of 3rd-year study success. In the 1st year, all of the examined ME tests were statistically significant, in the 2nd year all but advanced English, and in the 3rd year only mother tongue and mathematics. In general, the results show that, in relation to the average performers, the higher the success in all the examined ME results (for mathematics, in advanced mathematics), the higher the grades with which students succeeded on average throughout the BA. Mother tongue and average or high success in advanced mathematics had the highest beta values. These values decreased in comparison to the reference group after the 1st year.
Regarding study progress, a significant regression equation was found for three-year study progress (F(1, 1212) = 10.38, p < .001), with an R² of .008. Students' predicted three-year study progress was equal to 154.65 + 4.006 × (overall ME result), with the ME results measured in grades 0-7. Students' three-year study progress thus increased by 4.00 ECTS with each one-grade increase in the average grade of the overall ME results. The relationship of subject-specific ME results with year-specific AA is reported in Table 5. The year-specific examinations of the tests chosen for closer examination revealed that the R² remained low throughout the BA. Mathematics and health education in the 1st year and mathematics in the 2nd year were statistically significant. In the 3rd year, none of the examined ME tests were statistically significant. In relation to the reference group, students gained more credits throughout the BA if they succeeded with high grades in advanced mathematics, psychology, and health education. Low achievers in health education progressed better in the 2nd and 3rd years in relation to the reference group. Also, low achievers in psychology progressed better in the 2nd year compared to the reference group.

Table 4. Bachelor's degree students' study success in the 1st, 2nd and 3rd year by tests taken and grades attained in selected matriculation examination tests by level of performance. Point estimates (beta) and standard errors (SE) of the regression coefficients with 95% confidence intervals (CI) in contrast to the reference category as defined by average success, and for mathematics, average success in B-level mathematics.
For the 2nd-year study success, mother tongue, mathematics, and the 1st-year study success were statistically significant variables. For the 3rd-year study success, the ME together with the 1st- and 2nd-year study success explained 20% of the variation, the statistically significant variables being mother tongue and the 1st- and 2nd-year study success. For study progress, the ME results together with the 1st- and 2nd-year study progress increased the R² to 7% for both the 2nd and 3rd years. For the 2nd-year study progress, mathematics and the 1st-year study progress were statistically significant, while for the 3rd-year study progress, the 1st-year study progress and the 2nd-year study progress were statistically significant. Changes in the parameter estimates (B, SE, and 95% CI) were minor, and the main pattern was that higher success or progress in the independent variables was related to higher success or progress in the 2nd and 3rd study years.
Discussion
First, the descriptive results showed that besides the test in obligatory mother tongue, the most frequently taken tests were advanced English, basic mathematics, psychology, and health education.The highest average grades were attained in these tests, except for advanced English.The entry characteristics were similar to those accepted nationally for educational sciences programmes (Kunnari, Pursiainen, and Läärä et al. 2022).
Second, the linear regression analysis showed that the overall ME results explained approximately 15% of study success and less than 1% of study progress in the three-year BA studies. Both results were statistically significant; however, the coefficients of determination (referring to the linear regression result, R²) remained rather low, which is in line with international research (Richardson, Abraham, and Bond 2012). Research on the ME has produced similar results for the correlations (Kupiainen, Marjanen, and Ouakrim-Soivo 2018) and coefficients of determination (Mankki, Räihä, and Joutsenlahti 2018). Multiple linear regression analysis was applied to examine the relationship of the ME results with the three-year BA year-specifically. For study success in the BA, the results show that higher success in all the examined ME tests was related to higher success throughout the BA. This result is in line with research by, for example, Corcoran and O'flaherty (2017). For study progress, high success in advanced mathematics, psychology, and health education was related to better progress throughout the BA. This is in line with, for example, Kleemola and Hyytinen's (2019) subject- and field-specific research results. Also, even low success in psychology was indicative of better progress in the 2nd year, and low success in health education of better progress in the 2nd and 3rd years.
These results could be indicative of prior knowledge that provides the readiness to acquire new knowledge and, hence, supports progress in the BA (Diseth, Pallesen, and Brunborg et al. 2010). The novel insight into AA in this study is the use of the ME results on two levels (overall and subject-specific), the context (Finnish universities), and the longitudinal approach through a transition phase from SE to the educational sciences. It is also noteworthy that the meaning of grades is different for each individual throughout the study path (Cachia, Lynam, and Stock 2018; Wigfield and Cambria 2010). Also, on a contextual level, grades are employed differently depending on the country's education system (Powell & Arora-Jonsson, 2021). Hence, it was fruitful to study the educational sciences due to their highly selective student admission. It is important to highlight that these results provide additional insight into prior research on AA from the perspective of temporal comparison (Eccles and Wigfield 2020) on an individual level (De Clerq et al., 2021). Considering the vast strands of research on educational transitions and AA (Noyens, Donche, and Coertjens et al. 2017), these results reinforce the importance of subject-specific prior knowledge in psychology and health education (Diseth, Pallesen, and Brunborg et al. 2010; Office of Qualifications and Examinations Regulation 2012), which reflects individuals' interests and learning more specifically than an overall average of success. Prior research has shown that subject-specific prior knowledge in the ME relates to admission to the educational sciences (Kunnari, Pursiainen, and Läärä et al. 2022) and to AA among law students (Kleemola and Hyytinen 2019). Together with the finding that the ME is a better predictor of AA than the entrance tests (Mankki, Räihä, and Joutsenlahti 2018), we consider the subject-specific results, along with mother tongue and mathematics (De Koning, Loyens, and Rikers et al. 2012), to be good indicators of AA in the educational sciences.

Also, for RQ3, multiple linear regression analysis was applied. The main result is that towards the 3rd year of study, the share of statistically significant variables in the ME declined. For study success in the 2nd year, only mother tongue and mathematics remained statistically significant when the statistically significant 1st-year study success was added to the analysis. For study success in the 3rd year, the only significant variables were mother tongue along with the 1st- and 2nd-year study success. For the 2nd-year study progress, mathematics and the 1st-year study progress were statistically significant, while for study progress in the 3rd year, only the 1st-year study progress and the 2nd-year study progress were statistically significant. These results are in line with prior research, see e.g. Allen, Mattern, and Ndum (2019). Despite the weak coefficient of determination between the ME and study progress, subject-specific aspects of progression were identified. Following Westrick, Lee, and Robbins et al.'s (2015) suggestion, it was meaningful to examine the years between entry into university and graduation. Our results clarify the role of SE outcomes and BA studies in explaining especially study success among students in the educational sciences (cf. Corcoran and O'flaherty 2017).
A few limitations can be found in this study. Due to the lack of courses graded 1-5 in the 1st year, we could not carry out a detailed analysis of students' study success. Courses graded as passed, failed, accomplished, or with other non-numerical indicators throughout the BA studies were not included in this examination. The data also lacked information on precise points of graduation; this was not examined. A subject's level of difficulty and the type of knowledge it provides should be examined to understand their share in explaining AA and learning in the BA.
Table 1 .
Descriptive statistics of independent and dependent variables.
1 IV = independent variable, DV = dependent variable. 2 Based on numerical scoring of the grades from lowest to highest: I = 0, A = 2, B = 3, C = 4, M = 5, E = 6 and L = 7 (The Matriculation Examination Board 2017). 3 Includes only grades 1-5. 4 Missing values represent those students who did not complete the test in question. 5 Missing values for GPA represent courses evaluated based on passed/fail or other non-numerical assessments.
Table 6 .
Bachelor's degree students' AA in the 2nd and 3rd year by study success and study progress in the 1st and 2nd year. Point estimates (beta) and standard errors (SE) of the regression coefficients with 95% confidence intervals (CI) in contrast to the reference category as defined by average AA. 2nd-year study success: adjusted R² 13%; 3rd-year study success: adjusted R² 18%. | 2023-07-11T00:17:23.548Z | 2023-06-19T00:00:00.000 | {
"year": 2023,
"sha1": "480a1039e6abeb673df87ac567d4c49d03a1d094",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/0309877X.2023.2222263?needAccess=true&role=button",
"oa_status": "CLOSED",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "d3f1faa9ca1880e5ec42a415cc17809f96f3ee24",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
216430344 | pes2o/s2orc | v3-fos-license | Effects of rumination on unwanted intrusive thoughts: A replication and extension
Studies indicate that rumination might play a role in obsessive–compulsive disorder. In a previous experimental study, rumination about an unwanted intrusive thought (UIT) maintained the urge to neutralize this thought. We sought to replicate and extend these findings with measures of behavioral and mental neutralizing. Additionally, we investigated possible mechanisms that might be involved in the effects of rumination on the UIT. We activated a UIT by asking students (N = 105) to write down a sentence stating that they wished a loved person would die in a car accident. Participants were randomly allocated to rumination about the UIT, rumination about negative mood, or distraction. As predicted, rumination about the UIT maintained the urge to neutralize the UIT, relative to rumination about negative mood and distraction. In addition, rumination about the UIT also maintained distress associated with the UIT compared to rumination about negative mood and distraction. The effects of rumination did not extend to behavioral or mental neutralizing. UIT frequency and vividness were unaffected by rumination. The present findings strengthen the confidence that rumination contributes to the maintenance of UITs.
Introduction
Individuals with obsessive-compulsive disorder (OCD) typically experience distressing obsessive thoughts (Rachman, 1997, 1998; Salkovskis, 1985), defined as "recurrent and persistent thoughts, urges, or images that are experienced, at some time during the disturbance, as intrusive and unwanted, and that in most individuals cause marked anxiety or distress" (American Psychiatric Association, 2013, p. 237). Once an obsession enters the mind, there are multiple behavioral and mental strategies an individual with OCD can use to respond (Freeston & Ladouceur, 1997). Analyzing the thought, its reasons, and its causes has been identified as one of these strategies (Freeston & Ladouceur, 1997). This indicates that obsessive thoughts might be followed by rumination about the obsessive thoughts. For example, individuals diagnosed with OCD might ruminate about why they cannot get rid of their obsessive thoughts, what reasons caused the obsessive thoughts in the first place, and what might happen if the obsessive thoughts persist.
Rumination is defined as passive repetitive thinking about symptoms of distress and its causes, meanings, and consequences (Nolen-Hoeksema, 1991). Although originally researched in the area of depression (Nolen-Hoeksema, 1991;Nolen-Hoeksema et al., 2008), rumination has been associated with a number of mental disorders, such as anxiety (Harrington & Blankenship, 2002), alexithymia (Di Schiena et al., 2011), eating disorders (Naumann et al., 2015), and sleeping disorders (Carney et al., 2013). In addition, Nolen-Hoeksema et al. (2007) demonstrated that rumination might convey a risk of the onset of psychopathology. In their study, prior rumination increased the risk of subsequent onset of major depression, recurrent binge eating, and substance abuse over a 3-year period.
A number of studies with nonclinical individuals have demonstrated associations between rumination and obsessive-compulsive (OC) symptoms. For instance, Grisham and Williams (2009) and Wahl, Ertle et al. (2011) showed that the tendency to ruminate was positively correlated with OC symptoms in student samples. Studies with clinical samples indicated that ruminative thoughts occur as frequently as obsessive thoughts in individuals diagnosed with OCD (Wahl, Schönfeld et al., 2011). Dar and Iqbal (2015) found positive correlations between rumination and OC symptoms in a mixed sample of individuals diagnosed with OCD or generalized anxiety disorder. Rumination was related to the unacceptable thoughts/neutralization domain of OCD when negative affect was controlled for in an unselected treatment-seeking sample (Raines et al., 2017). Overall, these studies indicate an association between rumination and OCD symptoms and raise the question of whether and in what way they might influence each other. Wahl et al. (2019) investigated the immediate rumination effects on the distress associated with an unwanted intrusive thought (UIT), the urge to neutralize it, depressed mood, and the frequency of this thought in an experimental study. Rumination about the UIT led to an attenuated decrease of the urge to neutralize the UIT in comparison to rumination about negative mood and distraction. This means that rumination might contribute to the maintenance of intrusive thoughts by reducing the natural decrease of the urge to act upon them. The authors did not find an effect of rumination about the UIT on the distress, depressed mood, or frequency of the UIT.
Several mechanisms by which rumination might influence the persistence of intrusive thoughts have been discussed. Raines et al. (2017) suggested that rumination might promote the misinterpretation of naturally occurring intrusive thoughts by changing the appraisals of these thoughts. This idea was supported in the previous experimental study (Wahl et al., 2019). Rumination about the UIT resulted in a stronger belief that the thought might come true relative to rumination about negative mood and distraction. This suggests that ruminating about one's UITs might strengthen dysfunctional appraisals of the UIT. In addition, rumination could result in the persistence of negative mood. Negative mood increases the frequency of intrusive thoughts (Clark, 2002;Reynolds & Salkovskis, 1991) resulting in a vicious circle of negative mood and intrusive thoughts (Wahl, Schönfeld, et al., 2011). Moreover, Grisham and Williams (2009) suggested that rumination might fuel the frequency of intrusive thoughts by increasing their accessibility as a result of a quicker spread of activation in the semantic network. Previous research has demonstrated a clear association between rumination and frequency of intrusive cognitions such as intrusive memories (James et al., 2016;Luo et al., 2013;White & Wild, 2016). For example, White and Wild (2016) showed that individuals who were trained to adopt an abstract mode of processing-such as rumination-in response to a traumatic film reported more intrusive memories than individuals who were trained to adopt a concrete processing style. Abstract processing was defined as rumination focused on the reasons, meanings, and consequences of the traumatic event and concrete processing was characterized by focusing on the concrete details of the event. In another study, rumination clearly correlated positively with intrusion frequency (Zetsche et al., 2009). Taken together, these results lead us to expect rumination about a UIT to affect not just the urge to neutralize the UIT but also the associated distress, depressed mood, frequency, and negative appraisals of the thought.
Two further candidates that might be involved in the effects of rumination on UITs are trait rumination and thought-action fusion (TAF). Trait rumination refers to the tendency to ruminate in daily life (Just & Alloy, 1997; Nolen-Hoeksema, 1991, 2000). Studies indicate that higher levels of trait rumination are associated with greater negative emotional outcomes (Moberly & Watkins, 2006; Watkins, 2004). TAF is the belief that experiencing an unacceptable thought is morally equivalent to acting according to the thought or that mere thinking about a particular event makes it more likely to happen (Rachman, 1997, 1998; Rassin et al., 2001; Salkovskis, 1985; Shafran et al., 1996). The effects of rumination on UITs might be particularly pronounced for individuals who have a strong tendency to ruminate or who strongly endorse beliefs about TAF. In this way, trait rumination and TAF might moderate the effects of rumination on UITs.
Finally, rumination might change the vividness of a UIT. In one study, a majority (81%) of individuals with OCD reported having mental images (Speckens et al., 2007). These images were mainly visual and were experienced as distressing and vivid (Lipton et al., 2010). Intrusive images in OCD seem to occur more frequently than in anxiety disorders, are less often associated with past memories, and are typically viewed from a person's own vantage point rather than from an observer's perspective (Lipton et al., 2010). Rumination has been identified as one of the main triggers of intrusive images (Birrer et al., 2007), and one can assume that rumination affects the vividness of UITs (Birrer et al., 2007;Pearson et al., 2013).
The present study seeks to further clarify the imminent effect of rumination on UITs by replicating and extending previous findings by Wahl et al. (2019). The first aim was to replicate their main finding (rumination on a UIT attenuates the decline in the urge to neutralize) using an identical experimental paradigm in an independent sample. The second aim was to extend these findings by including measures of actual behavioral and mental neutralizing. While Wahl et al. (2019) investigated the urge to neutralize, an even stronger indication of the effects of rumination on UITs would be changes in actual neutralization. Additionally, we investigated several possible mechanisms that might be related to the influences of rumination on UITs.
We hypothesized that, relative to distraction and rumination about negative mood, rumination about a UIT would attenuate the natural waning of the urge to neutralize the UIT (replication of previous main result, Hypothesis 1). We additionally hypothesized that distress, depressed mood, and UIT frequency would decrease to a smaller degree after rumination about a UIT than after rumination about negative mood and distraction (Hypothesis 2a, b, and c). To extend the previous findings, we predicted that actual behavioral and mental neutralizing would be more pronounced after rumination about a UIT than after rumination about negative mood and distraction (Hypothesis 3a and b).
Concerning the potential mechanisms, we predicted that trait rumination and TAF, respectively, would moderate the relation between rumination about a UIT and the urge to neutralize (Hypothesis 4a and b). Specifically, we expected that the higher the level of trait rumination, the stronger the effects of rumination about a UIT on the urge to neutralize would be. Similarly, we predicted that the higher the TAF beliefs, the stronger the rumination about a UIT would affect the urge to neutralize. Furthermore, we hypothesized that rumination about a UIT would increase the negative appraisals of the UIT in comparison to rumination about negative mood and distraction (Hypothesis 5). Finally, we examined whether rumination about a UIT would affect the UIT vividness in comparison to rumination about negative mood and distraction in an exploratory analysis.
Method

Participants
All participants (N = 105) were undergraduate psychology students recruited at the University of Basel (M age = 22.35 years, SD = 4.94; 89 females, 16 males). For their participation, they received course credit. During the experimental manipulation, the participants were randomly allocated to rumination about a UIT (UIT rumination group; n = 34), rumination about negative mood (mood rumination group; n = 35), or distraction (n = 36). The experimental groups did not differ in sociodemographic variables, positive or negative affect, depressive symptoms, OC symptoms, degree of brooding, or degree of TAF, all ps > .05 (see Table 1 for means and standard deviations [SDs]). The study was reviewed and approved by the institutional review board of the University of Basel (approval number: IRB 009-16-1).
Measures
Standardized questionnaires. The Positive and Negative Affect Schedule (PANAS; Watson et al., 1988; German version: Krohne et al., 1996) is a measure of positive (10 items) and negative (10 items) affect with good reliability and validity (Crawford & Henry, 2004;Krohne et al., 1996). In this study, we measured the affect experienced within the last 12 months.
Cronbach's α was high in the current sample (for positive affect, α = .81; for negative affect, α = .84).
The Obsessive-Compulsive Inventory-Revised (OCI-R; Foa et al., 2002;German version: Goenner et al., 2007) is an 18-item self-report measure of OC symptoms consisting of six subscales (washing, obsessions, hoarding, ordering, neutralizing, and checking). The scale possesses good psychometric properties (Goenner et al., 2007, 2008). In the current sample, the internal consistency of the total scale was high with Cronbach's α = .85. The Beck Depression Inventory (BDI; Beck et al., 1979;German version: Hautzinger et al., 1995) is a 21-item self-report measure of depressive symptoms. The BDI is a widely used instrument in research with good reliability and validity (Beck et al., 1988). In this sample, Cronbach's α = .84.
The Ruminative Responses Scale (RRS; Nolen-Hoeksema & Morrow, 1991;German version: Kuehner et al., 2007) is a 22-item self-report questionnaire that measures trait rumination. For this study, we used only the 5-item brooding scale (RRS-brood), which measures unproductive self-focused responses to sad mood. We chose this subscale because it is not contaminated by items focusing on depression (Treynor et al., 2003). For our study, we used RRS-brood to operationalize trait rumination. Psychometric properties of the brooding subscale are satisfactory (Treynor et al., 2003). Cronbach's α was acceptable in this sample (α = .70).
The Thought-Action Fusion Scale (TAFS; Shafran et al., 1996;German version: Hansmeier et al., 2014) is a self-report measure of TAF consisting of two subscales: TAF-moral (12 items) and TAF-likelihood (7 items). The former focuses on a morality bias and the latter on a probability bias. The TAF-likelihood further differentiates between negative consequences to oneself (TAF-LS) and to others (TAF-LO). The psychometric properties of the scale are good (Hansmeier et al., 2014;Meyer & Brown, 2013;Rassin et al., 2001). In our study, Cronbach's α for the total TAFS score was .89. The Revised Obsessive Intrusions Inventory Part 2 (ROII Part 2; Purdon & Clark, 1993, 1994) was used to measure the appraisals of the activated UIT. We used 8 of the 10 appraisal items to assess unpleasantness of the UIT, a sense of guilt associated with the UIT, worry that the UIT would come true, unacceptability of the UIT, perceived likelihood of the UIT coming true, the importance of controlling the UIT, perceived harm or danger associated with the UIT, and perceived responsibility for the UIT coming true. We employed this measure to get an indication of the similarity in appraisal ratings between the induced UIT in our study and appraisals in individuals with OC symptoms. All items were rated on a 5-point Likert-type scale ranging from 0 (not at all) to 4 (extremely).
Assessment of distress, urge to neutralize, depressed mood, and vividness of UITs. Participants were asked to rate distress ("How distressed are you right now?"), urge to neutralize ("To what degree do you experience an urge to neutralize the UITs, that is, to undo the intrusive thought or to do something to prevent something bad happening?"), depressed mood ("How depressed are you right now?"), and vividness of the UITs ("Please indicate how vivid your intrusive thoughts are, that is, to what degree they appear as vivid images in your mind.") on visual analog scales (VASs) ranging from 0 (very low/not at all vivid) to 9 (very high/extremely vivid).
Assessment of UIT frequency. We assessed UIT frequency with a smartphone counter app. Participants were instructed to press the "+" volume button whenever the UIT occurred. The display was covered so that participants could not see the counter app.
Manipulation checks. To check whether the experimental manipulation worked, the participants were asked to rate their concentration ("What percentage of time were you able to concentrate on the sentences shown?"), degree of self-focus ("While the statements were presented, to what degree were you thinking about yourself?"), and degree of UIT focus ("While the statements were presented, to what degree were you thinking about causes, meaning, and consequences of your intrusive thoughts or images?") on VASs ranging from 0% to 100%. If the manipulation was effective, participants in the distraction condition should score lower on both self-focus and UIT-focus variables in comparison to those in the rumination groups. In addition, participants in the UIT rumination group should be less self-focused and more UIT focused than those in the mood rumination group.
Assessment of behavioral and mental neutralizing. To assess behavioral and mental neutralizing strategies, we modified items of previous work in that area (Freeston et al., 1991;Goodman et al., 1989;Purdon & Clark, 1993, 1994;Rachman et al., 1996), supplemented by items about leaving the room and inwardly calming oneself down. Items were ordered so that they assessed first behavioral (11 items) and then mental (9 items) forms of neutralizing. Examples of behavioral neutralizing are ripping the paper with the sentence or changing the name. Examples of mental forms of neutralizing are thinking about something positive or saying a prayer. First, the experimenter observed whether the participant had performed any behavioral neutralizing strategies. Subsequently, the experimenter asked the participant whether he or she had used any mental neutralizing strategies. The presence of behavioral or mental neutralizing was rated by the experimenter as either 0 (absent) or 1 (present). Given that the participants mostly performed only one behavior to neutralize the UIT, we analyzed behavioral neutralizing as a dichotomous variable (performed vs. did not perform). The final score for mental neutralizing was the number of different strategies used to neutralize the UIT per participant (sum score), since all participants but one used at least one mental neutralizing strategy. For a detailed description of the neutralizing assessment, see the Appendix.
UIT activation. The UIT activation was identical to that of the previous study (Wahl et al., 2019) and was based on a previous paradigm used to study characteristics of neutralizing in nonclinical samples (van den Hout et al., 2001, 2002). First, we provided the participants with a pen and a sheet of paper and asked them to think of a loved person and to get a vivid image of that person in their mind. Once they had a clear picture in their mind, they were instructed to write down and subsequently to read aloud the following sentence: "I wish that [loved person] would die today in a horrible car accident." Three participants refused to write down the sentence and two did not read it aloud.
Thought monitoring. During the thought-monitoring phases, participants were instructed to observe their own thoughts for 5 min and to assess the UIT frequency using the counter app, in the same way as in Wahl et al. (2019). The instructions were as follows: During the next 5 min, you may think about anything you like. You might think of your target unwanted intrusive thought, but you do not have to. However, if at any time you think of your target unwanted intrusive thought, please record the occurrence of each thought by pressing the "+" key on the smartphone once for each occurrence. It is important that you continue in the same way for the full 5 min. (adapted from Marcks & Woods, 2005)

Experimental manipulation. Following Wahl et al. (2019), we randomly assigned participants to the three groups: UIT rumination, mood rumination, or distraction. During the experimental manipulation, they were instructed to focus their attention on statements presented to them on a computer screen for 8 min. Each experimental condition used 28 statements to induce a designated thinking style or distraction. The participants could use the mouse to navigate through the statements.
To induce rumination about the UIT, we used a modified version of the rumination task by Morrow and Nolen-Hoeksema (1990), which was identical to the one used by Wahl et al. (2019). The participants assigned to this condition were asked to focus on the reasons for, implications of, and possible consequences of their activated UIT (e.g., "Think about: the possible consequences of having intrusive thoughts or images"). In the rumination about negative mood condition, the participants had to focus on reasons for, meanings of, and possible consequences of their negative mood (e.g., "Think about: the way you feel inside"). This induction was based on the depressive rumination paradigm (Blagden & Craske, 1996;Huffziger & Kuehner, 2009;Lyubomirsky et al., 2003;Morrow & Nolen-Hoeksema, 1990). In the distraction condition, the participants were asked to distract themselves by thinking about everyday objects and situations (e.g., "Think about: raindrops sliding down a window pane"). This condition was based on the original distraction task (Huffziger & Kuehner, 2009;Morrow & Nolen-Hoeksema, 1990).
Procedure
The experimental procedure of the study is depicted in Figure 1. To administer the study, we used the online survey tool Unipark (Questback GmbH, 2013). All participants gave their written informed consent and were tested individually in a quiet room, seated in front of a computer screen. At the beginning, participants were randomly allocated to one of the three experimental groups and completed a set of standardized questionnaires (PANAS, OCI-R, BDI, RRS-brood, TAFS), followed by the activation of the UIT. The activation of the UIT was immediately followed by the first ratings on the VAS (distress, urge to neutralize, and depressed mood, at T1). Participants were subsequently asked to monitor and register their thoughts for 5 min (baseline), followed by the second ratings (distress, urge to neutralize, and depressed mood) on the VAS (T2). Participants were then asked to follow one of the three instructions of the experimental manipulation for 8 min, succeeded by a third rating of the VAS scales (distress, urge to neutralize, depressed mood, and UIT vividness, at T3). The study continued with the second thought-monitoring phase (return to baseline), followed by the fourth VAS ratings at T4 (distress, urge to neutralize, and depressed mood) and the completion of the ROII items and manipulation checks. Finally, behavioral and mental neutralizing were assessed. The procedure was identical to the procedure used in the study by Wahl et al. (2019) with the addition of three components: TAF was assessed as part of the questionnaire set at the beginning of the study, UIT vividness was assessed as part of the VAS ratings at T3, and behavioral and mental neutralizing were assessed at the end of the study.
Statistical analysis
Data were analyzed with IBM SPSS Statistics 23. First, we used box plots and z scores to identify outliers and detected four in UIT frequency. To reduce the bias, we applied Winsorizing by replacing these outliers with the next highest score that was not an outlier (Field, 2013). Next, we examined the assumption of normality, allowing small violations because our sample size included more than 30 participants per group (Field, 2013). To test whether the experimental groups differed in sociodemographic variables, clinical characteristics, or degree of concentration (first manipulation check), we used one-way analyses of variance (ANOVAs). We performed a χ² test to compare the groups on gender. To investigate whether the experimental groups differed in the expected directions during the experimental manipulation (second manipulation check), we conducted a 3 × 2 mixed-model ANOVA with group (UIT rumination, mood rumination, and distraction) as between-subjects factor and content of thinking (self-focus vs. UIT focus) as a within-subject factor.
We additionally carried out this analysis just for the rumination groups with a 2 × 2 mixed-model design, excluding distraction.
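As an aside on the outlier treatment described above, the following is a minimal sketch of z-score-based Winsorizing in Python; the authors worked in SPSS, so this fragment, the |z| > 3 cutoff, and the example counts are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

def winsorize_outliers(values, z_cut=3.0):
    """Replace z-score outliers with the most extreme value that is
    not itself an outlier ('the next highest score', cf. Field, 2013)."""
    x = np.asarray(values, dtype=float)
    z = np.abs(stats.zscore(x))
    ok = z < z_cut
    out = x.copy()
    out[(~ok) & (x > x[ok].max())] = x[ok].max()   # cap high outliers
    out[(~ok) & (x < x[ok].min())] = x[ok].min()   # cap low outliers
    return out

# Hypothetical UIT frequency counts with one extreme value
uit_counts = [0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 38]
print(winsorize_outliers(uit_counts))   # the 38 is capped at 6
```

Replacing rather than deleting the extreme values keeps the sample size constant while limiting the leverage of a few very high counts.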
To examine Hypotheses 1 and 2 (a, b, and c), the outcome variables (distress, urge to neutralize, and depressed mood) were calculated as the mean difference between T2 and T3. We focused on these two time points as the crucial interval since they were immediately before and after the experimental manipulation and could also be directly compared with Wahl et al.'s (2019) study. The outcome variable UIT frequency was calculated as the mean difference between the two thought-monitoring phases (baseline and return to baseline). Planned contrasts were conducted comparing the UIT rumination group with the combined mood rumination and distraction groups. Where necessary, we adjusted for heterogeneity of variances. To analyze the effect of rumination about the UIT on behavioral and mental neutralizing (Hypothesis 3a and b), we conducted a logistic regression and a Welch test (due to slight heterogeneity of variances on this variable), respectively.
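To make the contrast analysis concrete, here is a hedged sketch of a pooled-error planned contrast on T2−T3 difference scores, comparing the UIT rumination group against the two control conditions combined. The change scores and contrast weights are hypothetical, and the sketch omits the heterogeneity-of-variances adjustment the authors applied where necessary.

```python
import numpy as np
from scipy import stats

def planned_contrast(groups, weights):
    """Planned contrast with a pooled within-group error term:
    t = (sum_i c_i * mean_i) / sqrt(MS_within * sum_i c_i^2 / n_i)."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_within = int(ns.sum()) - len(groups)
    ms_within = ss_within / df_within
    c = np.asarray(weights, dtype=float)
    estimate = c @ means
    se = np.sqrt(ms_within * np.sum(c ** 2 / ns))
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df_within)
    return t, df_within, p

# Hypothetical T2 - T3 change scores in urge to neutralize (larger = steeper decline)
uit_rumination = np.array([0.5, 1.0, 0.0, 1.5, 0.5, 1.0])
mood_rumination = np.array([2.0, 1.5, 2.5, 1.0, 2.0, 1.5])
distraction = np.array([3.0, 2.5, 2.0, 3.5, 2.5, 3.0])

# Contrast: UIT rumination vs. the two control conditions combined
t, df, p = planned_contrast([uit_rumination, mood_rumination, distraction],
                            weights=[2, -1, -1])
print(f"t({df}) = {t:.2f}, p = {p:.4f}")
```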
To investigate the moderating effects of trait rumination and TAF (Hypothesis 4a and b), we conducted moderation analyses using PROCESS (Hayes, 2017). For the interactions, we report percentile bootstrap 95% confidence intervals (CIs), because this method is more robust than the standard CI (Field, 2013). Significant moderation effects were followed by a simple slopes analysis (Aiken & West, 1991;Rogosa, 1981). This analysis looks at the relation between the predictor and outcome at 1 SD above and below the mean value of the moderator. Specifically, we looked at the effect of rumination about the UIT on the urge to neutralize from T2 to T3 at lower (1 SD below mean), average (mean), and higher (1 SD above mean) levels of trait rumination and TAF, respectively. Urge to neutralize from T2 to T3 was calculated as the mean difference between these two time points. To analyze the effect of rumination about the UIT on UIT appraisals (Hypothesis 5), we carried out a multivariate ANOVA, which tests the difference between groups across multiple outcomes simultaneously and therefore controls better for multiple testing (Field, 2013). Last, changes in UIT vividness were analyzed with an ANOVA, followed by Tukey's post hoc test.
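Outside the PROCESS macro, the moderation logic can be sketched as an ordinary regression with an interaction term followed by simple slopes at ±1 SD of the moderator. Everything below (the simulated data, the coefficient values, and the use of statsmodels OLS without bootstrap confidence intervals) is an illustrative assumption rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 70
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),      # 0 = distraction, 1 = UIT rumination (hypothetical)
    "taf": rng.normal(30, 10, n),        # hypothetical TAF total scores
})
# Simulated T2 - T3 change in urge to neutralize (purely for illustration)
df["urge_diff"] = (2.0 - 1.2 * df.group
                   - 0.05 * (df.taf - df.taf.mean()) * (1 - df.group)
                   + rng.normal(0, 1, n))

df["taf_c"] = df.taf - df.taf.mean()     # mean-center the moderator
model = smf.ols("urge_diff ~ group * taf_c", data=df).fit()
print(model.params)                       # 'group:taf_c' is the moderation term

# Simple slopes: effect of group at low (-1 SD), average, and high (+1 SD) TAF
for label, level in [("-1 SD", -df.taf_c.std()), ("mean", 0.0), ("+1 SD", df.taf_c.std())]:
    slope = model.params["group"] + model.params["group:taf_c"] * level
    print(f"effect of UIT rumination vs. distraction at TAF {label}: {slope:.2f}")
```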
As effect sizes, we report Pearson's correlation coefficient (r; planned contrasts), partial eta-squared (η²p; one-way independent and mixed ANOVAs), odds ratios (ORs; logistic regression), and Cohen's d (Tukey's post hoc test). Level of significance was set at .05.
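For reference, these effect sizes follow standard conversion conventions; the formulas below are a sketch of those conventions rather than equations printed in the paper, and the odds ratio is assumed to be the exponentiated logistic regression coefficient.

```latex
r = \sqrt{\frac{t^{2}}{t^{2} + df}}, \qquad
\eta_{p}^{2} = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}, \qquad
OR = e^{b}, \qquad
d = \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{\text{pooled}}}
```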
Results

Manipulation Check
First, we analyzed whether participants were able to concentrate on the statements provided during the experimental manipulation. Results showed that on average, all participants were able to focus to a high degree on the statements, with no significant group differences, F(2, 102) = 0.36, p = .696 (see Table 2 for all manipulation check means and SDs). Next, we investigated differences between groups on the content of thinking. We expected that distraction would lead to lower scores on self-focus and UIT focus in comparison to rumination about the UIT and rumination about negative mood. In addition, we predicted that the rumination groups would differ from each other in the content being ruminated on (self-focus vs. UIT focus), with rumination about the UIT being more UIT focused and rumination about negative mood being more self-focused. A significant interaction between the experimental group and the content of rumination with the expected patterns suggests that three distinct thinking styles were successfully induced, F(2, 102) = 6.41, p = .002, η²p = .11. The interaction remained significant when excluding distraction from the analysis, F(1, 67) = 8.47, p = .005, η²p = .11, meaning that the two types of rumination differed in the expected direction (self-focus vs. UIT focus). In short, the experimental manipulation was successfully implemented.
Replication
Hypothesis 1: Effect of experimental manipulation on urge to neutralize. Table 3 presents means and SDs for urge to neutralize for each time point. The analysis focuses on the comparison between T2 and T3. Urge to neutralize showed an attenuated decline in those participants who had previously ruminated about their UIT compared to participants who had ruminated about negative mood and those who were distracted, t(81.58) = 2.74, p = .007, r = .29. For illustration and comparison with the study by Wahl et al. (2019), means and standard errors of the urge to neutralize are depicted graphically in Figure 2.
Hypothesis 2a, b, and c: Effects of experimental manipulation on distress, depressed mood, and UIT frequency. Table 3 presents means and SDs for distress, depressed mood, and UIT frequency. Similar to urge to neutralize, distress showed an attenuated decline from T2 to T3 in those participants who were in the UIT rumination group compared to participants who were in the mood rumination and distraction groups, t(102) = 2.45, p = .016, r = .24. The effect of rumination about the UIT on depressed mood was not significant, t(102) = 1.56, p = .121, r = .15. Figure 3 shows the effects for distress and depressed mood. With regard to UIT frequency, there was no significant difference between the experimental groups, t(102) = 0.45, p = .656, r = .04.
Additional analysis: Rumination about negative mood versus distraction. Visual inspection of the data (Figures 2 and 3) and the means in Table 3 suggest that the two rumination groups followed a similar pattern, that is, a reduced decline in comparison to distraction. Thus, we decided to conduct an additional contrast to test the differences between rumination about negative mood and distraction for statistical significance. Relative to distraction, rumination about negative mood maintained the urge to neutralize, t(60.84) = 3.42, p = .001, r = .40, as well as distress and depressed mood (see Table 3).
Mechanisms
Hypothesis 4a and b: Moderating effects of trait rumination and TAF.
When comparing rumination about the UIT with distraction, trait rumination did not moderate the effect of the experimental manipulation on the urge to neutralize, b = −0.28, percentile 95% CI [−0.637, 0.027], t = −2.25, p = .027, since the percentile bootstrap CI included zero. TAF moderated the relation between the experimental group and the urge to neutralize from T2 to T3, b = −0.09, percentile 95% CI [−0.155, −0.012], t = −2.33, p = .022, when comparing rumination about the UIT with distraction. When comparing rumination about negative mood with distraction, the moderation was not significant, b = −0.07, percentile 95% CI [−0.145, 0.010], t = −1.98, p = .051. Further analysis showed that the effect of TAF occurred only in the distraction group and not in the two rumination groups (see Table 4). Specifically, when TAF score increased, so did the mean difference in urge to neutralize from T2 to T3; that is, urge to neutralize decreased to a greater degree.
Hypothesis 5: Effect of experimental manipulation on UIT appraisals.
We did not find evidence for an effect of rumination about the UIT on the UIT appraisals, V = 0.15, F(16, 192) = 0.96, p = .504 (for means and SDs, see Table 3).
Exploratory analysis: Effect of rumination on UIT vividness. The effect of the experimental group on the UIT vividness was not significant, F(2, 102) = 2.51, p = .086, η²p = .05 (for means and SDs, see Table 3).
Discussion
The major goal of the study was to investigate whether rumination about a UIT reduces the natural decline of the urge to neutralize these thoughts relative to rumination about negative mood and distraction, thereby replicating findings by Wahl et al. (2019). Consistent with Hypothesis 1, rumination about the UIT attenuated the general decline of the urge to neutralize the UIT relative to rumination about negative mood and distraction. Interestingly, we observed a similar effect on the urge to neutralize for individuals who had ruminated about negative mood, relative to those in the distraction group. In other words, individuals who ruminated-irrespective of the content of rumination-experienced a reduced decline in the urge to neutralize compared to individuals who were distracted. Findings partially replicate results by Wahl et al. (2019) who used a novel experimental paradigm for the first time. This replication in an independent sample further strengthens our confidence in the assumption that rumination about a UIT is involved in the maintenance of the urge to act upon these thoughts.
In contrast to the previous study (Wahl et al., 2019), rumination about the UIT also attenuated the decline of distress associated with the UIT, relative to rumination about the negative mood and distraction (Hypothesis 2a). These findings suggest that rumination about the UIT is involved not only in the maintenance of the urge to neutralize but possibly also in the persistence of the distress experienced with the UIT. It is possible that the nonsignificant effect on distress found earlier represented a power problem, as Wahl et al. (2019) assessed a smaller sample. In addition, rumination about negative mood maintained both the distress associated with the UIT and depressed mood, relative to distraction. The present findings suggest that the two types of rumination exerted similar effects on the urge to neutralize and distress. The reasons for this discrepancy in the effects of rumination about negative mood between the previous study (Wahl et al., 2019) and the current study are unclear. Manipulation checks in both studies indicate that two distinct types of rumination were successfully induced to a similar degree, using identical methodology. The differences between rumination on UIT and rumination on negative mood seem to be particularly relevant for individuals diagnosed with OCD (Wahl, Schönfeld et al., 2011), and future studies should retain this distinction and investigate whether they differ in their effects on OC symptoms.
Neither the current nor the previous study (Wahl et al., 2019) found an effect of the experimental manipulation on the frequency of UITs (Hypothesis 2c). Whether this can be interpreted as a robust finding, meaning that the frequency of UITs is unaffected by previous rumination, or whether the findings are attributable to methodological difficulties such as the reliable and valid assessment of such transient phenomena as UITs remains to be seen in future studies, which could develop valid assessments of UITs that also differentiate between frequency and duration of UITs.
The second aim was to extend the previous findings by investigating whether actual behavioral and mental neutralizing are also affected by rumination. Although the effect was in the predicted direction (i.e., participants in the UIT rumination group performing behavioral neutralizing more often than participants in the distraction group), the experimental groups did not significantly differ in their effect on behavioral neutralizing (Hypothesis 3a). This means that participants in each group engaged in actual neutralizing behavior such as ripping the paper or crossing out the name of the loved person to a similar degree. Likewise, rumination about the UIT did not result in a higher frequency of mental neutralizing strategies compared to rumination about negative mood and distraction (Hypothesis 3b). Each group engaged in a high total number of mental neutralizing strategies (UIT rumination: M = 4.88, SD = 1.45; mood rumination: M = 5.23, SD = 1.52; distraction: M = 4.61, SD = 1.89), such as saying silently to oneself that the sentence "does not count" since one was told to write it down to fulfill the requirements of the study. Thus, the lack of group differences might be explained by a ceiling effect, attributable to the strong activation of a UIT. In sum, results do not support Hypothesis 3a and b. Future studies should address this question with more power and a more refined assessment of actual behavioral and mental neutralizing.
Several potential mechanisms involved in the associations between rumination and UITs were addressed in this study in a conjunct analysis to better understand how exactly rumination impacts the maintenance of UITs. We did not find evidence of trait rumination or TAF moderating the effect of rumination about the UIT on the urge to neutralize (Hypothesis 4a and b). Thus, it appears that rumination about a UIT affects the urge to neutralize regardless of the tendency to ruminate in daily life, or the predisposition to misinterpret the occurrence and meaning of UITs. Additionally, we did not find that rumination directly affected the appraisals of the UIT (e.g., rumination did not increase the perceived likelihood of the thought coming true, relative to distraction), which is consistent with the finding on TAF. Future studies might focus on the assessment of key misinterpretations of UITs and their potential changes as a result of rumination.
Finally, we examined whether rumination about the UIT influenced the vividness of the UIT. Findings suggest that vividness did not differ between the groups. Future studies might want to include more aspects of imagery related to UITs to draw conclusions about their possible involvement in the effects of rumination on UITs.
There are several limitations to this study that should be addressed. First, the use of one-item measures as the main dependent variables poses questions about their psychometric properties. Second, the current and the previous study (Wahl et al., 2019) did not find an effect of rumination about the UIT on the frequency of UITs. These findings are surprising, given that in the current study rumination about the UIT had a broader impact on variables (urge to neutralize and distress). This raises the question of whether the use of a smartphone to assess frequency of UITs is a valid and reliable measure. Future studies might consider other measures of intrusive thoughts, for instance, the think-aloud approach (Zanov & Davison, 2010) or thought sampling (Hirsch et al., 2015). Finally, replications of key findings should ideally be conducted by an independent research group. Although the study was double blind, it cannot be completely ruled out that the previous experiences with the experimental paradigm or the researchers' expectations might have influenced the current results.
To conclude, the study replicated the main result from a previous study (Wahl et al., 2019) that rumination about a UIT attenuates the decrease in urge to neutralize compared to rumination about negative mood and distraction. Given the additional findings that rumination about the UIT affected distress, this strengthens the confidence in the argument that rumination has an influence on the maintenance of UITs. Findings are also consistent with previous studies suggesting that rumination might play a role in OCD (Dar & Iqbal, 2015;Grisham & Williams, 2009;Raines et al., 2017;Wahl et al., 2019). Findings warrant replication in individuals diagnosed with OCD to investigate whether rumination also affects obsessive thoughts in the same way as it affects UITs.
Intraspecific variation in male mating strategies in an African ground squirrel (Xerus inauris)
Abstract

Male mating strategies respond to female availability such that variation in resources that affect spatial distribution can also alter cost–benefit tradeoffs within a population. In arid-adapted species, rainfall alters reproduction, behavior, morphology, and population density such that populations differing in resource availability may also differ in successful reproductive strategies. Here, we compare two populations of Cape ground squirrels (Xerus inauris), a sub-Saharan species with year-round breeding and intense mating competition. Unlike most mammals where males resort to aggressive interactions over females, male X. inauris are tolerant of one another, relying instead on other nonaggressive pre- and postcopulatory strategies to determine reproductive success. Our findings suggest that differences in resource availability affect female distribution, which ultimately leads to intraspecific variation in male reproductive tactics and sexual morphology. Sperm competition, assessed by reproductive morphometrics, was more pronounced in our high resource site where females were distributed evenly across the landscape, whereas dominance seemed to be an important determinant of success in our low resource site where females were more aggregated. Both sites had similar mating intensities, and most males did not sire any offspring. However, our low resource site had a higher variance in fertilization success with fewer males siring multiple offspring compared with our high resource site where more individuals were successful. Our results lend support to resource models where variations in female spatial distribution attributed to environmental resources ultimately impact male reproductive behaviors and morphology.
1 | INTRODUCTION
Many aspects of species' ecology and behavior, including social and reproductive strategies, are strongly influenced by environmental conditions (Clutton-Brock & Harvey, 1978;Millán et al., 2021;Schradin et al., 2017). Resource-based models suggest ecological parameters impact the social organization as females respond to the distribution and quality of resources and environmental risks (Emlen & Oring, 1977;Maher & Burger, 2011;Rémy et al., 2013). The spatial and temporal distribution and density of receptive females impact competition dynamics by altering the costs and benefits of acquiring partners and ultimately influence male reproductive decisions and physiology (Brashares & Arcese, 2002;Emlen & Oring, 1977;He et al., 2019;Schradin et al., 2010;Shuster & Wade, 2003). In species lacking male parental care, male reproductive success is generally limited by the number of acquired mates, such that males balance current and future reproductive opportunities depending on whether females are economically defendable in space and time (Clutton-Brock, 1989;Emlen & Oring, 1977;Schwanz et al., 2016).
When females are dispersed or when reproduction is asynchronous, traits that enhance male mate searching abilities and dominance hierarchies will be favored (Emlen & Oring, 1977;Schwanz et al., 2016). Alternatively, females clumped together with discrete breeding seasons favor traits that aid males in direct competition or monopolization (Lacey & Wieczorek, 2001;Waterman, 2007).
In promiscuous species, postcopulatory competition also explains many aspects of variation in male and female reproductive traits (Córdoba-Aguilara, 2005;Minder et al., 2005) and mating behaviors (DelBarco-Trillo & Ferkin, 2006;Dixson, 2021). Competition accounts for variation in sperm physiology and morphology (Dixson & Anderson, 2004;Gomendio et al., 2006), and male reproductive anatomy (i.e., testes and accessory glands) often is used as a proxy for the intensity of sperm competition (Dixson, 2021;Ramm et al., 2005). However, these reproductive traits are physiologically expensive and thus can be directly impacted by resources. In the Arabian spiny mouse (Acomys dimidiatus), rainfall increased follicle counts and gonad (i.e., testes and ovaries) mass and volume, and altered hormone profiles for both sexes (Sarli et al., 2016). As is generally the case in desert environments where there are unpredictable changes in water and/or food availability, this species maintains reproductive function throughout the year to ensure rapid responses to reproductive opportunities as they arise (Bronson, 2009;Sarli et al., 2016).
Ground-dwelling sciurids represent a continuum of social organization and mating systems, making them an ideal clade to explore variation in male reproductive strategies (Schwanz et al., 2016). The Cape ground squirrel (Xerus inauris) lies at one end of that spectrum with extreme promiscuity and sociality (Figure 1). They differ from other social species by forming year-round, unrelated, all-male groups independent of females (Waterman, 1995, 1997). Aggressive interactions among these males are rare, despite the paradox of males competing for reproductive opportunities (Clutton-Brock, 2009;Trivers, 1972). In most polygynous species, intense sexual selection leads to male-male conflict ultimately driving male-biased mortality.
However, little is known about life-history tradeoffs and mating in species where males lack overt conflict (Bonduriansky et al., 2008).
Rather than aggressively competing to defend females or maintain territories, male X. inauris move throughout the landscape searching for estrous females in the presence of other unrelated competitors (Waterman, 1995, 1997, 1998). Mate order is determined by a linear dominance hierarchy maintained by nonaggressive displacements (Waterman, 1995, 1998). Males compete with other males via precopulatory competitive searching (Waterman, 1995, 1998) and sperm competition (Manjerovic et al., 2008). Litter sizes of one to two suggest the likelihood of multiple paternity is low, although females average 4 mates (Waterman, 1996, 1998), and multiple paternity is possible (Manjerovic & Waterman, 2015).
Here, we present data from two populations of X. inauris to examine how competition can vary within a species that lacks overt conflict. The sites are known to belong to the same phylogenetic clade (Herron et al., 2004) and do not vary in female reproductive output (Pettitt et al., 2008). However, the sites differ in rainfall and resource availability (LaFlèche & Waterman, 2020;Pettitt et al., 2008;Waterman, 1995), setting up a natural comparison to address how resources may influence reproductive competition among males. Male and female social groups live apart in complex, underground burrow systems or clusters, that consist of multiple burrow openings separated from adjacent clusters by areas without burrows (Herzig-Straschil, 1978;Waterman, 1995). We predict that as female family groups become more aggregated, male home ranges should shrink and males should be more likely to monopolize mates based on a dominance hierarchy, increasing precopulatory reproductive competition. However, as females become more evenly distributed across the landscape, monopolizing females becomes less advantageous, resulting in males competitively searching for females and increasing selection for postcopulatory strategies including sperm competition. Ultimately, as competition for mating opportunities intensifies, we predict greater variance in overall reproductive success among males.

Figure 1. Male Cape ground squirrels (Xerus inauris) of different ages often associate with each other in non-aggressive interactions.

2 | METHODS
| Site variation
We collected field data between 2002 and 2006 at two sites with known differences in both quantity and variability of rainfall (O'Brien et al., 2021;Pettitt et al., 2008) and presumably, resource availability as there tends to be a strong correlation between rainfall and primary productivity in arid and semi-arid environments (Happold & Happold, 1992). Our "high resource site," located in central South Africa (27°35′S, 25°35′E), receives an average of 502 mm annual rainfall and has a contiguous distribution of Eragrostis spp. of grasses (LaFlèche & Waterman, 2020;Pettitt et al., 2008;van Zyl, 1965). Our "low resource site," located in east-central Namibia (23°25′S, 18°00′E), averages 220 mm annual rainfall (LaFlèche & Waterman, 2020;Waterman, 1995) and is predominantly Acacia bush with patchy distributions of grasses dominated by Schmidtia kalahariensis (Waterman, 1995). Ground squirrels have been previously studied at this low resource site from 1989 to 1991 (Waterman, 1995, 1996, 1998) and during the current study (2002-2006). While we know average total rainfall differs between sites (LaFlèche & Waterman, 2020;Pettitt et al., 2008), we also looked at the variance in rainfall between sites using a Levene's test with data collected at each site from 1980 to 2006.
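For readers who want to reproduce the variance comparison, a minimal SciPy sketch is shown below; the rainfall figures are placeholders, not the actual 1980-2006 station records.

```python
from scipy import stats

# Hypothetical annual rainfall totals (mm) for illustration only
high_resource = [470, 515, 620, 455, 540, 600, 498, 530, 575, 489]
low_resource = [150, 310, 220, 90, 260, 180, 400, 120, 205, 300]

# Levene's test for equality of variances (median-centred by default,
# which is robust to skewed rainfall distributions)
stat, p = stats.levene(high_resource, low_resource, center="median")
print(f"W = {stat:.2f}, p = {p:.3f}")

# The difference in mean rainfall would be tested separately,
# e.g. with an independent-samples t-test
t, p_t = stats.ttest_ind(high_resource, low_resource, equal_var=False)
```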
To determine whether differences in rainfall reflect resource availability, we quantified percent cover at a subset of burrow clusters (n = 6) at each site in 2005 and 2006. We estimated percent cover at the same locations each year, collecting data at each site within 1 month of each other, prior to the rainy season, using a 1 × 1 m quadrat spaced north and south at 10, 50, 100, and 150 m from the center of the cluster. We tested for differences between sites and years using a two-way ANOVA. To address the distribution of suitable habitats between sites, in 2006, we plotted all known burrow clusters in ArcMap v.9.3.1 (ESRI). Female social groups live in the same burrow cluster for multiple years, actively maintaining them such that burrow clusters rarely change between years (Ewacha et al., 2016;Herzig-Straschil, 1978). We used locations of individual burrows in a given burrow cluster to generate 95% minimum convex polygons (MCP) and calculated the total area of known burrow clusters and the average distance between burrow clusters.
We used the multi-distance spatial cluster analysis to calculate the dispersion of burrow clusters based on Ripley's K (Ripley, 1976). This analysis generates an expected pattern of complete spatial randomness compared with the observed burrow distances as an indication of clustering or dispersion (Wilschut et al., 2015).
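A bare-bones version of the Ripley's K logic is sketched below; unlike the ArcMap implementation it applies no edge correction, and the burrow-cluster coordinates are simulated purely for illustration.

```python
import numpy as np

def ripleys_k(points, radii, area):
    """Naive Ripley's K estimator (no edge correction):
    K(r) = A / (n * (n - 1)) * sum over ordered pairs of 1[d_ij <= r]."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)                 # exclude self-pairs
    return np.array([area * (d <= r).sum() / (n * (n - 1)) for r in radii])

# Under complete spatial randomness the expectation is K(r) = pi * r^2;
# observed K above that curve indicates clustering, below it dispersion.
rng = np.random.default_rng(1)
burrows = rng.uniform(0, 1000, size=(40, 2))    # hypothetical cluster centres (m)
radii = np.array([100, 250, 500, 1000])
print(ripleys_k(burrows, radii, area=1000 * 1000))
print(np.pi * radii ** 2)                       # CSR reference values
```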
Upon capture, we transferred animals into a handling bag to reduce stress (Koprowski, 2002). We recorded standard measurements including body mass (measured with a spring scale to ±5.0 g), sex, reproductive condition, and age; individuals were uniquely marked for short-and long-term identification using hair dye and pit tags (Hillegass et al., 2008;Waterman, 1995). We collected a 1-to 3-mm sample of tail tissue for genetic analysis and stored samples in 95% EtOH at room temperature (Manjerovic & Waterman, 2015). After handling, we released all animals at the site of capture. Handling was in accordance with the American Mammal Association guidelines (Gannon & Sikes, 2007) and was approved by the University of Central Florida's IACUC (#07-43W).
| Demographics
We used trapping and observation data collected between 2002 and 2006 to determine yearly social group composition; social groups are determined by individuals sleeping in the same burrow. We used t-tests to compare the number of breeding females per group and the number of females and males per hectare, calculated as the total number of squirrels in the study site divided by the size of the study area in hectares. Adult males either disperse at reproductive maturity or delay dispersal and remain with their natal group (O'Brien et al., 2021;Waterman, 1995, 1997). Both dispersal tactics result in reproductive success (Manjerovic & Waterman, 2015); thus all adult males present were included, regardless of dispersal tactic, in our calculations for the number of males per hectare. Site comparisons were analyzed using a t-test.
| Mating behaviors
Between 2002 and 2006, we collected detailed behavioral data on days of estrus following methods established for this species at the low resource site during 1989-1991 (Waterman, 1995(Waterman, , 1997(Waterman, , 1998. We could tell estrus was imminent by increased activity by males towards the female coming into estrus (Waterman, 1997). We recorded all interactions and copulations; because copulations occur both above and below ground, we assumed copulation occurred below ground if a female was followed into a burrow by a male and they remained underground for at least 1 min (Waterman, 1998).
Estruses generally lasted approximately 3 h and were considered over when females left the area, rejected males, or if males stopped searching and started feeding (Waterman, 1998). After every estrous event, we immediately put out traps to capture the estrous female to look for evidence of copulatory plugs. We were unable to record data blind because we used only focal field animals. Methods used to gather mating behavior data did not differ between sites or study years.
Based on estrous events, we determined the mean duration of estrus and number of mates per female, and the mean copulatory success for males. Estrous events included individual females that were in the data set over multiple years. We accounted for multiple occurrences of the same female by using a generalized linear mixed model (GLMM) with a Poisson-distributed error, including female ID as a random variable and site as a predictor factor. This model also is commonly used if dependent variables violate normality assumptions (Agresti, 2002). Given that generally a single female was in estrus in an area at any one time, we calculated the operational sex ratio as the number of sexually active males present during each estrus. We used this metric to compare sexual selection intensity (Emlen & Oring, 1977;Waterman, 1998) but also calculated the opportunity for sexual selection at each site across the study period as the variance in copulatory success per individual divided by the squared mean of success (I_s = SD²/mean²; Shuster & Wade, 2003;Wade & Arnold, 1980). Although research has suggested that the "Jones index" (s'_max; Jones, 2009) outperforms the former by measuring how mating success varies with reproductive success (Henshaw et al., 2016), we were unable to calculate a direct correlation between mating and reproductive success, given limited mating observations and high estrus failures (Waterman, 1996).
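A small sketch of the opportunity-for-selection calculation is given below; the copulation counts are hypothetical, and the use of the sample variance (rather than the population variance) is an assumption.

```python
import numpy as np

def opportunity_for_selection(success_counts):
    """I = variance in success / squared mean success (cf. Wade & Arnold, 1980)."""
    x = np.asarray(success_counts, dtype=float)
    return x.var(ddof=1) / x.mean() ** 2

# Hypothetical copulation counts per male attending estruses at one site
copulations = [0, 0, 0, 1, 0, 2, 0, 1, 3, 0, 0, 1]
print(round(opportunity_for_selection(copulations), 2))
```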
| Male investment
Because dominance has been demonstrated to influence the male reproductive success (Waterman, 1998), we recorded all male-male approach-displacement interactions in both populations from 2002 to 2006 to calculate dominance relationships using Landau's index of linearity (Lehner, 1998;Waterman, 1998). We also recorded any evidence of copulatory competition including copulatory calls or mate guarding (Sherman, 1989), and compared sites using a chi-square test.
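The linearity calculation can be illustrated with a short function; the dominance matrix below is hypothetical, and the formula follows the standard form of Landau's index (h = 1 for a perfectly linear hierarchy).

```python
import numpy as np

def landau_h(dominance_matrix):
    """Landau's index of linearity for a matrix D where D[i, j] = 1 if
    individual i displaced (dominated) j, else 0.
    h = 12 / (n**3 - n) * sum_i (V_i - (n - 1) / 2)**2, where V_i is the
    number of individuals that i dominates; h ranges from 0 to 1."""
    D = np.asarray(dominance_matrix, dtype=float)
    n = D.shape[0]
    v = D.sum(axis=1)
    return 12.0 / (n ** 3 - n) * np.sum((v - (n - 1) / 2.0) ** 2)

# Hypothetical 4-male approach-displacement matrix (perfectly linear hierarchy)
D = np.array([[0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(landau_h(D))   # 1.0 for a perfectly linear hierarchy
```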
To assess male movement patterns and thus access to females, we radio-collared 16 males at our high resource site in 2006 (Model SOM-2380; Wildlife Materials, Inc.) but removed two males from analysis due to fewer than 50 locations. After collaring, we released all animals at the site of capture and waited at least 24 h before locating animals to allow time for acclimation. Between May and July 2006, we located animals a minimum of 50 times, split between day and night, and also included locations based on observations and trapping. Although other methods are widely used (Horne & Garton, 2006), we generated 95% MCP for comparison to the low resource site, which was calculated previously using the same methods from 1989 to 1991 (Waterman, 1995). We used ArcMap v.9.3.1 (ESRI) and the animal-movement extension (Beyer, 2004) to estimate male home ranges.
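As a sketch of the home-range estimation, the fragment below computes a 95% MCP by peeling away the 5% of fixes farthest from the mean centre before taking the convex hull; the peeling rule and the simulated fixes are assumptions, not the exact behaviour of the animal-movement extension.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area(locations, percent=95):
    """Minimum convex polygon home-range area. The 95% MCP here drops the
    5% of fixes farthest from the arithmetic mean centre (one of several
    possible peeling conventions) before taking the convex hull."""
    pts = np.asarray(locations, dtype=float)
    keep = int(np.ceil(len(pts) * percent / 100))
    dist = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    core = pts[np.argsort(dist)[:keep]]
    return ConvexHull(core).volume       # for 2-D input, .volume is the polygon area

# Hypothetical radio-telemetry fixes (UTM metres) for one male
rng = np.random.default_rng(2)
fixes = rng.normal([650000, 7400000], [250, 250], size=(60, 2))
print(f"95% MCP = {mcp_area(fixes) / 10_000:.1f} ha")   # m^2 to hectares
```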
To assess reproductive investment, we measured internal male reproductive morphology in a subset of adult males in 2006. We euthanized animals on site with either halothane or chloroform overdose based on availability and country permit requirements (see Manjerovic et al., 2008). We used electronic calipers (Mitutoyo Inc.) to measure scrotal width and length to the nearest ±1.0 mm, which included both the epididymis and testis. We measured the mass (±0.1 g) of the testes and bulbourethral gland and calculated relative testes size following Kenagy and Trombulak (1986). We corrected measurements for body size before comparing between sites.
| Reproductive success
We extracted total genomic DNA from all captures between 2002 and 2006 using a DNeasy Kit (Qiagen Inc.) and genotyped all individuals using eight species-specific microsatellite loci (Manjerovic et al., 2009). Primer sequences are available on GenBank (accession nos. FJ823123-FJ823131), and polymerase chain reaction conditions and cycling parameters are described in Manjerovic et al. (2009). We amplified PCR products on a Beckman 8000 CEQ and used corresponding software to size alleles compared with internal standards. We tested for Hardy-Weinberg deviations and linkage disequilibrium using Genepop with α = .05 (Raymond & Rousset, 1995). To determine limits for individual identification, we used the program GIMLET v 1.3.3 (Valière, 2002) to calculate the probability that two individuals randomly show identical genotypes (P(ID); Paetkau & Strobeck, 1994) and the P(ID) among siblings (P(ID)sib; Evett & Weir, 1998). To quantify male reproductive success, we assigned the parentage of all juveniles and subadults using a likelihood-based approach in CERVUS v.3.0 (Kalinowski et al., 2007;Marshall et al., 1998). This program assigns paternity based on confidence levels calculated using simulated data that includes population allele frequencies, proportion of population sampled and genotyped, and error rates. We ran a total of 100,000 iterations using a 1% genotyping error rate (Manjerovic & Waterman, 2015). We calculated a mean of four adult female candidates per social group and 11 candidate males based on the operational sex ratio at the time of breeding (Waterman, 1998). We calculated the proportion of sampled candidates as 10% for females and 30% for males by subtracting the proportion of unknown adults captured each subsequent year.
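For context, the per-locus identity probabilities that GIMLET reports are conventionally computed from allele frequencies p_i as shown below; this is a sketch of the standard formulas from the cited sources rather than equations given in this paper, with multilocus values obtained by multiplying across loci under an assumption of independence.

```latex
P_{ID} = 2\Big(\sum_{i} p_i^{2}\Big)^{2} - \sum_{i} p_i^{4}, \qquad
P_{ID(\mathrm{sib})} = 0.25 + 0.5\sum_{i} p_i^{2}
  + 0.5\Big(\sum_{i} p_i^{2}\Big)^{2} - 0.25\sum_{i} p_i^{4}
```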
Regardless of whether or not males were recaptured, we included all adult males as candidates in each subsequent year of their initial capture. Accounting for this variation in sampling effort along with changes in reproductive ages of males each year resulted in different candidate male groups each year.
We calculated maternity and paternity using individuals typed at a minimum of 6 loci with at least an 80% confidence level (Wells et al., 2017). We did not exclude parents based on 1 mismatch with offspring, allowing identification of the most likely parent from among multiple nonexcluded parents (Kalinowski et al., 2007). For maximum reliability of paternity assignments, we only included juveniles with a 95% confidence in the assigned mother, and we only accepted parentage assignments when there was no more than 1 mismatch for assumed mother-father-offspring relationships. The total exclusionary power for the data set was over 98% (Jamieson & Taylor, 1997). We used genetic paternity assignments (i.e., numbers of sired offspring) to calculate variance in male reproductive success (V_males) between sites and included all adult males present regardless of whether they sired any offspring. We used variance in male reproductive success divided by the squared mean of male reproductive success to calculate the opportunity for sexual selection (I_males) (DuVal & Kempenaers, 2008;Shuster & Wade, 2003).
We tested that all data were normal and homoscedastic; data that did not meet those assumptions were either log-transformed, or tested using nonparametric statistics or models suitable for nonnormal response variables. All data were tested for significance either in JMP v.8 (SAS Institute Inc.) or RStudio (v1.1.463) and considered statistically significant at α = .05. Results are expressed as mean ± SE unless otherwise stated.
3 | RESULTS

| Site variation
We found statistically significant differences in vegetation between burrow clusters in our high and low resource sites (two-way ANOVA: F(1, 12) = 33.3, n = 6, p < .01). As expected, our low resource site had less vegetation and greater variability among years differing by an order of magnitude from our high resource site and between years (Figure 2). We found unequal variance of annual rainfall between sites (Levene test: F(1, 52) = 6.45, n = 27 years, p = .01) with over twice as much yearly rainfall during the current study in the high resource site (mean: 546.4 mm ± 57.9) compared with the low resource site (mean: 238.9 mm ± 64.4; t-test: t(4) = 7.9, p < .01). Using spatial analysis, we found the low resource site had a lower density of burrow clusters (0.26/ha) and statistically significant clustering across greater distances (0-1500 m) compared with the high resource site where there was a higher density of burrows (8.41/ha) that were randomly distributed at distances <500 m (Figure 3).
| Mating behaviors
We recorded a total of 38 estrous events at the high resource site from 2002 to 2006. However, so few estrous events were observed in our low resource site during the current study (n = 4) that we included estrus data collected from this same site prior to this study (n = 31, 1989-1990) (Waterman, 1998). These data were collected using the same methods and we found rainfall and density of adult males and adult females per hectare to be similar between time periods (Table 1).
In addition, female groups lived in the same burrow clusters during both time periods; thus, we felt confident using these data in our final comparison. We found estruses were longer in the low resource site; however, this pattern was driven by a single outlier (600 min; mean: 197.77 ± 6.5 min) that, when removed, resulted in no differences in estrus duration (GLMM: t = −1.93, p = .06; Table 2). The intensity of male-male competition, as measured by the operational sex ratio, did not differ between sites (GLMM: t = −0.84, p = .41; Table 2). We also found similar opportunities for sexual selection at both sites as measured by variance in copulatory success divided by the squared mean of success (high resource I s = 2.9; low resource I s = 2.7). Despite these similarities, we found female X. inauris from the low resource site averaged three more copulations per estrus compared with females in the high resource site (GLMM: t = −1.99, p = .05; Table 2). Consequently, male copulatory success was greater in the low resource site where males averaged 0.75 copulations per individual per estrus compared with 0.47 copulations per individual per estrus in the high resource site.
We also found a higher number of unknown males in our high resource site with an average of at least 1 unmarked male attending each estrus at this site (GLM: t = 2.19, p = .03; Table 2).
| Male investment
We found no evidence of a dominance hierarchy among males in the high resource site between 2002 and 2006 with a mean linearity index of 0.08 ± 0.05 (range: 0.04-0.18). However, mate guarding was more likely to occur in this site compared with the low resource site (chi-square test: χ² = 5.87, p = .02) with males guarding females in 26% (10/38) of estruses compared with 6% (2/34) in the low resource site. In 2006, we recorded a mean of 104 locations per male (range: 59-166) resulting in a mean home range of 21.6 ± 4.2 ha (range: 6.7-61.0), which was almost twice that of males from the low resource site (12.5 ± 2.5 ha; Waterman, 1995). We found no differences in male body mass between the two sites in 2005 and 2006, but males in our high resource site had testes, epididymides, and bulbourethral glands that were approximately 30% larger than males in the low resource site (Table 3).

Figure 2. Differences in vegetation percent cover (mean ± SE) support South Africa as our high resource site and Namibia as our low resource site.
| Reproductive success
Between 2002 and 2006, we genotyped 387 individuals from the high resource site and 322 individuals from the low resource site. We found no deviations from Hardy-Weinberg equilibrium or evidence of linkage disequilibrium after Bonferroni correction (Table S1). We assigned paternity to 76 of 155 juveniles from our high resource site (49%) and 66 of 102 juveniles from our low resource site (65%) (Table 4).
4 | DISCUSSION
We found male X. inauris vary both in reproductive behavior and morphology between populations that differ in resource availability. Our site with higher rainfall had significantly higher percent cover and less variability in percent cover between years compared with our low resource site, supporting a link between rainfall and primary productivity (Happold & Happold, 1992;LaFlèche & Waterman, 2020). In arid environments, population densities can fluctuate widely with changes in rainfall altering social group dynamics (Waterman, 2002). For example, in striped mice (Rhabdomys pumilio), another rodent species from southern Africa, intraspecific variation in their social system is impacted by both resources and population density. Mice tend to remain solitary in populations with greater rainfall and in years when population density is low but form groups in areas of lower rainfall or when population density is high (Schradin et al., 2010;Schradin et al., 2020;Schradin & Pillay, 2005).
We found differences in density between sites with more adults per hectare in our high resource site but little variability from year to year within sites. We also found differences in suitable habitats with a greater density of burrow clusters spread out more evenly across the landscape in our high resource site. Burrows in our low resource site clustered together, resulting in a patchy distribution across the landscape. In great gerbils (Rhombomys opimus), another arid-adapted ground-dwelling species, significant clustering of occupied burrows was attributed to a combination of landscape and ecological factors influencing habitat suitability and dispersal behaviors (Wilschut et al., 2015). Female X. inauris tend to demonstrate strong site fidelity while males are more nomadic, moving around in search of females. Despite these differences, both males and females depend on these burrow systems daily and such differences in the distribution of burrow clusters between sites may affect the frequency that males and females associate with one another.
Figure 3. Distance from the center of each ...
Males in our high resource site had an average home range that was almost twice that of males in our low resource site (Waterman, 1995) and therefore encountered more adult females. When resources influence mate availability in both time and space, different reproductive strategies often emerge (Shuster & Wade, 2003). In oribi (Ourebia ourebi), females form larger groups and smaller home ranges during periods of abundant grasses. Male oribi respond to these differences in female distribution and behavior by altering precopulatory mating behaviors, maintaining active territories when females were clustered and defending females when they ranged widely (Brashares & Arcese, 2002). In tropical ground squirrels, reproduction often is restricted by the seasonal rainy season or unpredictable periods of resource scarcity, while North American ground squirrels are constrained by short, discrete breeding seasons that last only a few weeks after females emerge from hibernation (Waterman, 1996). These highly synchronous breeding seasons result in multiple receptive females in a single day, such that males often leave to search and compete for additional females after the initial mating bout to gain greater reproductive advantages (Lacey & Wieczorek, 2001;Raveh et al., 2010;Sherman, 1989). Elongating the breeding season or distance between receptive females affects the costs and benefits of acquiring partners and alters which mating strategies are most successful (Brashares & Arcese, 2002;Schwanz et al., 2016;Shuster & Wade, 2003). X. inauris differ from other sciurids as they are not constrained by winters and can breed year-round (Waterman, 1996). There appears to be no predictability to female receptivity as multiple females rarely come into estrus on the same day and are asynchronous breeders with spontaneous ovulation (Bouchie et al., 2006;Waterman, 1996). Because receptive females are scattered more evenly across a greater distance in areas of higher resources, waiting for receptive females may be less advantageous if it reduces the likelihood of gaining additional breeding opportunities.
Males detect the onset of estrus and aggregate around the female during the days leading up to her estrus, associating regularly with one another (Waterman, 1997;Waterman, 1998).
Table note: Low resource site data also include data collected during a previous study from 1989 to 1991 (Waterman, 1995, 1998). Operational sex ratio is measured as the number of males present and attentive to an estrous female. Log-transformed for analysis; raw data presented.
Copulatory plugs can act as a postcopulatory barrier to rival males (Ramm et al., 2005). We frequently recovered copulatory plugs from high resource females but never from low resource females, although we cannot rule out the possibility of female removal of copulatory plugs as documented in tree squirrels (Koprowski, 1992).
TABLE 3 Comparison of Xerus inauris male morphology between high resource (South Africa) and low resource (Namibia) populations
Postcopulatory mechanisms are often an attempt to deter other males from mating to minimize sperm competition. Guarding was significantly more likely to occur in our high resource site after mating and is often seen when receptive females are further apart such that locating additional mating opportunities is time-consuming (Sherman, 1989). These conditions may lead to a last male advantage, as seen in Idaho ground squirrels (Spermophilus brunneus), where unguarded females mate with additional males and the last guarding male sires most of the offspring (Sherman, 1989). By contrast, male Belding's ground squirrels (S. beldingi) do not range as widely in search of females, and males are more likely to resume mate searching after copulation (Sherman, 1989). We observed several occurrences of postcopulatory calls in our high resource site but never in the low resource site. In black-tailed prairie dogs (Cynomys ludovicianus), mating calls are given both before and after copulating and are thought to be directed at both sexes. Although the first copulating male was significantly more likely to call, these calls did not deter other males (Grady & Hoogland, 1986). In other species, like the fallow deer (Dama dama), calls are an intrasexual threat directed at competitors (McElligott & Hayden, 2001). Given the low probability for multiple X. inauris females to come into estrus on the same day (Waterman, 1996) and the differences in density and distance between sites, these postcopulatory mechanisms may be a response to increased sperm competition while not imposing a cost to males in terms of a fitness tradeoff.
Differences in competitive strategies, such as territoriality or dominance, often result in extreme variance in male fertilization success, where a small number of males are responsible for the majority of matings (DuVal & Kempenaers, 2008). Approximately two-thirds of males at both sites never sired an offspring among sampled juveniles. We did find a higher variance in fertilization success with fewer individuals siring the majority of the offspring in our low resource site, where dominant males obtain a greater proportion of copulations (Waterman, 1998). Of the successful males, 60% sired more than one offspring in our low resource site compared with 30% in our high resource site. Such an uneven distribution of reproductive success among males is not surprising given that X. inauris have asynchronous, short periods of female receptivity (Shuster & Wade, 2003;Waterman, 1998) and consequently one of the highest operational sex ratios among sciurids (11 M:1F; range 3-18; Waterman, 1997;Waterman, 1998).
We found less variance in copulatory success among individuals compared with fertilization success at both sites. In our species, over 70% of all breeding events fail to wean offspring and therefore there is a low likelihood of paternity for each male attending an estrus (Pettitt et al., 2008;Waterman, 1996). Low resources can alter conditions for mating and the opportunity for sexual selection, especially in arid-adapted species. In Iberian red deer (Cervus elaphus hispanicus), reproductive timing and behavior are closely tied to rainfall patterns (Millán et al., 2021). When environmental conditions are poor (i.e., low rainfall), males delayed rutting and decreased rutting intensity in response to females. This change in turn favored a higher degree of polygyny and increased opportunity for sexual selection (Millán et al., 2021). While resource availability does not appear to influence female reproductive output in X. inauris (Pettitt et al., 2008) or the number of breeding females per social group, male reproductive output did differ between sites. Males at both sites had estimates of sexual selection intensity similar to lekking species, with mating extremely skewed towards specific individuals within the populations.
Resources play a pivotal role in physiological tradeoffs between reproductive and behavioral strategies especially when body condition and/or the maintenance of secondary sexual characteristics affect male reproduction. When female oribi are dispersed over a larger area, males spend more time and energy traveling greater distances and consequently spend less time resting compared with males that maintained territories (Brashares & Arcese, 2002). In striped mice, females who maintain smaller home ranges also experience delayed reproduction and dispersal (Schradin & Pillay, 2005). While increased investment in reproductive anatomy may be attributed to higher resource availability, male X. inauris also have different dispersal tactics that are impacted by rainfall (O'Brien et al., 2021).
Sexually mature males either disperse at reproductive maturity and join a male band (Waterman, 1995), or delay dispersal and remain with their natal group (O'Brien et al., 2021;Waterman, 1995;Waterman, 1997). Dispersal tactics result in similar reproductive success (Manjerovic & Waterman, 2015), but physiological and behavioral differences are affected by resources (O'Brien et al., 2021;Scantlebury et al., 2008). Band males are more mobile, with higher resting metabolic rates and larger home ranges and consequently spend less time feeding (Manjerovic & Waterman, 2015;Scantlebury et al., 2008). Despite these higher costs, band males are only in bet-
CONFLICT OF INTEREST
The authors state that there is no conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. All data are associated with tables and figures: Dryad https://doi.org/10.5061/dryad.jsxksn0cv. | 2022-08-18T15:11:27.085Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "806f4bf6373a535ecefcdd9dd5ffbfcad70e7ea8",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.9208",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37bcc6cbbf6eb4d6d365e9893d29cd897235e271",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249314637 | pes2o/s2orc | v3-fos-license | Approach toward In Vitro-Based Human Toxicity Effect Factors for the Life Cycle Impact Assessment of Inhaled Low-Solubility Particles
Today’s scarcity of animal toxicological data for nanomaterials could be lifted by substituting in vivo data with in vitro data to calculate nanomaterials’ effect factors (EF) for Life Cycle Assessment (LCA). Here, we present a step-by-step procedure to calculate in vitro-to-in vivo extrapolation factors to estimate human Benchmark Doses and subsequently in vitro-based EFs for several inhaled nonsoluble nanomaterials. Based on mouse data, the in vitro-based EF of TiO2 is between 2.76 · 10^-4 and 1.10 · 10^-3 cases/(m2/g·kg intake), depending on the aerodynamic size of the particle, which is in good agreement with in vivo-based EFs (1.51 · 10^-4 to 5.6 · 10^-2 cases/(m2/g·kg intake)). The EF for amorphous silica is in a similar range as for TiO2, but the result is less robust because only few in vivo data are available. The results based on rat data are very different, confirming the importance of selecting animal species representative of human responses. The discrepancy between in vivo and in vitro animal data in terms of availability and quality limits the coverage of further nanomaterials. Systematic testing on human and animal cells is needed to reduce the variability in toxicological response determined by the differences in experimental conditions, thus helping improve the predictivity of in vitro-to-in vivo extrapolation factors.
INTRODUCTION
Nanotechnology has been recognized as one of the Key Enabling Technologies of the 21st century, thanks to its revolutionary applications in multiple sectors, ranging from energy to healthcare. 1 According to the International Organization for Standardization (ISO), nanomaterials are defined as materials "with any external dimension in the nanoscale or having internal structure or surface structure in the nanoscale". 2 In parallel to the enthusiasm for their novel functions, the inclusion of nanomaterials in products has also raised concerns about their potential impacts on the health of workers, consumers, and in general humans exposed to them along the product life cycle. 3 Life Cycle Assessment (LCA) is the preferred methodology to assess the environmental impacts of nanoenabled products and compare them with existing alternatives, accounting for the negative but also positive impacts that a new technology may have on the overall environmental profile of the product (e.g., increased toxicity for humans but reduced greenhouse gas emissions). 4,5 In LCA, impacts are calculated by linking all emissions occurring during a product life cycle to their corresponding characterization factors, which define the incidence of negative health/ecological effects caused by the emission of a substance. For toxicological impacts (on humans as well as on the ecosystem), the LCA community agreed on the use of USEtox as a common consensus model. 6 Within USEtox, a characterization factor is calculated as a combination of 1) a fate factor, which indicates how a substance is distributed in the environmental compartments following its emission; 2) an exposure factor, which describes the human uptake of the substance from the environmental compartments via multiple exposure pathways; 3) and an effect factor (EF), which relates the uptake of the substance to potential negative health effects. 7 USEtox and its calculation principles have been developed for organic chemicals and metal ions 6,8 and is thus not adequate for nanomaterials in its original setting. 9 A nanospecific fate model has been developed to calculate the fate factor for nanomaterials, 10 while the exposure factor is either calculated according to existing methodologies or disregarded. 11 The EF is calculated from animal toxicological studies using those extrapolation factors (e.g., the interspecies extrapolation factor) needed to convert the animal results to a human chronic ED 50 , i.e., the lifetime dose generating a 50% increase in disease probability for humans. 6 Since these extrapolation factors have been obtained based on data for organic chemicals, their validity for nanomaterials is yet to be proven. 12,13 However, a bigger challenge lies upstream: animal testing is being reduced in favor of alternative methods, resulting in a scarcity of toxicological data compared to the number of newly developed nanomaterials. 14 A potential solution to this could be to use in vitro data, i.e., the results of toxicological studies conducted on human cells, as the data pool for the calculation of human toxicity EFs, as suggested by several authors. 15,16 Salieri et al. 17 proposed an approach to calculate EFs for soluble nanoparticles from in vitro data, based on the fact that the toxic effects are mainly caused by the dissolved ions rather than the particle itself. This approach is though not fit for nonsoluble particles.
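In compact form, the characterization factor described above combines the three factors multiplicatively; this relation is an interpretation of the verbal description (the multiplicative structure follows the general USEtox convention and is assumed here rather than quoted from this paper):

CF = FF × XF × EF

where FF is the fate factor, XF the exposure factor, and EF the effect factor.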
Recently, we proposed that a combination of models could be used for the calculation of EFs from in vitro data, 18 and we developed a model to ease the application of this strategy for the specific case of inhaled spherical nanomaterials and their effects on the lung. 19 In this paper, we provide a proof of concept of the estimation of in vitro-to-in vivo extrapolation factors, and we use these preliminary factors to calculate in vitro-based EFs for titanium dioxide, amorphous silica, crystalline silica, and cerium oxide.
MATERIALS AND METHODS
Overview of Methodology. The calculation of in vitro-to-in vivo extrapolation factors and in vitro-based EFs follows multiple steps, depicted in Figure 1. The first step is the collection of toxicity data from animal studies and from in vitro studies using animal and human cells (1). Then dosimetry models are applied to find the deposited doses per well area and the retained doses per alveoli area corresponding to the doses used in vitro and in vivo (2). The obtained doses are transformed into surface area doses, and a Benchmark Dose (BMD) is then calculated for each dose-response data set (3). The in vitro-to-in vivo extrapolation factors are then calculated as the ratio between the in vivo and in vitro animal data (4). A human Benchmark Dose is extrapolated from the in vitro human data using the in vitro-to-in vivo extrapolation factor, following the parallelogram approach, which states that the relationship between animal data and animal cell data is maintained also for humans and human cells (4.1). 20 Finally, the human toxicity EFs are calculated through the traditional extrapolation procedure from the USEtox methodology 6 (5).
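The five steps above can be read as a small data-processing pipeline. The outline below is purely illustrative scaffolding rather than code from the study: every function name and signature is hypothetical, and each body would wrap the corresponding tool (CoDo or MPPD for dosimetry, PROAST-style fitting for the Benchmark Dose).

```python
# Hypothetical outline of the five-step procedure; names and signatures are illustrative.

def collect_dose_response_data():
    """Step 1: gather in vivo animal data and in vitro data for animal and human cells."""
    raise NotImplementedError  # literature extraction, see Data Collection below

def apply_dosimetry(dataset):
    """Step 2: convert nominal doses to deposited (in vitro, per well area) or
    retained (in vivo, per alveolar area) doses."""
    raise NotImplementedError  # CoDo for in vitro, MPPD for inhaled in vivo doses

def fit_bmd(dose_response, bmr=0.20):
    """Step 3: fit a dose-response curve on surface-area doses and return the BMD20."""
    raise NotImplementedError  # PROAST-style benchmark dose modelling

def ivive_factor(bmd20_in_vivo_animal, bmd20_in_vitro_animal):
    """Step 4: in vitro-to-in vivo extrapolation factor from animal data."""
    return bmd20_in_vivo_animal / bmd20_in_vitro_animal

def human_bmd20(bmd20_in_vitro_human, factor):
    """Step 4.1: parallelogram extrapolation of a human benchmark dose."""
    return bmd20_in_vitro_human * factor

def effect_factor(human_bmd20_per_cm2_lung):
    """Step 5: convert the human BMD20 into an effect factor (see the EF section below)."""
    raise NotImplementedError
```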
The following low-solubility nanomaterials were included in this work: titanium dioxide, in the anatase, rutile, and P25 mixture (≈80% anatase 20% rutile) forms, cerium oxide, amorphous silica, and finally crystalline silica as a representative of a high-toxicity nanomaterial. Titanium dioxide data were grouped for the calculation of the extrapolation factor, to avoid basing the factor on a single or few data points, while the EF was calculated for both the grouped and the single type TiO 2 . For both cases, i.e., in vitro and in vivo, lung inflammation, that is, the release of (pro-)inflammatory factors, was chosen as the relevant end point, since it is considered an important mode of action through which nanomaterials cause toxic effects; moreover, multiple studies showed a correlation between in vitro and in vivo indicators of inflammation, 21−24 suggesting that in vitro tests may be able to measure early events leading to acute lung inflammation. 25 Acute inflammation may become chronic if the exposure is not halted and the inflammation resolved, 26 and more serious diseases such as lung fibrosis may develop. 27−29
Data Collection. In Vitro Data. A literature search was conducted using Google Scholar and Scopus, using various combinations of the following keywords: "nanomaterial name", "in vitro", "inflammation", "toxicity", "macrophages"; to find data for human or animal cells, these additional keywords indicating the species or macrophages cell line were used: "mouse", "rat", "murine", "THP-1", "RAW264.7", "J774A.1", "HMDM", "NR8383". Moreover, the data set published in Romeo et al. 19 was also used as a data source.
The criteria for inclusion of data from a study were as follows: a) used a monoculture of human, rat, or mouse macrophages; b) tested spherical particles; c) tested the release of pro-inflammatory cytokines (TNF-α, IL-1β, IL-6, IL-8, MIP-2); d) included at least two doses plus negative control; e) included all parameters needed for the use of the Combined Dosimetry model CoDo, as described in Romeo et al. 19 From 26 publications, we extracted 141 dose−response data sets, 59 using human cells, 35 for rat cells, and 47 for mouse cells.
In Vivo Data. In vivo data was collected from the literature and from the data set published in Romeo et al. 19 using a combination of the following keywords: "nanomaterial name", "rat", 'mouse", "in vivo", "toxicity", "lung", "inhalation". The inclusion criteria were as follows: a) rat or mouse as the animal; b) at least two doses tested in addition to the negative control; c) neutrophil (PMN) influx as a number or percentage in Bronchoalveolar Lavage Fluid (BALF) as the end point; d) the exposure time lasted at maximum 1 week; e) the postexposure time was at maximum 72 h if the particles were delivered via an intratracheal instillation; f) either the specific surface area of the particles or the primary particle diameter was reported.
155 dose−response data sets, 109 using rats and 46 using mice, were extracted from 30 publications.
Simulation of Particle Deposition and Retention. For in vitro data, the Combined Dosimetry model CoDo was used to simulate the deposition of the particles on the cells, determined by sedimentation and diffusion processes. 19 For in vivo data, when the particles were administered via inhalation, the Multiple-Path Particle Dosimetry model (MPPD) 30,31 was used to calculate the amount of particles retained in the animal alveoli, while for instillation, we assumed 100% deposition in the lung. The parameters used for both models are reported in the Supporting Information. Whenever possible, the retained dose was preferred to the deposited dose as it has been shown to better correlate with the effects measured in the animal. 32 Both in vitro and in vivo deposited/retained doses were normalized by the surface area of the cell culture well or the animal alveoli, respectively.
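As a concrete illustration of this normalization, the snippet below converts an administered amount into a dose per unit surface area. It is a simplified sketch with invented numbers: the deposited and retained fractions are treated as given inputs, whereas in the study they come from the CoDo and MPPD simulations.

```python
def in_vitro_dose_per_area(concentration_mg_ml, volume_ml, deposited_fraction, well_area_cm2):
    """Deposited in vitro dose in mg per cm2 of cell culture well."""
    administered_mg = concentration_mg_ml * volume_ml
    return administered_mg * deposited_fraction / well_area_cm2

def in_vivo_dose_per_area(administered_mg, retained_fraction, alveolar_area_cm2):
    """Retained in vivo dose in mg per cm2 of alveolar surface (instillation: fraction = 1.0)."""
    return administered_mg * retained_fraction / alveolar_area_cm2

# Invented example values
print(in_vitro_dose_per_area(0.1, 0.2, 0.35, 0.33))  # mg/cm2 deposited on the cells
print(in_vivo_dose_per_area(0.05, 1.0, 4000.0))      # mg/cm2 for an instilled dose in a rat lung
```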
Calculation of Benchmark Doses. The Benchmark Dose (BMD) approach was chosen due to its recognition in the risk assessment and LCA communities as a way to determine toxicological dose descriptors from dose−response data sets. 33−35 In this approach, a dose−response curve is fit on the data, and the dose causing a certain response over the control (the Benchmark Response (BMR)) is identified, together with the uncertainty of such a value. 33 Since the surface area was identified in multiple studies as a more relevant dose metric than mass, 36,37 the deposited/ retained doses were transformed from mass to surface area doses using the specific surface area (SSA) of the particles; when not reported, the SSA was calculated from the primary particle diameter of the particles by assuming a perfectly spherical shape. A Benchmark Response (BMR) of 20% was chosen for the BMD calculation, done with the PROAST software. 38,39 The percentage of neutrophils in BALF was considered a quantal response, while other end points were considered continuous responses. Whereas Pennington et al. 34 proposed the use of the ED 10 or BMD 10 for the linear extrapolation of risk at low doses when calculating an EF (in place of the ED 50 ), we chose a BMD 20 , equivalent to the ED 20 , since such change is considered a sign of low inflammation 40,41 and still resides in the low-dose region of the dose−response curve. 42 Calculation of In Vitro-to-In Vivo Extrapolation Factors. The calculation of in vitro-to-in vivo extrapolation factors is done in parallel for each nanomaterial, for rat and mouse animal and cell lines and for the two in vivo end points (number of PMN and PMN percentage). For each group of data, we calculated the ratio between each combination of in vivo and in vitro BMD 20 values. A nonparametric bootstrapping procedure was applied to estimate the distribution of the median in vivo−in vitro ratio. The use of the median is more robust compared to the mean for non-normal distributions. 43 Then, we removed the outliers according to the 1.5·IQR rule, which identifies as outliers those points that have a distance from the 0.25 and 0.75 quantiles of at least 1.5 times the interquartile range (IQR). 44 The in vitro-to-in vivo extrapolation factor estimated via the bootstrapping procedure is the arithmetic mean of the estimated population of ratios, after the removal of outliers.
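Two of the operations described here, the sphere-based specific surface area and the bootstrapped median ratio with 1.5·IQR outlier removal, can be sketched as follows. This is an illustrative reimplementation of the described procedure rather than the authors' code, and the example inputs are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def ssa_from_diameter(diameter_nm, density_g_cm3):
    """Specific surface area (m2/g) of an ideal sphere: SSA = 6 / (density * diameter)."""
    diameter_m = diameter_nm * 1e-9
    density_kg_m3 = density_g_cm3 * 1000.0
    return 6.0 / (density_kg_m3 * diameter_m) / 1000.0  # m2/kg -> m2/g

def ivive_factor(bmd20_in_vivo, bmd20_in_vitro, n_boot=5000):
    """Mean of the bootstrap distribution of the median in vivo / in vitro BMD20 ratio
    after removing outliers with the 1.5*IQR rule; also returns the 95th percentile."""
    ratios = np.array([v / w for v in bmd20_in_vivo for w in bmd20_in_vitro])
    medians = np.array([np.median(rng.choice(ratios, size=ratios.size, replace=True))
                        for _ in range(n_boot)])
    q1, q3 = np.percentile(medians, [25, 75])
    iqr = q3 - q1
    kept = medians[(medians >= q1 - 1.5 * iqr) & (medians <= q3 + 1.5 * iqr)]
    return kept.mean(), np.percentile(kept, 95)

# Invented example: 25 nm anatase TiO2 (~3.9 g/cm3) and two small sets of BMD20 values
print(ssa_from_diameter(25, 3.9))                              # roughly 60 m2/g
print(ivive_factor([2.0, 3.5, 1.2], [4.0, 6.5, 5.1, 3.3]))
```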
Calculation of Human Toxicity EFs from In Vitro Data. For each particle, the calculation of the in vitro-based EFs was done following these steps: 1. Calculate the median BMD 20 from in vitro human data via nonparametric bootstrapping; 2. Multiply by the in vitro-to-in vivo extrapolation factor to obtain the human BMD 20 in dose per cm 2 lung; 3. Multiply by the human alveoli surface area to obtain the total retained dose in the lung; 4. Divide by the retention rate to find the intake dose. The retention rates were calculated via the MPPD model for particles with an aerodynamic diameter between 10 nm and 1 μm; since the retention rate is not constant over time, a 7-day continuous exposure was chosen (same exposure limit as for the selection of animal studies).
The maximum and minimum rates were then used to obtain a range of intake doses; 5. Divide by 7 to find the daily intake dose; 6. Divide by 5 to extrapolate from subacute BMD 20 to chronic BMD 20 with the extrapolation factor from Vermeire et al.; 45 7. Convert to lifetime intake by multiplying by 365 days and 70 years; 8. Convert the lifetime chronic BMD 20 from cm particle 2 lifetime intake to (m particle 2 /g particle )·kg intake , so that the unit of the EF will be consistent with the one of published EFs (see the SI for conversion steps); the EF then follows from this lifetime dose descriptor, analogous to Pennington et al., 34 where EF = 0.1/ED 10 .
Calculation of Human Toxicity Effect Factors from Animal Data. As a comparison, EFs were calculated from the collected animal data: 1. Calculate the median BMD 20 from animal data via nonparametric bootstrapping; 2. Multiply the median animal BMD 20 by the animal alveolar surface area to find the total retained dose; 3. Extrapolate to the retained dose in human using the ratio between the human alveoli surface and the animal alveoli surface, as in Fransman et al. 46 After obtaining the human BMD 20 as retained dose, the EF was calculated following steps 3 to 8 from the previous section.
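Putting the enumerated steps together, a simplified calculation might look like the sketch below. It rests on explicit assumptions: the human alveolar surface area (~63 m2) and all example inputs are placeholders not given at this point in the text, the retention rate spans the 6%-24% range quoted in the Results, and the final conversion from total particle surface to (m2/g)·kg intake and to an EF at the 20% benchmark response is my reading of step 8 and of the EF = 0.1/ED10 analogy, not a procedure checked against the Supporting Information.

```python
def in_vitro_based_ef(bmd20_human_cells, ivive, retention_rate,
                      human_alveolar_area_cm2=6.3e5):  # ASSUMPTION: ~63 m2 alveolar surface
    """Sketch of steps 2-8: from an in vitro human BMD20 (particle cm2 per cm2 of culture
    well) to an effect factor in cases/((m2/g)*kg intake)."""
    bmd20_lung = bmd20_human_cells * ivive                   # step 2: dose per cm2 of lung
    retained_cm2 = bmd20_lung * human_alveolar_area_cm2      # step 3: total retained dose
    intake_week_cm2 = retained_cm2 / retention_rate          # step 4: 7-day intake dose
    intake_day_cm2 = intake_week_cm2 / 7.0                   # step 5: daily intake dose
    chronic_day_cm2 = intake_day_cm2 / 5.0                   # step 6: subacute -> chronic
    lifetime_cm2 = chronic_day_cm2 * 365 * 70                # step 7: lifetime intake
    # Step 8 (assumed conversion): a dose in (m2/g)*kg equals SSA [m2/g] times mass intake
    # [kg]; since SSA * mass is the total particle surface, this is the lifetime surface
    # in m2 divided by 1000, independent of the actual SSA value.
    lifetime_dose = lifetime_cm2 * 1e-4 / 1000.0
    # ASSUMPTION: linear low-dose slope at the 20% benchmark response (0.2/BMD20),
    # by analogy with the cited EF = 0.1/ED10 convention.
    return 0.2 / lifetime_dose

def human_bmd20_from_animal(bmd20_animal, animal_alveolar_area_cm2,
                            human_alveolar_area_cm2=6.3e5):
    """Animal-data route (steps 1-3): total retained dose in the animal scaled by the
    ratio of human to animal alveolar surface areas."""
    retained_animal = bmd20_animal * animal_alveolar_area_cm2
    return retained_animal * (human_alveolar_area_cm2 / animal_alveolar_area_cm2)

# Invented inputs; varying the retention rate over 0.06-0.24 spans the reported EF range.
print(in_vitro_based_ef(5.0, 0.5, retention_rate=0.06))
print(in_vitro_based_ef(5.0, 0.5, retention_rate=0.24))
```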
Calculation of Uncertainties. The uncertainty of LCIA extrapolation factors is expressed by the dispersion factor k, which indicates how much a factor x might deviate from the median (M), with a 95% probability (P): 47 P(M/k ≤ x ≤ k·M) = 0.95. For the in vitro-to-in vivo extrapolation factors, we calculated the dispersion factors from the 95th percentile of the bootstrap distribution, after the removal of outliers, with the formula from Huijbregts et al., 48 which does not require any assumption on the shape of the data distribution. The uncertainty of the final EF was calculated as a combination of the dispersion factors of the extrapolation factors, according to Slob. 47

RESULTS AND DISCUSSION

Benchmark Dose Values. 109 BMD 20 values were obtained from the in vitro data, most of them regarding human and mouse cells; the values ranged over multiple orders of magnitude, in particular for the larger data sets, i.e., amorphous silica and titanium dioxide (Figure 2 and Table S1). Such differences were due to the collected data rather than the deposition simulations: the particle concentrations used in the studies ranged from 1 × 10^-5 to 1.7 mg/cm3, and the deposited doses ranged from 1 × 10^-6 to 0.7 mg/cm2, or 1.27 to 8.8 × 10^3 cm2/cm2 when using the surface area dose. We did not observe any trend based on the cytokine considered (TNF-α, IL-1β, IL-6, IL-8, MIP-2), supporting our choice of aggregating them in a unique end point called "cytokine release".
For in vivo data, we obtained 103 BMD 20 values, 59 considering the absolute number of neutrophils as end point (Figure 3) and 44 considering the percentage of neutrophils as end point (Figure S1). Also in this case, the BMD 20 values had a very broad range (Table S2), which is linked to the original data rather than the deposition calculation, since the deposition rate was set as constant in the case of instilled nanomaterials and only spread over an order of magnitude for the administration via inhalation. The wide range of both in vitro and in vivo BMD 20 can only be explained by the differences in material properties and experimental conditions of the original studies. The impact of such factors on the biological response has been highlighted in multiple publications. 19,41,49,50 While the nanomaterials we considered are the most studied, our constraints for the inclusion of data are quite stringent. Only studies with a comprehensive characterization of the particle physicochemical properties were included, since this information was necessary for the simulation of the particle behavior in the in vitro system. For example, multiple studies had to be discarded because they did not report the diameter of the agglomerated particle in the media. For the calculation of the BMD 20 , at least two doses plus the control were needed to fit a dose-response curve over the data, thus excluding those studies where only one dose was tested (This was often the case for in vivo studies.). Last, those data sets without a clear dose-response relationship were discarded as well by the BMD modeling process. This explains why some nanomaterials and species only have a few data points.

Figure 2. Distribution of the BMD 20 in particle surface area per cell culture area calculated from in vitro data for the cytokine release end point, for each particle and cell species. The colored boxes represent the interquartile range, and the whiskers represent 1.5 times the interquartile range.

Figure 3. Distribution of the BMD 20 in particle surface area per lung surface area calculated from in vivo data for the neutrophil influx end point, for each particle and animal species. The colored boxes represent the interquartile range, and the whiskers represent 1.5 times the interquartile range.
In Vitro-to-In Vivo Extrapolation Factors. Table 1 reports the in vitro-to-in vivo extrapolation factors calculated from rat and mouse data and considering the number of PMN as end point. Cerium oxide had to be excluded since it did not have corresponding in vitro and in vivo data. The extrapolation factors calculated for neutrophil percentage are available in Table S3.
The extrapolation factors obtained from the ratios of in vivo and in vitro BMD 20 values (Figures S2 and S3) via bootstrapping are different for each particle and show a level of uncertainty dependent on the number and variability of the BMD 20 values. Depending on the species considered, amorphous silica and titanium dioxide follow opposite trends. Such a difference does not support the hypothesis that a unique extrapolation factor might be valid for low-toxicity lowsolubility particles; however, given the wide spread of BMD 20 values, some data sets (e.g., for amorphous silica) are so small that it is questionable whether they correctly represent the distribution of the BMD 20 . A more reliable approach to test this hypothesis would be to have triads of in vivo data, in vitro data using mouse cells, and in vitro data using human cells obtained by testing in (as much as possible) the same exposure conditions and using the same nanomaterial. In this case, the comparison of the ratios of multiple nanomaterials would not suffer from the large variability of the BMD 20 values. Unfortunately, the current lack of such fit-for-purpose data prevents us from applying this approach today.
Human Toxicity Effect Factors from In Vitro Data. The EFs have been calculated from in vitro human data as ranges ( Table 2), to account for the effect that the aerodynamic particle size (which in our case was unknown) has on the retention of the particles in the human lung. Considering particles with an aerodynamic diameter between 10 nm and 1 μm, the retention rate ranged between 6% and 24% of the intake dose considering 7 days of continuous exposure.
The calculated EFs significantly differ depending on which species was used to calculate the in vitro-to-in vivo extrapolation factors. In the case of rat, the calculated EFs correctly represent the higher toxicity of crystalline silica, but amorphous silica and titanium dioxide, both considered lowtoxicity materials, show a great difference in potency, with the latter multiple orders of magnitude more toxic. Looking at the EFs using mouse data, both particles show a similar low toxicity, though no data is available to compare it with crystalline silica. This difference is explained by the fact that titanium dioxide is reported in the data we collected as very inflammogenic for rats in vivo, while the same effect was not observed for amorphous silica or for mice.
The higher susceptibility to inhaled nanomaterials of rats compared to mice due to a faster lung overload and a stronger inflammatory response is well-known. 51,52 This suggests that, despite being frequently used in animal studies, the rat might be a precautionary choice rather than a representative one for the effects of particles on human lungs.
Comparison between In Vitroand In Vivo-Based Human Toxicity Effect Factors. Table 3 and Figure 4 show the comparison between the in vitro-based EFs, the in vivobased EFs calculated from the same animal data used for the in vitro-to-in vivo extrapolation factors (see also Table S4), and the EFs available from the literature and obtained from chronic or subchronic animal studies. The results show a good agreement between the in vitro EFs and the in vivo EFs we calculated; while these values are partially correlated, since the in vivo BMD 20 values are used to calculate the in vitro-to-in vivo extrapolation factors, it is also true that the extrapolation factor depends also on the in vitro animal BMD 20 values and that different extrapolation procedures are used for the two data sources to calculate the EFs. When comparing also with the published EFs for titanium dioxide, both our in vitro and in vivo EFs fall in the same range when mouse data is used, while the EFs based on rat data confirm the strong response this species has to this nanomaterial. No EFs have been published for the other materials, but the good correspondence between our calculated in vitro and in vivo EFs for amorphous silica and titanium dioxide from mouse data suggests this might be a representative result as well.
Another interesting point is that while our data was restricted to short-term inflammation, published data referred to longer term studies looking at a variety of effects such as alveolar epithelial cell hypertrophy, cell necrosis, histopathological findings, and neutrophil levels in BALF and often considered the No Observed Adverse Effect Level (NOAEL) or the Lowest Observed Adverse Effect Level (LOAEL) rather than an ED 50 . This seems to suggest that short-term effects might be predictive of more chronic effects (for which they are suggested to be necessary but not sufficient precursors, see, e.g., the proposed Adverse Outcome Pathway for lung fibrosis 56 ), even though more analyses are needed to confirm this hypothesis.

Table 3. Comparison between the In Vitro-Based EFs (Highlighted in Light Blue), the In Vivo-Based EFs Calculated from Our Data Set, and the EFs Available from the Literature. All EFs are reported as cases/(m 2 /g·kg intake). When the EF was not reported with respect to the particle surface area, a default specific surface area of 48 m 2 /g was used, as in Buist et al. 53

Figure 4. Comparison between the ranges of EFs (on a log scale) calculated from in vitro rat and mouse data, in vivo rat and mouse data (our data sets), and the EFs available from the literature. When the EF was not reported with respect to the particle surface area, a default specific surface area of 48 m 2 /g was used, as in Buist et al. 53 Crys. SiO 2 = crystalline silica.

Implications for Further Research. Published experimental studies suggest that inflammation might be a promising predictive end point to be tested in vitro. 22,57−59 Other studies point out that the surface area better correlates with the lung effect of nanomaterials compared to mass doses. 24,36,60,61 Multiple studies address the importance of considering the deposited dose in vitro instead of the nanomaterial concentration for a better characterization of the dose-response relationship. 62−64 We put together these pieces of information in developing our EF calculation strategy and tested it with data collected from the literature. The goal was to calculate in vitro-to-in vivo extrapolation factors, which can then be used similarly to any other extrapolation factor for the estimate of human toxicity EFs. Ideally, once an in vitro-to-in vivo extrapolation factor has been estimated and its predictive power confirmed for multiple nanomaterials, there would be no need for animal and in vitro animal data but only for in vitro human data.
The comparison with published EFs can be used as a benchmark for the in vitro-based EFs, to understand whether this new data pool provides comparable results. This was the case for titanium dioxide when using the in vitro-to-in vivo extrapolation factor based on mouse data, where the EF is in the same range as published values. The good coverage of the in vivo and in vitro data used to calculate the extrapolation factor for titanium dioxide makes the factor more robust, since the real distribution of the BMD 20 values is better approximated by our samples.
Despite this promising result, the difficulty in calculating the extrapolation factors for the other nanomaterials shows the limitations of applying our approach with the currently available data. The main challenge we face is the quality and consistency of the toxicological data. For example, only a few BMD 20 values were available for amorphous silica in vivo, which questions the reliability of the extrapolation factor; even worse, for cerium oxide there were no corresponding in vivo and in vitro data, preventing the calculation of any factor. The wide toxicity range of in vitro and in vivo data confirms that the particle properties and the experimental conditions can have a huge impact on the results, hindering their comparison. Using a median BMD 20 obtained from a large data set is a better choice than using a single value from a specific study and allows keeping track of the BMD 20 uncertainty, but it can produce skewed results when the data is scarce and is not a representative sample of the BMD 20 distribution.
For the in vitro-to-in vivo extrapolation factors, coupled in vitro and in vivo data (i.e., obtained using similar particles and experimental conditions) for animals, animal cells, and human cells are needed to verify the parallelogram approach and investigate whether a single extrapolation factor might be valid for multiple particles. Moreover, removing the variability connected to the differences in experimental conditions would reduce the amount of data required to describe the distribution of the BMD 20 values, as we would expect the values to be more precise. However, on a provisional level and keeping in mind the factor 10 uncertainty, an in vitro-to-in vivo extrapolation factor of ca. 5 × 10 −1 (halfway between the TiO 2 and amorphous silica factors) might be used to compare nonsoluble particles, especially if the in vitro human data has a high level of comparability (e.g., same experimental conditions).
In conclusion, we are not yet there for a consistent and systematic calculation of in vitro-based EFs. However, we showed a promising method to calculate these factors and identified which further steps are needed to reduce the uncertainty and improve and expand the results. One example above all, the conduction of fit-for-purpose in vitro experiments on human and animal cells, could be done systematically for multiple particles, thus providing a way to refine and test further our procedure without the need for animal testing.
We believe that our work not only can help direct future interdisciplinary efforts to tackle the critical aspects of the use of in vitro data in LCIA but also be of interest for the Risk Assessment community, which is facing similar challenges in extrapolating human responses without the use of animal data. | 2022-06-04T06:23:12.248Z | 2022-06-03T00:00:00.000 | {
"year": 2022,
"sha1": "bb3147e10e23902734bf4a9805704afdec951872",
"oa_license": "CCBYNCND",
"oa_url": "https://www.dora.lib4ri.ch/empa/islandora/object/empa:29464/datastream/PDF/Romeo-2022-Approach_toward_in_vitro-based_human-(published_version).pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "7aeb90bee08e7e1b11229d49cf00521f12687827",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5789046 | pes2o/s2orc | v3-fos-license | 1H Nuclear Magnetic Resonance (NMR) Metabolomic Study of Chronic Organophosphate Exposure in Rats
1H NMR spectroscopy and chemometric analysis were used to characterize rat urine obtained after chronic exposure to either tributyl phosphate (TBP) or triphenyl phosphate (TPP). In this study, the daily dose was 1.5 mg/kg body weight for TBP, or 2.0 mg/kg body weight for TPP, administered over a 15-week period. Orthogonal signal correction (OSC)-filtered partial least squares discriminant analysis (OSC-PLSDA) was used to predict and classify exposure to these organophosphates. During the development of the model, the classification error was evaluated as a function of the number of latent variables. NMR spectral regions and corresponding metabolites important for determination of exposure type were identified using variable importance in projection (VIP) coefficients obtained from the OSC-PLSDA analysis. As expected, the model for classification of chronic (1.5–2.0 mg/kg body weight daily) TBP or TPP exposure was not as strong as the previously reported model developed for identifying acute (15–20 mg/kg body weight) exposure. The set of most strongly impacted metabolites identified for chronic TBP or TPP exposure differed slightly from the metabolites previously identified for acute exposure. These metabolites were then mapped onto metabolic pathways and ranked, allowing the metabolic response to chronic organophosphate exposure to be addressed.
Introduction
Environmental exposure to organophosphates (OP) continues to be a concern due to the prevalent use of these chemicals in industrial applications.The identification of OP in almost every environmental matrix, including surface and ground water, air, soil, sewage and sludge, demonstrates the extent and persistence of these pollutants [1][2][3][4][5][6][7].It has also recently been demonstrated that urinary metabolites of OP can be seen in the general human population at background levels [8].Tributyl phosphate (TBP) and triphenyl phosphate (TPP), the focus of this paper, are used in aircraft hydraulic fluids and lubricant oils, flame retardant substitutes for halogenated compounds in plastics and resins, non-flammable plasticizers in acetate, polyester and polyurethane films, and antifoaming agents in concrete.TBP is also used during solvent extraction of nuclear waste and reprocessing of nuclear material based on the PUREX (Plutonium-Uranium Reduction Extraction) process.
Numerous studies suggest that TBP and TPP are neurotoxic (delayed neurotoxicity) and may have possible teratogenic effects [9][10][11][12][13][14][15].An overview of these findings is available from the International Programme of Chemical Safety (IPCS).On the other hand, the metabolism of TBP and TPP has seen fewer investigations.A 14 C-labeling study showed that there are 11 different phosphate containing metabolites produced directly from TBP, with the mono-and di-butyl phosphates being the dominant metabolite species produced [18,19].Additional studies identified sulfur containing metabolites, implying that glutathione-S-transferase is involved in the metabolism of TBP [18,19].A single study involving the metabolism of TPP in liver homogenates determined that diphenyl phosphate is the primary product [20].The metabolism of other organophosphates (primarily OP pesticides) commonly yields dialkyl phosphates, and as such these metabolites are used as biomarkers [21].
More recently, our group used nuclear magnetic resonance (NMR)-based metabolomics studies to explore correlating environmental exposure of TBP or TPP to changes in the urine metabolite profile of rats [22,23].Metabolomic/metabonomics is a very powerful tool in determining the response of an organism to chemical intake or exposure.Metabolomics couples advanced spectroscopic detection techniques with multivariate or chemometric analysis to identify the metabolite signature associated with some environmental chemical exposure.Several excellent articles are available describing the use of NMR as applied to metabolites [24][25][26][27][28][29].We have previously reported the metabolomics response of rats to an acute (one time) TBP or TPP exposure (TBP, 15 mg/kg body weight and TPP, 20 mg/kg body weight).In the case of TBP, there were three directly produced metabolites, dibutyl phosphate (DBP), N-acetyl-(S-3-hydroxybutyl)-L-cysteine and N-acetyl-(S-3-oxobutyl)-L-cysteine identified in the urine of treated rats [23].In addition, it was shown that changes in the endogenous urinary metabolites could also be correlated with TBP exposure.A multivariate/chemometric analysis of the NMR spectra of urine from rats exposed to either TBP or TPP has also been reported.Using orthogonal signal correction (OSC)-filtered partial least squares discriminate analysis (OSC-PLSDA) a series of important metabolites were identified and ranked based on their ability to provide classification during the analysis [22].The endogenous metabolites contributing to the exposure classification were taurine, betaine, 2-oxoglutarate, creatine and citrate; suggesting an impact on the citrate (TCA) cycle.
In this paper, we present an extension to these 1 H NMR metabolomic studies of acute TBP and TPP exposure by evaluating the metabolomic response to a chronic, lower dose TBP and TPP exposure in rats over a 15-week period.The OSC-PLSDA method previously employed for the acute studies was also used here, thus providing a direct comparison of metabolites responsible for identification of acute and chronic organophosphate exposure.
1 H NMR of Urine Following Chronic TPP and TBP Exposure
The normalized 1H NMR spectra of urine samples collected from TBP exposed (5 rats), TPP exposed (5 rats) and control animals (7 rats) for seven different time points during the total 15-week study are shown in Figure 1. As expected for urine samples, resonances for numerous metabolites are observed, with changes in the overall metabolite profile occurring as a function of both exposure time and exposure class. Even though previous studies of acute TBP and TPP exposure have identified specific spectral regions and metabolites that are impacted by exposure, we were unable to identify simple unique NMR spectral signatures that correlated with a given exposure class over all animals in that set. Note that the resonances previously assigned to dibutyl phosphate (DBP), an intermediate metabolic degradation species of TBP, would be observable at δ = +0.9 ppm. This spectral region does not reveal any significant intensity variation during the chronic exposure studies, suggesting that at this level of insult, the native detoxification mechanism(s) reduce the concentration of DBP below the NMR detection limits. There are also some metabolite spectral signatures that increase and then decrease during the exposure process, such as the singlet resonance at δ = +1.31 ppm, and the sharp singlet at δ = +1.91 ppm (Figure 1). Similarly, there are no large changes in the aromatic region that could be associated with the production of the diphenyl phosphate byproduct of TPP, again suggesting that the concentration of this metabolic degradation species is below NMR detection limits. To identify spectral regions that correlate with OP exposure, a chemometric analysis of the entire exposure data set was undertaken, as described below. There are numerous multivariate methods that could be applied, including principal component analysis (PCA), soft independent modeling of class analogy (SIMCA), linear discriminant analysis (LDA), partial least squares (PLS), PLS-discriminant analysis (PLS-DA), along with non-linear methods such as hierarchical cluster analysis (HCA), self-organizing maps (SOMs), non-linear mapping (NLM) and genetic programming (GP) [29][30][31]. For this paper we have elected to utilize the same OSC-PLSDA method previously implemented in the analysis of acute TBP and TPP exposure [22], allowing direct comparison between the two studies.

Figure 1. 600 MHz 1H nuclear magnetic resonance (NMR) spectra of rat urine collected from control (blue), tributyl phosphate (TBP) (magenta) or triphenyl phosphate (TPP) (red) exposed animals over a 15-week period with chronic administered doses. The NMR spectra were referenced using the internal chemical shift indicator DSS (500 µM), with the overall signal intensity normalized using the quotient method. The water spectral region (4.1 to 5.5 ppm) was removed prior to analysis.
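The preprocessing steps named in the caption (quotient normalization and removal of the water region) follow a standard workflow that can be sketched as below. This is a generic illustration rather than the processing script used in the study; in particular, the choice of the median spectrum as the quotient reference is an assumption.

```python
import numpy as np

def preprocess_nmr(spectra, ppm, water_region=(4.1, 5.5)):
    """Quotient (probabilistic quotient) normalization of binned 1H NMR spectra after
    excluding the water region. `spectra` is (n_samples, n_points), aligned with `ppm`."""
    keep = (ppm < water_region[0]) | (ppm > water_region[1])   # drop 4.1-5.5 ppm
    x = spectra[:, keep]
    x = x / x.sum(axis=1, keepdims=True)                       # total-area normalization first
    reference = np.median(x, axis=0)                           # assumed reference spectrum
    quotients = np.median(x / reference, axis=1)               # most probable dilution factor
    return x / quotients[:, None], ppm[keep]

# Tiny synthetic example: 3 spectra on a 0.5-9.5 ppm axis
ppm = np.linspace(0.5, 9.5, 1000)
spectra = np.abs(np.random.default_rng(1).normal(1.0, 0.1, size=(3, ppm.size)))
normalized, ppm_kept = preprocess_nmr(spectra, ppm)
print(normalized.shape, ppm_kept.min(), ppm_kept.max())
```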
OSC-PLSDA Model Development for Chronic TPP and TBP Exposure
Orthogonal signal correction (OSC)-filtered partial least squares discriminant analysis (OSC-PLSDA) was employed as previously described [32]. In this study, three classes were utilized: TBP-exposed (class 1), TPP-exposed (class 2) and control (class 3). The OSC-PLSDA method attempts to identify the spectral regions that are responsible for separation of the different classes and predicts the identity of each sample, with a value of 1 designating that a sample is included within a class and a 0 if it is not. During the development of the OSC-PLSDA model the number of latent variables (LV) required for classification needs to be identified. This was estimated by monitoring the change in classification error (% of samples misclassified) across the data set as a function of the number of latent variables. The classification error commonly drops quickly as the first few LV are incorporated into the model, followed by a slower reduction for higher numbers of latent variables. This allows the user to choose the number of latent variables to employ based on the magnitude of classification error that is acceptable. For the present study we have chosen a classification error of <5% prior to cross validation. To prevent overfitting of the data set, the minimum number of latent variables meeting this error criterion was employed.
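A bare-bones version of this class coding and latent-variable selection can be written with a generic PLS regression, as in the sketch below. It deliberately omits the OSC filtering step and is not the software used in the study; it only illustrates how a 1/0 class matrix and a sweep over the number of LV produce a classification-error curve.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_error_curve(X, y, max_lv=15):
    """Fit PLS-DA with 1..max_lv latent variables and return the in-model
    misclassification rate for each number of LV."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)        # 1/0 class membership matrix
    errors = []
    for n_lv in range(1, max_lv + 1):
        pls = PLSRegression(n_components=n_lv).fit(X, Y)
        predicted = classes[np.argmax(pls.predict(X), axis=1)]
        errors.append(np.mean(predicted != y))
    return np.array(errors)

# Synthetic stand-in for 118 binned urine spectra in three classes
rng = np.random.default_rng(0)
X = rng.normal(size=(118, 200))
y = np.repeat(np.array(["control", "TPP", "TBP"]), [49, 34, 35])
print(plsda_error_curve(X, y, max_lv=8))
```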
The classification error as a function of the number of LV for prediction of each exposure type during the 15-week chronic exposure is shown in Figure 2. The OSC-PLSDA model was originally evaluated using the NMR data from the entire 15-week sampling period (black symbols).Between six and eight LV were required to reduce the classification error below 5%, and in many instances produced perfect classification when using eight or more LV.The TBP classification error was slightly higher than the error for TPP or control classifications.The number of LV for this chronic model is similar to the six LV employed for the acute TBP and TPP exposure modeling [22], but for the acute model only six LV were required to obtain a classification error of zero (all samples correctly classified).
The classification errors obtained during cross validation (CV), using a venetian blind method (see experimental for details), are shown in Figure 2B and represent an average over all CV trials. There is clearly an increase in the error for all three classes, with a higher number of LV required to obtain the desired target 5% error level. For this chronic exposure NMR data set, between 10 and 14 LV were required to obtain classification errors below 5%. An exception is the identification of the TPP-exposed animals, which hovers near a 10% classification error (under CV) until more than eighteen LV are employed. Table 1 provides a select summary of the classification errors with increasing number of LV. The high number of variables required for classification following chronic exposure is somewhat disappointing, and is in contrast to the six LV required to provide excellent classification in the acute exposure [22]. This result suggests that while the metabolic response to TBP and TPP exposure as monitored by 1H NMR analysis of the rat urine is present, it is not particularly strong, nor is it an easily recognized response. The question arises whether the exposure to TBP or TPP produces a delayed metabolic response, and whether classification might be improved by only considering the NMR data from later weeks. To explore this concern, the classification error was determined as a function of the number of latent variables (Figure 2) for subsets of the data involving weeks 2 through 15 (red), weeks 4 through 15 (green) and weeks 8 through 15 (blue). For classification of the control and TPP animals there is a slight improvement in the prediction error when only data from weeks 4 through 15 are included in the analysis, perhaps suggesting a delayed (but weak) metabolic response to TPP exposure. For example, the error in TPP prediction using 12 LV drops from 13.3% to 5.8%, while the control classification error drops from 3.95% to <0.1%, by only evaluating the later exposure weeks. This delayed metabolic response was also observed in the variation of VIP scores as a function of time for the important metabolites (Figure S1, supplemental material), and is discussed further in section 2.3.
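The venetian blind scheme mentioned above assigns every s-th sample to the same cross-validation fold. A generic implementation of the fold assignment and CV loop might look like the following; the number of splits actually used is not stated in this excerpt, so it is left as a parameter.

```python
import numpy as np

def venetian_blind_folds(n_samples, n_splits):
    """Fold index per sample: samples 0, n_splits, 2*n_splits, ... share fold 0, and so on."""
    return np.arange(n_samples) % n_splits

def cross_validated_error(X, y, n_splits, fit_predict):
    """Average misclassification rate over venetian-blind folds; `fit_predict` takes
    (X_train, y_train, X_test) and returns predicted labels for the held-out samples."""
    folds = venetian_blind_folds(len(y), n_splits)
    errors = []
    for k in range(n_splits):
        test = folds == k
        predictions = fit_predict(X[~test], y[~test], X[test])
        errors.append(np.mean(predictions != y[test]))
    return float(np.mean(errors))

print(venetian_blind_folds(10, 4))  # [0 1 2 3 0 1 2 3 0 1]
```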
In contrast, the TBP classification error increases when the analysis is restricted to the later weeks of chronic exposure. For twelve LV, the TBP classification error jumps from 1.2% to almost 18.5% by going from the full (week 1 through 15) data set to a reduced (week 8 to 15) data set (see Table 1). This increasing prediction error with truncation of the first few weeks of exposure argues that for chronic TBP exposure, the metabolic changes that allow for classification are strongest in the early weeks following TBP exposure. While it is possible to tailor the OSC-PLSDA model for optimal TPP or TBP classification by altering the sampling subset following exposure, we have elected to utilize the entire data set for the remaining analysis discussed below.
Figure 3 shows the OSC-PLSDA classification results using twelve latent variables on the complete 15-week data set. For the original model, the separation of the three classes is excellent, with all 118 urine spectra being correctly classified (classification errors < 1%, Table 1). A perfect classification score of 1.0 (positive) or 0.0 (negative) is shown as a green line in Figure 3 to provide a visual reference. Under cross validation there is increased scatter in the observed errors (Figure 3B), with some samples being misclassified (classification score < 0.6). The TPP classification errors reveal the greatest degree of scatter, which is consistent with the prediction errors shown in Table 1.
The data in Figure 3 is also grouped in time series for each exposure type.For example, samples 1 -49 represent control rats (no TBP or TPP exposure), with week one on the left ending with week 15 on the right.Samples 50-83 are from the TPP exposed rats, with week one on the left and week 15 on the right.Similarly, urine samples 84-118 are from the TBP exposed animals, with week one on the left and week 15 on the right.Inspecting the time variation in the predicted classification score within each group (left to right), there were no large increases/decreases in the performance of the model with time observed.This again supports the argument that there were not any delayed metabolic responses to chronic exposure that become dominant in controlling the classification, and that the entire data set over the exposure period should be employed in the analysis.
Important Metabolite Identification Using VIP Scores
The spectral regions responsible for the classification of TBP or TPP exposure were identified using the variable importance in projections (VIP) coefficients obtained during OSC-PLSDA (See experimental section for definition of VIP, Equation 1).Spectral regions with high VIP coefficients are more important in providing class separation during analysis, while those with very small VIP coefficients provide little contribution to classification.VIP coefficients were obtained for each exposure class; control, TBP-and TPP-exposed.Mapping of these VIP coefficients onto the 1 H NMR spectra is shown in Figure 4 (only the TBP-and TPP-exposed classes are shown), with the colors representing the scaled VIP scores observed for each spectral region, allowing the identification of important metabolites.While there are similarities between the VIP coefficients in each class, inspection of Figure 4 shows that indeed different spectral regions are employed during the classification process.It is important to note that it is not the spectral intensity reflecting the VIP scores (like would be seen in a loadings plot), but the color coding associated with each frequency.For example, in Figure 4 the acetate resonance at δ = +1.91 ppm has a very large VIP score (red), even though the peak intensity for that particular spectrum is small.
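Equation 1 itself is not reproduced in this excerpt, but the commonly used VIP definition (squared X-weights, weighted by the variance in Y explained by each latent variable) can be computed from a fitted PLS model as sketched below; whether this matches the exact equation used in the study is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable importance in projection for a fitted PLSRegression model and its training X."""
    T = pls.transform(X)                 # (n_samples, n_lv) latent variable scores
    W = pls.x_weights_                   # (n_features, n_lv) X weights
    Q = pls.y_loadings_                  # (n_targets, n_lv) Y loadings
    p, h = W.shape
    explained = np.array([(T[:, a] @ T[:, a]) * (Q[:, a] @ Q[:, a]) for a in range(h)])
    w_norm_sq = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (w_norm_sq @ explained) / explained.sum())

# Synthetic example: 60 spectra, 50 spectral bins, 3 exposure classes (one-hot coded)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))
Y = (rng.integers(0, 3, size=60)[:, None] == np.arange(3)).astype(float)
pls = PLSRegression(n_components=6).fit(X, Y)
print(vip_scores(pls, X)[:10])
```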
The 1 H NMR spectral regions with the highest ten VIP coefficients obtained from the chronic OSC-PLSDA are summarized in Table 2.The top VIP-identified spectral regions identified during the previous acute exposure studied are also provided in Table 2 for comparison [22].Identification of the metabolites responsible for resonances in these spectral regions was accomplished using the CHENOMX NMR Suite metabolite spectral library.Some of the important metabolites identified are noted in Figure 4, and in Table 2.There are a few spectral regions with high VIP scores that were not assigned due to the inability to uniquely identify or resolve spectral features in highly overlapped regions.2D NMR experiments such as COSY, TOCSY or HMQC could be pursued to help in identification of these regions, but were not obtained for the current samples.For classification of TPP exposed animals, the spectral regions with the highest two VIP scores were δ = +3.04 and +3.25 ppm, which have been assigned to succinate and betaine, respectively.The remaining top five spectral regions identified from these VIP scores (Table 2) are assigned to the endogenous metabolites acetate and creatine.For the TBP-exposed animals the top two spectral regions identified were δ = +3.0ppm and +2.43 ppm, corresponding to 2-oxoglutarate.The other important metabolites also identified for TBP exposure were acetate, betaine and taurine (Table 2).There are also several other metabolites that have intermediate VIP scores (0.25 to 0.6) that are incorporated into the developed model: these include succinate, citrate and creatine.
The variation of the different VIP scores for the dominant metabolites as a function of exposure week is shown in Figure S1 (supplemental material).It should be emphasized that there is no single metabolite that provides complete classification for chronic TBP or TPP exposure over the entire exposure study.Instead it is a combination of variation in several metabolite profiles that give rise to the classification.While there are large variations in the relative importance of the different metabolites, a few trends should be noted.For the classification of TBP and TPP exposure the VIP scores for acetate and succinate begin relatively low.With increased exposure time the VIP scores for these metabolites increase, becoming >0.3 after 4 to 8 weeks of exposure, and a maximum near week 14.The VIP classification scores for 2-oxoglutarate shows very high values between week 2 and week 8, then decreases while the acetate and succinate VIP scores become more important.While these classification dynamics with exposure time are interesting, it is important to recall that the milestone of this paper was to identify metabolites that could be used for exposure classification.For this reason, the analysis of the entire time series simultaneously is the focus of our results shown in Table 2.
Comparison of Metabolites for Acute and Chronic Exposure
Table 2 summarizes the important metabolites and corresponding VIP scores for both acute and chronic TBP or TPP exposure.The top three metabolites have been color coded for easy comparison between the different exposure classes.Many of the identified metabolites are similar for both acute and chronic classes, with differences in the relative ranking of importance.The chronic exposure does uniquely identify acetate as being important for both TBP and TPP classification, and was not previously observed during the acute exposure studies.Inspection of Figure 1 reveals this metabolite produces the sharp spectral signature (δ = +1.91 ppm) that appears in the later weeks of exposure, but then disappears again by week 15.
Identification of Impacted Metabolite Pathways
To determine which metabolic pathways were impacted by chronic exposure to TBP and TPP, the metabolites with VIP scores > 0.1 (Table 2) were mapped using the MetaboAnalyst 2.0 software (www.metaboanalyst.ca/MetaboAnalysts). For the analysis, the rat (Rattus norvegicus) pathway library was employed together with the hypergeometric test and the out-degree centrality algorithm. The software provided a fit coefficient (p) from the pathway enrichment analysis and an impact factor from the pathway topology analysis for each analyzed pathway. The mapping of 14 different metabolic pathways is shown in Figure 5, with the top five pathways for TBP or TPP exposure summarized in Table 3. While the metabolite list employed (Table 2) is rather limited and provides only two or three metabolite hits for each pathway, the mapping does allow a ranking of the relative importance and identification of different possibilities. The citrate cycle (TCA cycle) was identified as having the highest -log(p) value for both TBP and TPP chronic exposure. The TCA pathway involves changes in cellular energy metabolism and was previously identified as an impacted metabolic pathway for acute TBP and TPP exposure. Mapping to the glyoxylate and dicarboxylate pathway is also indicated as relatively important following chronic TPP exposure, and is related to the TCA cycle. The other identified pathways include the alanine, aspartate and glutamate metabolism, pyruvate, taurine and hypotaurine, and glycolysis metabolic pathways. This is consistent with the perturbation of creatine production in the liver of roaches exposed to the OP pesticide fenitrothion, reported using non-targeted 1H NMR metabolomic studies [33]. Creatine synthesis is initiated in the kidney and then completed in the liver. These same fenitrothion studies also revealed perturbations of the phenylalanine and tyrosine metabolite levels. While OP exposure is known to impact cellular metabolism in a variety of different tissues [34], the metabolite perturbations observed following chronic TBP and TPP exposure involve generic metabolic pathways and do not provide information concerning the specific mechanisms of toxicity or the targeted organs. Additional detailed organ- or tissue-specific studies would be required to address these chronic effects.
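As a rough illustration of the enrichment step described above, the sketch below computes an over-representation p-value for a single pathway with a hypergeometric test, one of the tests MetaboAnalyst offers; the library size, pathway size and hit counts are made-up numbers, not those of this study, and the function name is only for illustration.

```python
import math
from scipy.stats import hypergeom

def pathway_enrichment_p(total_metabolites, pathway_size, query_size, hits):
    # P(X >= hits) when query_size metabolites are drawn from a library of
    # total_metabolites that contains pathway_size members of the pathway.
    return hypergeom.sf(hits - 1, total_metabolites, pathway_size, query_size)

# Illustrative numbers only (not the actual library or pathway sizes used here)
p = pathway_enrichment_p(total_metabolites=1600, pathway_size=20, query_size=9, hits=3)
print(f"p = {p:.3g}, -log(p) = {-math.log(p):.2f}")
```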
Animal Studies
The chronic exposure studies were performed on male Sprague-Dawley rats weighing 200-220 g (Harlan Sprague-Dawley Inc., Indianapolis, IN, USA), which were acclimatized for two weeks prior to the first dose in the animal care facility at the University of Texas Medical Branch (UTMB), Galveston. Seventeen rats were divided into three exposure classes: tributyl phosphate (TBP) exposure (5 rats), triphenyl phosphate (TPP) exposure (5 rats), and a control group (7 rats). Tributyl phosphate (TBP, 98.0% purity, Sigma Aldrich, USA) was dissolved in 1 mL corn oil and administered by gavage at a daily 1.5 mg/kg body weight dose, while TPP (98.0% purity, Sigma Aldrich, USA) was administered daily at a 2.0 mg/kg body weight dose. The control rats received 1 mL of corn oil only. This dosing regime was repeated for a total of 15 weeks (Monday through Friday). On each Friday, the rats were transferred overnight to metabolic cages, with the urine collected Saturday afternoon (~20-22 hours after the last weekly dose) and stored at -80 °C for further analysis. The rats were then returned to a normal cage for the following dose week. The 5-day dose, 2-day non-dose regime was used to reflect possible occupational exposure. The proton (1H) NMR data were only collected for urine samples from weeks 1, 2, 4, 8, 12, 14 and 15. The NMR spectrum for one sample in week eight was corrupted during acquisition and was not included in the data set, with the total data set including 118 NMR spectra.
NMR Studies
The NMR analysis was performed on samples obtained by mixing 100 µl of urine with 650 µl of phosphate buffer, giving a final concentration of 50 mM phosphate (pH = 6.0) and 10% D2O, containing 500 µM DSS (2,2-dimethyl-2-silapentane-5-sulfonic acid) as a chemical shift indicator. The 1H NMR spectra were obtained using a Varian Unity Plus 600 with a three-channel 1H-13C-15N (HCN) 5 mm probe at 25 °C. A standard 1D NOESY pulse sequence was employed, with a 1 s recycle delay, a 1 s water presaturation, 4 dummy scans, 256 scan averages, a 6 µs π/2 pulse width and a 100 ms mixing time (τm). A spectral width of 20 ppm, with 28k complex data points, zero-filling to 64k points prior to Fourier transformation, and apodization using a 0.5 Hz exponential line broadening was used for all experiments. The NMR spectra were transformed, phased, chemical shift referenced (DSS δ = 0 ppm), and baseline corrected using the CHENOMX NMR 7.0 Suite (Edmonton, Canada). The processed NMR data were binned into 0.001 ppm sections prior to analysis. This retains the spectral fine structure, which is important during the variable importance in projection (VIP) score analysis described below. No peak alignment algorithms were employed on these datasets. The water region (4.2 to 5.5 ppm) was removed using the CHENOMX Suite prior to analysis. Due to the non-quantitative signal intensity resulting from proton exchange between water and urea (which varies with the performance of water saturation), the urea spectral region (δ = 4.50 to 5.98 ppm) is commonly removed. In the present study, the urea spectral region was retained as it did not make a major impact on the PLSDA analysis. This is most easily seen in Figure 4, where the VIP score of urea remains relatively low (~0.25), indicating that it is not a critical spectral region for classification.
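A minimal sketch of the fixed-width binning step described above is given below; the 0.001 ppm width and the excluded water region simply mirror the text, while the function itself is an assumed implementation rather than the CHENOMX routine actually used.

```python
import numpy as np

def bin_spectrum(ppm, intensity, bin_width=0.001, exclude=(4.2, 5.5)):
    """Sum a 1D spectrum into fixed-width chemical-shift bins, dropping the excluded (water) region."""
    ppm, intensity = np.asarray(ppm, float), np.asarray(intensity, float)
    keep = ~((ppm >= exclude[0]) & (ppm <= exclude[1]))
    ppm, intensity = ppm[keep], intensity[keep]
    edges = np.arange(ppm.min(), ppm.max() + bin_width, bin_width)
    binned = np.zeros(len(edges) - 1)
    idx = np.clip(np.digitize(ppm, edges) - 1, 0, len(binned) - 1)
    np.add.at(binned, idx, intensity)          # accumulate intensity into each bin
    centers = edges[:-1] + bin_width / 2
    return centers, binned
```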
The processed NMR spectra were transferred at full resolution (no further binning) for analysis in MATLAB 2010b (The Mathworks) using PLS Toolbox 6.7 (Eigenvector Research, Inc.). The data sets were normalized using the Probabilistic Quotient Normalization (PQN) method [35] followed by mean centering. The PQN-normalized data gave a small improvement in the observed cross-validation errors in comparison to integral normalization or constant sum normalization [36] and were used for all of the analysis presented here.
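The following sketch shows one common way to implement the PQN step together with mean centering; it follows the general description of the method [35] rather than the exact PLS Toolbox code, and the small epsilon guard is an implementation detail added here.

```python
import numpy as np

def pqn_normalize(X):
    """Probabilistic Quotient Normalization of a spectra matrix X (samples x bins)."""
    X = np.asarray(X, dtype=float)
    X_ts = X / X.sum(axis=1, keepdims=True)        # total-intensity scaling first
    reference = np.median(X_ts, axis=0)            # median spectrum as reference
    quotients = X_ts / (reference + 1e-12)         # epsilon avoids division by zero
    dilution = np.median(quotients, axis=1, keepdims=True)
    return X_ts / dilution

def mean_center(X):
    return X - X.mean(axis=0, keepdims=True)
```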
Chemometric Analysis
This orthogonal signal correction (OSC) filtered partial least squares discriminate analysis (OSC-PLSDA) method has been described previously [32,37], and was used to classify exposure in this NMR data set. The OSC-PLSDA method attempts to identify which spectral variations contribute to the identification of the designated classes. Orthogonal signal correction (OSC) was applied to remove non-correlating spectral variations (2 components) that were not contributing to classification. The OSC filtering can be integrated directly into the regular PLS-DA modeling, allowing the orthogonal variations to be analyzed separately [30,38,39]. This extended method is commonly referred to as O-PLSDA and should not be confused with the pre-filtering OSC-PLSDA method employed for the current analysis. Cross validation was performed using a Venetian Blind process, with the number of data splits equal to the nearest integer of the square root of the total number of samples in the data set. This number changed when a subset of weeks was analyzed. The VIP scores [40] were obtained from the OSC-PLSDA analysis and mapped onto the original NMR spectra. VIP coefficients reflect the importance of each spectral frequency to each variable in the PLS model. The VIP coefficient for the k-th parameter (frequency) is the sum over all PLS dimensions a of the contribution VIN (variable influence), VIP_k = Σ_a VIN_ak (1), where VIN_ak is equal to the squared PLS weight of that parameter multiplied by the percent explained sum of squares for that PLS dimension.
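A compact sketch of a VIP-style calculation following the description above is given below; it takes a PLS weight matrix and the per-component explained sum of squares, normalizes the squared weights per component, and sums the weighted contributions. Many published VIP implementations add an extra scaling and square root, so this should be read as an illustration of the quantity described in the text rather than the exact PLS Toolbox output.

```python
import numpy as np

def vip_scores(weights, explained_ssq):
    """weights:       (n_variables, n_components) PLS weight matrix
    explained_ssq: (n_components,) explained sum of squares of Y per component"""
    W = np.asarray(weights, dtype=float)
    ssq = np.asarray(explained_ssq, dtype=float)
    w2 = W ** 2 / (W ** 2).sum(axis=0, keepdims=True)   # normalized squared weights
    return (w2 * ssq).sum(axis=1)                        # sum of VIN over components
```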
Conclusions
These results demonstrate that 1H NMR metabolomics can be used to identify rats that have had long-term chronic exposure to particular organophosphates, either tributyl phosphate or triphenyl phosphate. Using OSC-PLSDA chemometric modeling it was possible to classify and separate TBP-treated versus TPP-treated animals based on the NMR spectra of the urine. Unfortunately, the developed models were somewhat disappointing, requiring greater than twelve latent variables in order to keep the classification error below 10% under cross-validation analysis. A set of metabolites that were important for chronic TBP or TPP exposure classification were identified, and these show some uniquely impacted metabolites in comparison to the set of metabolites obtained from the analysis of acute TBP or TPP exposure. These results demonstrate that the metabolite response to environmental chemicals can provide a signature for identification of exposure.
Figure 1. 600 MHz 1H nuclear magnetic resonance (NMR) spectra of rat urine collected from control (blue), tributyl phosphate (TBP) (magenta) or triphenyl phosphate (TPP) (red) exposed animals over a 15-week period with chronically administered doses. The NMR spectra were referenced using the internal chemical shift indicator DSS (500 µM), with the overall signal intensity normalized using the quotient method. The water spectral region (4.1 to 5.5 ppm) was removed prior to analysis.
Figure 2. Classification error for identification of control, TPP- or TBP-exposed rats as a function of the number of latent variables in the OSC-PLSDA model using the entire 15-week exposure data set. The error results are shown for A) the original model developed, and B) following cross validation (CV).
Figure 3. Prediction for sample identification into either the control, TPP-exposed or TBP-exposed class using the entire 15-week exposure data set (118 samples). The results are shown for A) the original model involving 12 latent variables, and B) following venetian blind cross validation (CV).
Figure 4. Urine 1H NMR spectra with color mapping showing the relative orthogonal signal correction-filtered partial least squares discriminate analysis (OSC-PLSDA) variable importance in projection (VIP) coefficients for sample classification. The representative spectrum is for a control animal in week 1. Expansions of different spectral regions with the VIP color coding are shown for the TBP-treated (a,c) and the TPP-treated (b,d) animals. Representative metabolites are labeled: A: Dibutyl phosphate, B: Acetate, C: Succinate, D: 2-Oxoglutarate, E: Citrate, F: Creatine, G: Betaine, H: Taurine, I: Urea, J: Benzoate.
Figure 5. Metabolome view following metabolite pathway mapping of the impacted metabolites identified during chronic TBP or TPP exposure. The analysis was performed using the MetaboAnalyst software.
Table 1. Error of prediction for the original model and after cross validation (CV) for the identification of control, TPP-exposed and TBP-exposed samples as a function of latent variables (LV) in the model and specific weeks evaluated.
Table 2. Identification of important spectral regions and metabolites based on the VIP scores and OSC-PLSDA classification following both acute [22] and chronic organophosphate exposure. The top five VIP-ranked metabolites are listed in parentheses, with the top three for identification of each class color coded for quick comparison.
Table 3. Metabolic pathway mapping of the important metabolites identified for chronic TBP and TPP exposure, obtained using the MetaboAnalyst software.
"year": 2012,
"sha1": "31a22570944d5ffa77cd57bd0fc980611b6c6922",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-1989/2/3/479/pdf?version=1343136217",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "93c68847299864c6f35bce3d2d10f8d7da2843e3",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
A submucosal bladder stone in a 65-year-old woman
We describe a case of a 65-year-old woman with a submucosal bladder stone with no apparent underlying cause. She presented with lower abdominal pain, dysuria, and a history of recurrent cystitis. Ultrasound and computed tomography of the lower abdomen revealed the presence of lithiasis near the left ostium. During ureterorenoscopy, a stone was found neither in the bladder nor in the left ureter. Instead, we observed a macroscopic bulge close to the left ureter. After opening of the mucosa, a stone became visible and could be removed. In most cases, imaging of the calculus is performed some time before surgery; during this interval, the stone may already have passed through the urinary tract. Our case illustrates that it is important to search extensively for the calculus in each patient to prevent needless symptoms and investigations.
INTRODUCTION
Urolithiasis is a highly prevalent disease worldwide, with an incidence rate of 5%-9% in Europe. [1] Bladder stones (BSs) comprise 5% of all urolithiases and usually occur secondary to obstruction, foreign bodies, or infection. [2] The most common cause is an enlargement of the prostate; hence, they are not often seen in female patients. We describe a case of a bladder stone in a 65-year-old woman with no history of urologic procedures, whose stone was found inside the bladder mucosa, next to the ureter ostium, during endoscopic surgery. After a review of the literature, we have found only one other case of a submucosal bladder stone. [3]
CASE REPORT
A 65-year-old Caucasian woman was referred to the hospital with lower abdominal pain and dysuria of 5 months' duration. Routine laboratory investigations ordered by the general practitioner were within normal limits. Urinalysis revealed pyuria (11-20 white blood cells/visual field), hematuria (3-5 red blood cells/visual field), and positive leukocyte esterase, and was negative for nitrites. Culture showed the presence of Pseudomonas aeruginosa. Both fosfomycin and nitrofurantoin were prescribed by the general practitioner but had no effect on the symptoms. Four months later, the patient was referred to our urology department (the delay was attributed to the COVID-19 pandemic). Anamnesis revealed a history of recurrent cystitis, use of a contraceptive spiral, and a hysterectomy with anterior and posterior colpopexy in 2006. The patient had no further history of stone disease, smoking, radiation exposure, or previous urological surgical procedures. Family history consisted of one first-degree family member with a history of nephrolithiasis. One year before the first presentation to the general practitioner, the calculus had been noticed by chance on pelvic radiographic imaging. Since no symptoms were present at that time, the patient was not referred to a specialist. During the consultation, a urologic ultrasound of the lower abdomen showed a small lithiasis at the vesicoureteric junction, presumably encapsulated in the bladder mucosa. Computed tomography (CT) scans reported bilateral small nonobstructive lithiasis in the kidney calyces, no signs of hydronephrosis, and a calculus of approximately 9 mm at the bladder base close to the left trigonum [Figure 1]. Ureterorenoscopy (URS) was performed, during which lithiasis was seen neither inside the bladder nor in the left ureter. There was a macroscopic bulge visible medial to the left ostium [Figure 2]. After re-evaluation of the abdominal CT, it was decided to open the bladder mucosa at the area of bulging. Hereby, a stone became visible and could be resected easily [Figure 3]. Postoperative bladder irrigation was installed. The stone was successfully removed, and there was no damage to bladder integrity.
DISCUSSION
Vesical calculi represent 5% of urinary calculi. [2] BS can be divided into primary, secondary, and migratory stones. [4] In adults, BSs rarely occur spontaneously. The most common predisposing factors in women include bladder outlet obstruction, infection, female pelvic surgery, neurogenic bladder, and foreign bodies. [2] In the literature, we found numerous reports of BS secondary to intravesical migration of intrauterine devices. [5] Migratory BSs form in the upper urinary tract. In our case, we were not able to determine whether the BS was primarily formed in the bladder or migratory, since no stone analysis was available. Our literature search revealed only one other case describing a submucosal BS. In the case reported by Singh et al., a BS was found in a 40-year-old male patient. [3] The calculus measured approximately 20 mm and was removed during open cystolithotomy; neither the exact location of the stone nor any medical antecedents were mentioned.
Patients with BS usually present with lower urinary tract symptoms, lower abdominal pain, and terminal hematuria. [4] A history of urinary calculi increases the risk of bladder cancer. [6] The calculus removed from our patient may have lodged in the wall of the urinary bladder, causing an inflammatory reaction which in turn caused the mucosa to grow over it, making the calculus submucosal. A differentiation should be made from a stone inside a ureterocele or a paraureteric bladder diverticulum. A ureterocele is a congenital cystic dilatation of the distal part of the ureter. [7] A bladder diverticulum, congenital or acquired, is a herniation of the mucosa through the smooth muscle layer and can be seen with retrograde cystography, CT, or cystoscopy. [8] In our case, based on the non-contrast (à blanc) CT images and the cystoscopy, neither a bladder diverticulum nor a ureterocele was suspected. We consider both entities highly unlikely, as our stone was found embedded in the bladder mucosa and a clear distinction could be made between the ureteric orifice and the mucosal bulging in which the BS was found.
Often, when no stone is found during URS, it will be perceived that the stone has already passed through the urinary tract. It is not always possible to clearly determine where the stone is located based solely on imaging. This case provides a good example that a thorough search for an impacted stone is important to avoid unnecessary imaging and operations.
"year": 2022,
"sha1": "a689db7e4b3c2bcda75ae00bf9d9e176a943770d",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ua.ua_155_20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae8f5cf96cd32775b7aaed61c788d4669ab838ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Delayed brainstem auditory evoked potential latencies in 14-year-old children exposed to methylmercury
To determine possible exposure-associated delays in brainstem auditory evoked potential (BAEP) latencies as a measure of methylmercury toxicity.
Methylmercury (MeHg) toxicity can produce widespread adverse effects within the nervous system, especially when exposures occur during brain development. [2][3] Early adverse effects have been characterized by administering neurobehavioral tests to children exposed in utero from maternal seafood diets. [4][5][6] Thus, a National Research Council (NRC) committee7 recently concluded that intrauterine MeHg exposure was the most critical and emphasized the findings from a prospective birth cohort study carried out in the Faroe Islands.5 The damage to the developing nervous system is thought to be potentially irreversible.7 The possibility also exists that exposure during postnatal development may induce brain lesions; clinical2,8 and experimental9 information suggests that such effects would tend to be more focal and particularly involve the sensory cortex and the granular layer of the cerebellum. Brainstem auditory evoked potentials have been applied as an objective outcome measure both in children13,14 and in laboratory animals.15 In contrast to neuropsychological test outcomes, this measure is thought to be independent of socioeconomic covariates.16 As illustrated by environmental exposure to lead, EP abnormalities constituted important objective evidence on neurotoxic effects in children.17 In an extended follow-up of this birth cohort, we therefore recorded brainstem auditory evoked potentials (BAEPs) at age 14 years. We previously showed that increased intrauterine MeHg exposures were associated with delayed peak III latencies at age 7 years.5,18 We hypothesized that these delays would remain at age 14 and that BAEP latencies would also be sensitive to MeHg from adolescent seafood diets.
Study Population and Follow-up
A cohort of 1,022 births was assembled in the Faroe Islands during a 21-month period of 1986-1987.19,20 The primary indicator of intrauterine exposure to MeHg was the mercury concentration in cord blood, and concentrations in maternal hair at parturition were also determined.19 MeHg exposures varied considerably: 15% of the mothers had hair mercury concentrations above 10 µg/g, while 4% were below 1 µg/g, a level that corresponds to the exposure limit recommended by the NRC committee.7 Concomitant exposure to polychlorinated biphenyls (PCBs) was determined from the concentration in umbilical cords from 438 cohort members.5 The first follow-up examination was carried out seven years later and included hair-mercury assessment, evoked potentials and pediatric examination.5 Hair samples were again obtained, and the proximal 2-cm segment was analyzed by flow-injection cold-vapor atomic absorption spectrometry after digestion of the hair sample in a microwave oven.5 The total analytical imprecision for this analysis was estimated to be 4.3% and 5.5% at mercury concentrations of 4.7 µg/g and 11.1 µg/g, respectively. Accuracy was ensured by participation in the Canadian Hair Mercury Quality Control Program; all our results were within one SD of the adjusted mean. The high analytical quality is comparable to previous performance.5,19 Results in µg may be converted to nmol by multiplying by 5.0.
The study protocol was approved by the ethical review committee for the Faroe Islands and the institutional review board at the U.S. institution, and parental informed consent was obtained.
Neurological examination
A thorough pediatric examination included otoscopy and assessment of neurological optimality. We used a four-channel electromyograph (Medelec Sapphire-4ME) also employed previously. 5,21 Click signals at an intensity of 65 dB HL (0.1 ms impulses of alternating polarity) were presented to the right ear through shielded ear phones at 20 Hz and 40 Hz (sampling time, 0.01 ms); the other ear was masked with white noise at an intensity of 45 dB HL. A frequency of 50 Hz was also attempted, but peak I was poorly defined at this click rate. EPs were recorded using three standard EEG electrodes placed on the vertex, the right mastoid ipsilateral to stimulation and the left mastoid (ground). While 1,024 responses were used seven years before, 5,21 the number was increased to 2,048 to improve the definition of peak I. Amplification and filtration were unchanged, and one replication of each condition was again carried out for calculation of average peak latencies. Peaks I, III, and V are thought to reflect the volume-conducted electric activity from the acoustic nerve, pons (superior olivary nucleus), and midbrain (inferior colliculi), respectively. 16
Audiometry was carried out by a trained nurse using an Interacoustics Diagnostic Audiometer AD229 with a Peltor H7A headphone in a sound-insulated room. The patient-controlled Hughson-Westlake procedure was used in accordance with ISO 8253-1. A threshold was defined as two out of three correct responses in a procedure with 5 dB increases and 10 dB decreases. Pure-tone air-conduction hearing thresholds were measured at 125, 250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, and 8000 Hz. Two children did not complete their audiometry examination.
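For readers unfamiliar with the procedure, the sketch below simulates the up-5 dB / down-10 dB logic with the 2-out-of-3 ascending criterion described above; the toy listener model, starting level and presentation limit are invented for illustration and do not reflect any protocol details beyond what is stated in the text.

```python
import random

def hughson_westlake_threshold(hears, start=40, max_presentations=60):
    """hears(level) -> bool models the listener; threshold is the lowest level
    giving responses on at least 2 of 3 ascending presentations."""
    level, going_up, ascending = start, False, {}
    for _ in range(max_presentations):
        response = hears(level)
        if going_up:
            trials = ascending.setdefault(level, [])
            trials.append(response)
            if sum(trials) >= 2:
                return level              # 2-of-3 criterion met on ascending runs
            if len(trials) >= 3:
                ascending[level] = []     # reset and keep searching
        if response:
            level -= 10                   # descend 10 dB after a response
            going_up = False
        else:
            level += 5                    # ascend 5 dB after no response
            going_up = True
    return None

listener = lambda level: level >= 25 and random.random() > 0.05  # toy 25 dB HL ear
print(hughson_westlake_threshold(listener))
```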
Data analysis
Pearson's correlation coefficients were used to assess bivariate relationships between exposure parameters. Regression analysis was used to determine the association of MeHg exposure with the outcome variables. Age and sex may be important predictors of BAEP latencies16,21 and were therefore included as independent variables along with the exposure parameters. In addition, confounders previously included in the analysis of neuropsychological test results5 were screened for possible associations with the outcomes in the present study, but no pattern was found. Further models included as an independent variable the latency result obtained 7 years previously, along with the age at that examination. Additional analyses also incorporated PCB exposure. The benchmark dose (BMD) approach was used to characterize dose-response curves at low dose levels and for determining exposure limits.7,18 The BMD is the dose of a substance that increases the risk of an abnormal response by a benchmark response (BMR), i.e., from P0 (usually 5%) for an unexposed child to P0 + BMR for a child exposed at the BMD.23 The NRC committee used a BMR of 5% so that an exposure at the corresponding BMD will double the risk of an abnormal response.7 To take the statistical uncertainty into account, a lower 95% confidence limit (BMDL) for the BMD is also determined. Using linear dose-response models, BMDLs expressed as the maternal hair mercury concentration were about 10 µg/g for the most sensitive neuropsychological and BAEP outcomes in the Faroese children at age 7 years.7,18,24 For comparison with these dose-response associations, we used the same default settings when calculating BMDL results for BAEP outcomes at age 14 years.
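The sketch below illustrates, under stated assumptions, how a BMD and an approximate BMDL can be derived from a linear dose-response with normal residuals: the abnormality cutoff is set at the P0 tail of the unexposed distribution, and the BMD is the dose shifting that risk to P0 + BMR. The slope, its standard error and the residual SD in the example are invented numbers, and the one-sided slope bound is only a crude stand-in for the profile-likelihood BMDL used in formal analyses.

```python
from scipy.stats import norm

def linear_bmd(slope, sigma, p0=0.05, bmr=0.05):
    # Dose at which P(abnormal) rises from p0 to p0 + bmr for Y = b0 + slope*dose + N(0, sigma^2)
    return sigma * (norm.ppf(p0 + bmr) - norm.ppf(p0)) / slope

def linear_bmdl(slope, slope_se, sigma, p0=0.05, bmr=0.05, alpha=0.05):
    worst_slope = slope + norm.ppf(1 - alpha) * slope_se  # upper one-sided bound on the slope
    return linear_bmd(worst_slope, sigma, p0, bmr)

# Hypothetical regression output: 0.010 ms latency delay per ug/g hair mercury, residual SD 0.15 ms
print(linear_bmd(0.010, 0.15), linear_bmdl(0.010, 0.004, 0.15))
```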
Prolonged Peak III and Peak V Latencies at Higher Prenatal MeHg Exposures Were Due to Increased I-III Intervals That Were Prolonged Already 7 Years Before
The children's own MeHg exposure had increased since the previous examination (p < 0.001). Approximately half of the children now exceeded the hair-mercury limit of 1 µg/g, but the average corresponded to only one-fourth of the concentrations in maternal hair at child birth. Nonetheless, the different sets of exposure biomarkers correlated well.
The BAEP latencies were similar to the results obtained at age 7,5,18,21 and again differed as expected16 between boys and girls. Age had no effect within the limited range studied.
Intrauterine MeHg exposure biomarkers showed several statistically significant associations with the BAEP latencies, especially peaks III and V at both frequencies (Table II).
The same tendency was seen for the interpeak I-III latency, despite being affected by the greater imprecision of peak I determinations. Because peak I and interpeak III-V latencies were clearly not associated with the intrauterine exposure level, MeHg appeared to affect mainly the I-III interval. Neither sex nor age was associated with MeHg exposure levels, and confounder adjustment therefore did not affect the mercury regression coefficients (Table II, Fig 2). This association was not affected by inclusion of prenatal exposure biomarkers, and neither did the lower mercury concentrations at age 7 seem to affect this outcome parameter. At the same time, this interpeak variable was significantly associated with all other peak latencies, except for the peak I latency (Table III). The association with the peak III latency (Table III) (Table II) BMDLs averaged about 5 µg/g for the child's hair-mercury concentration at age 14 years.
DISCUSSION
The developing brain is thought to constitute the most vulnerable organ with regard to MeHg exposure.1,7 Emphasis in risk assessment has therefore been placed on neurological functions of children with intrauterine exposure to this neurotoxicant, and previous studies have applied neuropsychological function as a key measure of adverse effects.[4][5][6] In parallel, neurophysiological tests, such as BAEP assessment, have found use in population studies as highly standardized, rapid, painless, and inexpensive procedures.16 In an observational study, relevant determinants cannot be controlled a priori. However, the present study involved a large birth cohort that has been followed prospectively for 14 years and characterized in substantial detail with regard to developmental MeHg exposure levels. The participation rate at age 14 years was very high, thereby reducing the concern that the results may have been affected by differential follow-up rates. An important strength of this study is that the examinations relied on the same methodology as 7 years before, and the same examiner, who was blinded in regard to exposure data and prior peak latency results. The validity of the results was supported by extensive quality assurance data. In addition, the outcome measures were confirmed to be independent of socioeconomic confounders. The known16 BAEP peak latency difference between boys and girls was replicated, but sex was not associated with MeHg exposure and therefore did not cause confounding. At age 7 years,21 prenatal exposure to PCBs, which occur in whale blubber sometimes eaten in the Faroes, did not influence the BAEP outcomes. Developmental exposure to PCBs is now thought to affect primarily cochlear function and impact BAEP amplitudes rather than latencies.32 In addition, the lead exposures were comparatively low and not associated with mercury.19 The generalizability of this study would therefore not seem to be limited by concomitant exposures to other neurotoxicants. The present results indicate that recent MeHg exposure, as assessed at age 14 years, is associated with EP delays that differ from those incurred from exposure in utero. The possibility that peak latencies may distinguish between effects incurred prenatally and postnatally deserves attention in future studies.
The potential postnatal vulnerability of the brain would mean that children ought to be protected against MeHg exposure to the same extent as pregnant women.
We are grateful to the cohort families for their loyal support, to the highly competent clinical staff in Tórshavn, and to Dr David A Otto for advice regarding the quality assurance for the BAEP measurements.
"year": 2004,
"sha1": "45c6d0d00e5223f4dd0447b9524c7981e2c30f46",
"oa_license": "CCBY",
"oa_url": "https://dash.harvard.edu/bitstream/1/34787242/1/grandjean_delayed_2004%20(dragged).pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "16ce5cd2f7bffebeedbaac4b7cec8c9eda91d95a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Interpretation of Non-Genetic Oral and Maxillofacial Osteogenic Conditions in the Basis of New Findings in the Field of Osteoblastogenesis and Osteoclastogenesis
The ratio of nuclear factor kB ligand (RANKL) to osteoprotegerin (OPG) is not the only tool to explore the interaction between osteoblasts and osteoclasts. Sphingosine 1-phosphate (S1P) and Sclerostin work on osteoblasts in opposite ways. Eight hormones (Growth Hormone, Calcitriol, Androgen, Estrogen, Calcitonin, Thyroxin, Cortisol and Parathyroid Hormone) regulate the behaviors of osteoblasts and osteoclasts at different levels. Seven cytokines (Macrophage colony-stimulating factor, Tumor necrosis factor-alpha, Histamine, Interleukin-1, 4, 12 and 13) affect their behaviors through different pathways. The lifespans of osteoblasts, osteoclasts, fibroblasts, and endothelium are different, while the hormone levels of Growth Hormone, Estrogen, Androgen and Thyroxin change with aging. The unbalanced behavior of osteoblasts and osteoclasts can induce excessive bone formation and abnormal bone resorption, which project on a radiograph as radiolucency, radiopacity, or mixed lucent-opaque lesions. This paper tries to reorganize new findings for clinicians to identify systemic and local risks of non-genetic oral and maxillofacial osteogenic conditions (OMOCs).
Introduction
Osteoblasts differentiate from mesenchymal osteoprogenitors who are governed by the RUNX2/CBFA1 transcription factor network and the WNT/ beta-catenin signal pathway [1]. Osteoblasts synthesize, transport and arrange proteins of bone matrix and initiate the process of mineralization. The osteoblast will transform into an osteocyte when it is surrounded by newly deposited bone matrix. Common markers for active osteoblasts are Alkaline Phosphatase (ALP) and Osteocalcin [2,3]. Osteoclast precursors differentiate from hematopoietic progenitor cells that are probably governed by the ERK-Akt pathway through c-Fms and c-Kit receptors [4]. An osteoclast precursor (mononuclear osteoclast) will transform into a mature osteoclast (multinuclear osteoclast) when the NF-kB signaling and the tyrosine kinase receptor are activated. Multinuclear osteoclasts digest proteins of bone matrix and initiate the process of demineralization. Common markers for mature osteoclasts are Tartrate-Resistant Acid Phosphatase (TRAP) and Cathepsin K [2,5].
Interaction Between Osteoblasts and Osteoclasts (Illustration 1)
An osteoblast presents a receptor activator of nuclear factor-kB ligand (RANKL) and Macrophage colony-stimulating factor (M-CSF) on its surface, while a mononuclear osteoclast expresses the RANKL receptor (RANK) and the M-CSF receptor (a type of tyrosine kinase receptor). When they bind together, the NF-kB signaling pathway and the tyrosine kinase receptor are activated, and a multinuclear osteoclast matures. Osteoblasts take a leading role in the maturation of osteoclasts.
An osteoblast also produces osteoprotegerin (OPG), which is a "decoy" receptor binding RANKL; it blocks RANKL from binding RANK [6]. This is a negative control on the maturation of osteoclasts. An osteoclast produces Sphingosine 1-phosphate (S1P), which binds the S1P receptor of an osteoblast [7]. This signal promotes osteoblasts to produce bone matrix and is a positive control on the function of osteoblasts. However, an osteocyte produces Sclerostin, which binds the low-density lipoprotein receptor (LDLR)-related protein of an osteoblast [8]. It inhibits osteoblasts from producing bone matrix and is a negative control on the function of osteoblasts.
RANKL and OPG, both from osteoblasts, oppose one another. S1P and Sclerostin work on osteoblasts but in the opposite way. Who controls the direction? They are endocrine molecules (e.g. hormones) and paracrine molecules (e.g. cytokines).
Hormones (Endocrine Molecules) (Table 1) and (Illustration 1)
Bone Formation as a Priority: a) Growth hormone (GH) binds GH receptors (GHR) of osteoblasts and osteoclast precursors to promote their differentiation, but also stimulates osteoblasts to produce OPG, resulting in bone formation as a priority [9]. b) Estrogen binds estrogen receptors (ER) of osteoprogenitors to promote the differentiation of osteoblasts and binds ERs of osteoblasts to inhibit them from presenting RANKL, resulting in bone formation as a priority [10,11].
c) Androgen binds an androgen receptor (AR) of osteoprogenitors to promote the differentiation of osteoblasts and binds ERs to stimulate osteoblasts to produce OPG, resulting in bone formation as a priority [12,13].
Mineral Homeostasis as a Priority:
a) Parathyroid hormone (PTH) binds PTH receptors of osteoblasts to increase the expression of RANKL and to decrease the production of OPG (E/C), resulting in the maturation of osteoclasts followed by bone resorption as a priority; another function is activating alpha-hydroxylase to promote Calcitriol formation. Hypocalcemia is the stimulator of PTH [14]. b) Calcitriol binds Vitamin D receptors (VDR) of osteoprogenitors to promote the differentiation of osteoblasts [15] and also works as a suppressor, inhibiting osteoblasts from expressing RANKL [16], resulting in bone formation as a priority; its best-known function, however, is helping the intestine and kidneys to absorb calcium. Hypocalcemia and PTH are stimulators of Calcitriol.
c) Calcitonin binds a Calcitonin receptor (CTR) of a mature osteoclast to inhibit the production of Sphingosine 1-phosphate (S1P) [7,17]. However, S1P binds an S1P receptor of an osteoblast to promote the production of bone matrix. Hypercalcemia is the stimulator of Calcitonin.
Negative Effect on Bone:
a) Thyroxin binds thyroid hormone receptors (THR) of osteoblasts and osteoclasts to promote their functions, but it shortens the bone remodeling cycle, resulting in a loss of about 10% of bone mass per cycle [18]. b) Cortisol inhibits osteoclastogenesis and osteoblastogenesis via the FOXO antioxidant pathway and binds the Glucocorticosteroid receptor (GR) of endothelium to inhibit angiogenesis, resulting in osteoporosis [19,20].
Hormones control the behavior of osteoblasts and osteoclasts through specific receptors in different directions. The natural course of bone formation and bone resorption is changed by the imbalance of hormone levels. Aetiologies of the abnormal secretion of hormones are complicated. For example, hypocalcemia usually is associated with Gastrointestinal conditions and renal dysfunctions, while hypercalcemia usually is related to renal failure and systemic granulomatous diseases. Practically, the systemic risks of OMOCs could be identified from the medical history and the laboratory examination.
Osteocytes and inflammatory cells produce cell signaling proteins when bearing physiological mechanical force. a) Inflammatory cells produce TNF-alpha [27]. b) Osteocytes decrease the production of Sclerostin [28].
Paracrine molecules affect osteoblasts and osteoclasts through common receptors in different pathways. Those injurious stimuli, ischemia, inflammation and mechanical force, are common in the oral cavity. For example, a dental procedure usually is performed under a local anesthesia containing epinephrine that increases the risk of local ischemia, while occlusal force during chewing, biting or clenching is not uncommon in periodontal regions. Practically, local risks of oral and maxillofacial osteogenic lesions could be identified from the dental history and oral examinations.
Timings
When clinicians list the significant findings of oral and maxillofacial osteogenic conditions, the information about timings tells us when the condition started, where the initial problem was, what should be linked to the systemic risk, and how a local risk exacerbated the condition. Timing is the key that opens a multifactorial program.
Lifespans of Osteoblasts, Osteoclasts and Fibroblasts:
Bone formation or bone resorption is achieved by an osteogenic multicellular group (OMG) including osteocytes, osteoblasts, osteoclasts, fibroblasts, endothelium, and mesenchymal and hematopoietic precursors [1,29,30]. The lifespan of osteoblasts is around three months, while osteoclasts cycle over about two weeks [31]. The lifespan of fibroblasts is around two months, but the half-life of osteocytes is around 25 years. Endothelium and precursors only survive several days. In this system, osteoblasts and fibroblasts are the cornerstones, while the most changeable cells are osteoclasts and endothelium. The balance of osteoblastogenesis and osteoclastogenesis is already programmed by coupling factors that work as activators or inhibitors from the endocrine or paracrine systems (Illustration 1). Therefore, the system can be reprogrammed on condition that unbalanced coupling factors consistently work with a brand-new OMG for 3-6 months.
Hormone Levels with Aging (Charts 1 & 2): Levels of PTH, Calcitriol, and Calcitonin basically change when hypocalcemia or hypercalcemia exists. The level of Cortisol changes over the course of a day and is affected by stress rather than aging. The level of Thyroxin usually remains stable; it starts to decrease at age 60 and drops significantly at 70 years old (C) [32]. GH has a peak range between 15 and 25 years of age and gradually declines with time. Insulin-like growth factor (IGF) is the main mediator of GH, so clinically we check the level of IGF instead of GH (C) [33]. Both males and females have Estrogen and bioavailable Testosterone (a type of androgen), but their levels change with aging in different ways. It is hard to dispute the conclusion that normal osteoblastogenesis in women gradually weakens after 45 years of age if not enough Vitamin D is supplied.
Degradation of Paracrine Molecules: Paracrine molecules spontaneously degrade within a few days if the local etiology is eliminated. Theoretically, cytokine antagonists not only inhibit local inflammation or angiogenesis but also change the process of osteoclastogenesis.
The Tone Between Radiopacity and Radiolucency (Illustration 2):
Abnormal bone remodeling is a result of the imbalance between osteoblastogenesis and osteoclastogenesis. Osteoblasts take a leading role in the maturation of osteoclasts, while the lifespan of osteoclasts is one-sixth of the lifespan of osteoblasts. Therefore, it is possible that osteoclasts influenced by cytokines become the initial event of a local osteogenic lesion. A radiolucency will form if osteoclastogenesis is accelerated by cytokines. A radiopacity will develop when active osteoclasts drive osteoblast function through the S1P pathway. Osteoblast function presents as the deposition of new bone on a pre-existing surface, which is called appositional growth [1]. This entangling relation allows oral and maxillofacial osteogenic conditions to take on the appearance of a central radiolucency with a peripheral radiopacity, or a central radiopacity with a peripheral radiolucency. That is the tone between radiolucency and radiopacity.
Osteoblastogenesis and Osteoclastogenesis (Illustration 1) and (Table 1)
For a healthy adult, osteoblastogenesis remains stable, supported by normal levels of GH, Estrogen, Androgen and Calcitriol and controlled by the negative feedback of Sclerostin. The balance can be broken if the level of female Estrogen drops with age or the level of Calcitriol drops due to lifestyle. For an adult with hypocalcemia or hypercalcemia, systemic osteoclastogenesis is triggered by abnormal levels of PTH and Calcitonin. Cytokines are released because of local irritations such as ischemia, mechanical force, and inflammation; however, some cytokines activate local osteoclastogenesis and others act in the opposite way. The net outcome relies on the level of general osteoblastogenesis: the scale tilts to the osteoclastogenesis side if general osteoblastogenesis is low, and to the osteoblastogenesis side if it is high.
Hypothesis: Triple-Hit Frame (Illustration 3)
The systemic imbalance of hormone levels and the local accumulation of paracrine molecules, consistently working with an osteogenic multicellular group for 3-6 months, are considered non-genetic risks of oral and maxillofacial osteogenic conditions (Illustration 3).
Examples: Two Oral and Maxillofacial Osteogenic Conditions
Periapical and Focal Cemento-Osseous Dysplasia: They are common in females, with a predilection for the third and sixth decades [34]. The common feature is a periapical radiolucency with a peripheral corticated margin. A low level of estrogen maintains a low level of osteoblastogenesis. Routine periapical mechanical force causes cytokines to accumulate in apical regions. Osteoclastogenesis takes priority, resulting in a radiolucency forming in the center. The activated osteoclasts stimulate osteoblast function via the S1P pathway, and peripheral bone formation occurs.
Cementoblastoma and Osteoblastoma: They are common in adults younger than 30 years old [34]. The common feature is a central radiopacity with a peripheral radiolucent rim. A high level of growth hormone maintains a high level of osteoblastogenesis. Routine periapical mechanical force causes cytokines to accumulate in apical regions. Osteoblastogenesis takes priority, resulting in a radiopacity forming in the center. The activated osteoblasts stimulate osteoclast function via the RANKL and M-CSF pathways, and peripheral bone resorption occurs.
All in all, the triple-hit frame (timings, systemic and local risks) provides a tool to explore non-genetic risks of oral and maxillofacial osteogenic conditions. The tone between radiolucency and radiopacity in the jaws shows some patterns that we can follow.
"year": 2018,
"sha1": "165be01c7adfae56afd906430506ce5a5b5e7f69",
"oa_license": "CCBY",
"oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.001332.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "23f19f9b0974ac40310dad6ce1b8d01bb020c727",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
All-condition pulse detection using a magnetic sensor
A plethora of wearable devices have been developed or commercialized for continuous non-invasive monitoring of physiological signals that are crucial for preventive care and management of chronic conditions. However, most of these devices are either sensitive to skin conditions or its interface with the skin due to the requirement that the external stimuli such as light or electrical excitation must penetrate the skin to detect the pulse. This often results in large motion artefacts and unsuitability for certain skin conditions. Here, we demonstrate a simple fingertip-type device which can detect clear pulse signals under all conditions, including fingers covered by opaque substances such as a plaster or nail polish, or fingers immersed in liquid. The device has a very simple structure, consisting of only a pair of magnets and a magnetic sensor. We show through both experiments and simulations that the detected pulsation signals correspond directly to the magnet vibrations caused by blood circulation, and therefore, in addition to heartrate detection, the proposed device can also be potentially used for blood pressure measurement.
Continuous non-invasive monitoring of physiological signals such as heart rate (HR), blood pressure (BP) and blood oxygen level is crucial for preventive care and management of chronic conditions1,2. A plethora of devices have been developed for this purpose, notably photoplethysmography (PPG), which detects the blood volume changes in the microvascular bed of tissue and arteries by measuring the intensity variation of either transmitted or reflected light from the tissue3. From the detected optical signals, we can extract physiological signals such as heart rate and blood oxygen level4. Due to its high cost-performance and ease of use, the PPG has become the de facto standard non-invasive vital sign monitoring device in both clinical and nonclinical settings. Despite the high penetration rate, the PPG has several drawbacks such as motion artifacts, skin tone effects, and unsuitability for circumstances where the skin is covered by opaque materials (e.g., plaster or nail polish). Therefore, there is always a need to develop non-optical detectors which can replace or supplement the PPG.
One of the promising techniques that can potentially overcome some of the difficulties faced by the PPG is magnetoplethysmography (MPG)5,6. The early work on MPG was motivated by the fact that oxygenated haemoglobin (HbO2) and deoxygenated haemoglobin (Hb) exhibit different magnetic properties; the former is diamagnetic whereas the latter is of paramagnetic nature7. Phua et al. have demonstrated a wrist-type MPG device consisting of a permanent magnet and a Hall sensor which can clearly detect the pulsatile signals in both in-vitro and in-vivo configurations5,6. The authors attributed the MPG signals to the modulated magnetic signature of blood (MMSB)6,8, based on the hypothesis that the magnetic field of the permanent magnet affects the blood flow dynamics, which in turn perturbs the magnetic field distribution in the surrounding area. Since then, several research groups have demonstrated MPG prototype devices using either giant magnetoresistance (GMR)3 or Hall sensors, which successfully detected the pulsatile signals and demonstrated good reproducibility of the detection method [9][10][11][12][13][14][15][16]. Considering the difference in magnetic responses of HbO2 and Hb, the initially proposed MMSB mechanism seems plausible.
However, given the small difference (on the order of several ppm) in magnetic susceptibility between HbO2 and Hb7, it remains controversial whether the MPG signal originates entirely or in part from the MMSB mechanism. Simulations by Sinatra revealed that the disturbance to the magnetic field caused by the magnetic properties of blood is on the order of 10^-5, much smaller than the earth's magnetic field17, although it shows close correlation with the blood flow velocity. The in-vitro experiments performed by Zhang et al. suggest that the MPG signal is mainly derived from the mechanical disturbance of the sensor detection axis caused by blood flow, but quantitative discussion is lacking18. Similar results were also reported by Li et al., though it is not clear if the results were obtained from in-vivo or in-vitro measurements19. Without a proper understanding of the detection mechanism, it would be difficult to further develop the MPG for practical applications.
Here, we propose a fingertip-type magnetic device which allows the magnetic and vibrational signals to be measured simultaneously using a magnetic sensor and a laser Doppler vibrometer. Unlike all previous experiments, which place a single magnet on the radial artery, we place two magnets on the opposite sides of a fingertip, where the mutual attraction between the magnets significantly improves the stability of both magnetic and vibrational signals. We found that the magnetic and vibrational signals are closely correlated with each other, and the average deviation between the two signals is less than 5%. Since the magnetic signal is vanishingly small when the magnets are not in contact with the skin, we can safely conclude that the detected signal is dominated by vibration of the magnet and that the contribution from the MMSB mechanism is negligible. Both analytical and numerical simulation results are in good agreement with the experimental observations. We show that such a device can detect clear pulsation signals under all finger conditions, a task which has proven to be rather challenging for existing pulse detectors. Since the measured signal does not originate from the MMSB mechanism, hereafter we call it a magnetically detected vibration sensor (MDVS) instead of MPG.
Results
The MDVS device used in this work consists of two cylindrical magnets attached to a fingertip, one of which is firmly attached to a mechanical fixture while the other is movable in response to blood flow-induced skin movement (Fig. 1). A magnetic sensor is placed near the movable magnet (approximately 1.5 cm away), which detects changes in the magnetic field of the magnet. Depending on the magnet chosen and the positioning of the sensor, various types of magnetic sensors can be used, including the recently developed spin-orbit torque enabled magnetic sensor [20][21][22]. To facilitate mounting of the sensor, the data presented in this paper are collected using a commercial tunnel magnetoresistance sensor with a dynamic range around 100 Oe and a sensitivity of 1.28 mV/Oe. The prototype is designed such that the movable magnet can be directly probed by a laser Doppler meter (Polytec VibroGo). The VibroGo signal acquisition unit has two input channels which can be used to measure the vibration and magnetic signals simultaneously. This greatly facilitates the comparison of the two signals, as the time difference caused by the measurement electronics is presumably negligible. The experiments were performed on multiple fingers of healthy subjects of different gender and age. Good reproducibility has been obtained in all measurements conducted.
Synchronized detection of magnetic and vibration signals and statistical analysis. Figure 2 shows the simultaneously recorded signals; the periodicity of the signal corresponds to the heartbeat rate of the subject. Figure 2b shows the corresponding average single pulse signal with the mean subtracted out and normalized by the standard deviation. The overall shape resembles well the PPG waveform, with a clear systolic peak, dicrotic notch and diastolic peak3. A high degree of similarity with PPG is also seen in the acceleration pulse waveform (or second derivative), as shown in Fig. 2c, which is useful for evaluating cardiovascular health. The purpose of using two different models of vibrometer is to check the correlation between magnetic and vibrational signals. The correlation coefficient between magnetic and vibrational signals in Fig. 4a is 0.99, indicating the same origin of the two signals. To quantify the difference statistically, we add more datasets to produce the Bland-Altman plot in Fig. 4d. The maximum deviation between the two measurement methods is 7.12% and 2.99%, respectively, for vibration measurements using the VibroFlex QTec and the VibroGo. In addition to the use of different types of vibrometer, we have also repeated the measurements using different types of magnetic sensors. These results suggest that the close correlation between magnetic and vibrational signals is a generic phenomenon which does not depend on the vibrometer or magnetic sensor used to measure the signals.
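As a small illustration of the statistics quoted above, the helper below computes the Pearson correlation and Bland-Altman agreement figures for two simultaneously sampled pulse waveforms; it is a generic sketch, not the actual analysis code used for Fig. 4.

```python
import numpy as np

def agreement_stats(a, b):
    """Pearson r plus Bland-Altman mean difference and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = np.corrcoef(a, b)[0, 1]
    diff = a - b
    mean_diff = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return r, mean_diff, (mean_diff - half_width, mean_diff + half_width)
```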
Analytical model of MDVS sensor.
In what follows, we examine more quantitatively the correlation between magnetic and vibrational signals using both analytical models and simulation. As discussed in the introduction, the MPG signal was initially interpreted as a magnetic signature induced by blood flow. Later studies suggest that a change of the magnetic sensor's detection axis caused by blood flow may also play a role, but quantitative understanding is lacking. In the present setup, the magnets are attached to a fingertip, which consists of nail, skin, muscle, bones, arterial/vein vessels, and capillaries26. When the blood flows through the vessels, it generates a pressure wave on the vessel walls27, which can propagate through the thick tissue and cause vibration of the skin. Therefore, without losing generality, the output signal of the magnetic sensor may be expressed to first order as

V ≈ s n̂ · [ (∂B⃗/∂(r⃗_m − r⃗_s)) Δ(r⃗_m − r⃗_s) + (∂B⃗/∂m⃗) Δm⃗ + (∂B⃗/∂χ) Δχ ]   (1)

where s is the sensitivity of the sensor and n̂ is its sensing axis direction, B⃗ is the magnetic field at the sensor location, r⃗_m (r⃗_s) is the position of the magnet (sensor), m⃗ is the magnetic moment of the magnet, χ is the magnetic susceptibility of the blood, and Δ(r⃗_m − r⃗_s), Δm⃗, and Δχ are the corresponding changes induced by the blood flow.
The first two terms of Eq. (1) correspond to signals induced by parallel displacement and rotation of the magnet, respectively, whereas the last term is due to the change in magnetic susceptibility of the blood. Before performing numerical simulations to calculate the overall signal, it is instructive to examine the first two terms analytically using the magnetic dipole approximation, which is valid when the distance between the magnet and sensor |r⃗_m − r⃗_s| is much larger than the size of the magnet.
Signal due to magnet displacement. We first consider the effect of parallel displacement of the magnet, i.e., Δm = 0. For simplicity, we assume that the sensor is stationary and only one of the magnets is moving, which describes well the actual measurement configuration.
Signal due to magnet rotation. Next, we consider the contribution due to rotation of the dipole, which is initially aligned in the z-direction, i.e., m = m (0, 0, 1). Without losing generality, we assume that the yaw, pitch and roll angles are α, β, and γ, respectively, and the change of m due to rotation is given by Δm = R(α, β, γ) m − m, where R(α, β, γ) is the rotation matrix. The signal induced by the dipole rotation can be calculated directly from Eq. (1) in terms of Δm_x, Δm_y, and Δm_z, the changes of the magnetic moment along the three coordinate axes. When α, β, and γ are very small and the z-separation is much larger than the x- and y-offsets, the rotation-induced signal is, to leading order, directly proportional to the pitch angle β, i.e., the angle of rotation around the y-axis. Although the dipole approximation is valid in calculating the magnetic signal, to estimate the displacement signal measured by the vibrometer we have to consider the finite size of the magnet.
Here, we assume that the laser beam is initially focused at r_0 = (x_0, y_0, z_0) on the top surface of the magnet, measured with respect to the rotational center, and that its direction is parallel to the z-axis. In this case, the displacement in the z-direction is given by Δz = ẑ · [R(α, β, γ) r_0 − r_0], and when α, β, and γ are small it reduces to Δz ≈ γ y_0 − β x_0.
Therefore, the displacement signal contains two components, which are proportional to the pitch and roll angles, respectively. In other words, depending on the rotation directions, the displacement measured by the vibrometer may not be exactly the same as the output signal from the magnetic sensor. The relative weight of the two contributions depends on the ratio between the rotation angle and the relative displacement. Based on the displacement measured by the vibrometer, the corresponding equivalent rotation angle is 0.06–0.6°, which means that the rotation-induced signal can be comparable to or even larger than the displacement-induced one, depending on the particular measurement setup. This may explain why the agreement between the measured magnetic and vibration signals is generally better in the systolic than in the diastolic half-cycle of the heartbeat (Fig.2 and Fig.4), as the sudden decrease of blood pressure after the systolic peak may result in not just parallel displacement but also rotation of the magnet. To facilitate the discussion, we re-set the coordinate system as follows (as we now have two magnets): the x-axis and y-axis are parallel and transverse to the finger, respectively, the z-axis is perpendicular to the cylindrical magnet surface, and the coordinate origin is at the middle of the magnet, with the z-axis passing through the rotational center of both magnets. Figure 5a shows the contour map of the static field produced by the magnets. To maximize the detection signal, the sensor must be placed at a location where the field strength is below the sensor's dynamic range (criterion 1) and, at the same time, the change caused by magnet displacement or rotation is largest (criterion 2). For the particular sensor used in the experiment, which has a dynamic range of ±100 Oe, the simulation results in Fig.5a show that criterion 1 is satisfied as long as the sensor is placed in the region where the contour lines are shown (|B| < 100 Oe). However, when the sensor is too far away from the magnet, it may not be able to detect the small change of field caused by the blood flow, which may originate from magnet displacement/rotation or blood susceptibility change, or a combination of these factors. We first calculate the field change (ΔB) caused by magnet displacement or rotation under typical measurement conditions. Figure 5b shows the contour of ΔB caused by the movement of the top magnet in the z-direction by 50 μm, which corresponds to the typical displacement of the magnet measured by the vibrometer due to blood flow.
Similarly, Fig.5c shows the change caused by rotating the magnetization of the top magnet by 3.3 × 10⁻³ rad away from the z-axis around the y-axis (i.e., β = 0.19°). The rotation angle is chosen such that the maximum deflection of the edge of the magnet is comparable to the vertical displacement used for obtaining the results in Fig.5b. From both figures, we can see that the maximum change appears along the middle line of the top magnet in the x-direction, with its magnitude decreasing quickly away from the magnet.
Within the range of 15 mm < |x| < 20 mm and 5 mm < z < 10 mm, it is possible to obtain a field change of 0.5–1 Oe, which is detectable by the sensor.
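A rough point-dipole estimate of such field changes can be sketched in a few lines of Python; the magnetic moment and geometry below are illustrative placeholders (only the 50 μm displacement and the 3.3 × 10⁻³ rad rotation echo values quoted above), so the printed numbers should not be read as the simulation results of this work.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(r_sensor, r_magnet, m_vec):
    """Magnetic field (tesla) of a point dipole m_vec located at r_magnet,
    evaluated at r_sensor (all positions in metres)."""
    r = np.asarray(r_sensor, float) - np.asarray(r_magnet, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi * d**3) * (3 * rhat * np.dot(rhat, m_vec) - m_vec)

# Illustrative numbers only (not the actual magnet moment or geometry):
m0 = np.array([0.0, 0.0, 0.02])          # dipole moment along z, A*m^2
r_mag = np.array([0.0, 0.0, 7e-3])       # top magnet ~7 mm above the origin
r_sen = np.array([17e-3, 0.0, 7e-3])     # sensor ~17 mm away along x

B0 = dipole_field(r_sen, r_mag, m0)

# Field change from a 50 micrometre vertical displacement of the magnet
B_disp = dipole_field(r_sen, r_mag + np.array([0, 0, 50e-6]), m0)

# Field change from a small pitch rotation (3.3e-3 rad about the y-axis)
beta = 3.3e-3
Ry = np.array([[np.cos(beta), 0, np.sin(beta)],
               [0, 1, 0],
               [-np.sin(beta), 0, np.cos(beta)]])
B_rot = dipole_field(r_sen, r_mag, Ry @ m0)

oe = 1e4  # 1 T = 10^4 Oe
print("dB (displacement) in Oe:", (B_disp - B0) * oe)
print("dB (rotation)     in Oe:", (B_rot - B0) * oe)
```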
We next simulate the field change caused by susceptibility change of the blood.
Due to the difference in the magnetic responses of HbO2 and Hb, the magnetic susceptibility of oxygenated and de-oxygenated blood varies slightly 7 , and its difference relative to water, Δχ, can be approximated 28 in terms of the volume fraction of red blood cells, the susceptibility difference Δχ_oxy between fully oxygenated blood and water, the susceptibility difference Δχ_do between fully oxygenated and deoxygenated red blood cells, and the oxygen saturation Y_HbO2, which ranges from 0 (fully deoxygenated) to 1 (fully oxygenated) 29 . The results are shown in Fig.5d. As can be seen, the corresponding field change is on the order of 10⁻⁴ Oe in the region of interest, i.e., 15 mm < |x| < 20 mm and 5 mm < z < 10 mm, which is too small to be detected by the magnetic sensor used in this work (note: the unit in Fig.5d is 10⁻⁴ Oe). The above results demonstrate clearly that the pulsatile signals detected by the magnetic sensor are dominantly of mechanical origin; the MMSB contribution, if any, is negligible. The comparison between the analytical model and full simulations is shown in Fig.6, with Fig.6b giving the result for the pitch angle β. Solid lines are results obtained from the dipole approximation and symbols are from COMSOL simulations. As expected, good agreement is obtained when both Δz and β are small and the magnet-sensor distance is about 3 times larger than the radius of the magnet. Therefore, the dipole approximation can be used to optimize the position of the sensor with respect to the magnet.
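For illustration, a commonly used linear-mixing approximation of this type can be evaluated with a few lines of Python; the functional form and the default susceptibility constants below are assumptions in the spirit of the cited references, not values taken from this work.

```python
def blood_susceptibility_shift(hct, y_hbo2, dchi_oxy=-0.008e-6, dchi_do=0.27e-6):
    """Volume susceptibility of blood relative to water (SI, dimensionless).

    hct      : volume fraction of red blood cells (hematocrit)
    y_hbo2   : oxygen saturation, 0 (deoxygenated) .. 1 (oxygenated)
    dchi_oxy : susceptibility difference, fully oxygenated blood vs. water
    dchi_do  : susceptibility difference, deoxygenated vs. oxygenated RBCs
    The default constants are literature-style placeholder values, not
    taken from this paper.
    """
    return hct * (dchi_oxy + (1.0 - y_hbo2) * dchi_do)

# Arterial vs. venous blood for a typical hematocrit of 0.45
d_art = blood_susceptibility_shift(0.45, 0.98)
d_ven = blood_susceptibility_shift(0.45, 0.75)
print(f"arterial-venous susceptibility swing: {d_art - d_ven:.3e} (SI)")
```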
Discussion
We have proposed and designed a pulse detection device which allows us to conduct a comparative study to determine the origin of so-called MPG signals through simultaneous measurements using a magnetic sensor and a vibrometer. Based on both experimental and simulation results, we conclude unambiguously that the detected signal does not originate from changes of the blood's magnetic properties but rather from mechanical motion of the magnet induced by blood flow, including both vertical displacement and rotation. Apart from the determination of the signal origin, the proposed fingertip-type magnetically detected vibration sensor is much more robust and simpler than the previously reported wrist-type MPG sensor, and thus more suitable for practical applications. It has clear advantages over existing pulse detectors in coping with unusual monitoring conditions. In addition to heart-rate detection, its direct relation to the blood pressure wave makes it promising for blood pressure measurement.
Methods
Signal processing and statistical analysis. The detected signals are processed with MATLAB as follows. First, the raw data, acquired at a sampling rate of 240 Hz, are segmented into sequences with a duration of 30 s. Second, a bandpass filter with a passband of 0.8–10 Hz is used to remove high-frequency noise and baseline shift. Third, z-score normalization is used to transform the two types of signals, which have different amplitudes and units, into a similar range, as shown in Figures 2a, 2d and 2g. Fourth, a single pulse is extracted by averaging all pulses in the data sequence, and the results are shown in Figures 2b, 2e and 2h. Figures 2c and 2f are the second derivatives of the single-pulse signals.
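A minimal Python/SciPy sketch of this processing chain (bandpass filtering, segmentation, z-scoring, pulse averaging and second differentiation) is given below; the filter order, peak-detection settings and the synthetic test record are illustrative choices, not the MATLAB implementation used here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 240.0  # sampling rate (Hz)

def preprocess(raw, fs=FS, band=(0.8, 10.0), segment_s=30):
    """Bandpass-filter, segment and z-score a raw pulse recording."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, np.asarray(raw, float))
    n = int(segment_s * fs)
    segments = [filtered[i:i + n] for i in range(0, len(filtered) - n + 1, n)]
    return [(s - s.mean()) / s.std() for s in segments]

def average_pulse(segment, fs=FS):
    """Extract the mean single-pulse waveform and its second derivative."""
    peaks, _ = find_peaks(segment, distance=int(0.4 * fs))  # >= 0.4 s between beats
    period = int(np.median(np.diff(peaks)))
    pulses = [segment[p:p + period] for p in peaks[:-1] if p + period <= len(segment)]
    mean_pulse = np.mean(pulses, axis=0)
    accel = np.gradient(np.gradient(mean_pulse))             # second derivative
    return mean_pulse, accel

# Synthetic 60 s recording with a 1.2 Hz heart rate, for demonstration only
t = np.arange(0, 60, 1 / FS)
raw = (np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.sin(2 * np.pi * 2.4 * t)
       + 0.05 * np.random.randn(t.size))
seg = preprocess(raw)[0]
pulse, accel = average_pulse(seg)
print(len(pulse), "samples per averaged pulse")
```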
Finally, the Bland-Altman plot is computed based on 60 pulses from 3 sequences of signals. Simulation setup. Simulation of the magnetic signals was performed using the COMSOL Multiphysics® software. The initial distance between the body centers of the two magnets is set at 14 mm, with a finger diameter of 9 mm. When simulating the effect of vertical vibration, only the position of the magnet on the positive z-axis is varied, to simulate the expansion of the finger due to blood circulation. A cuboid is used to simulate the susceptibility change inside the finger. When simulating the effect of rotation, only the magnet on the positive z-axis is rotated, around the center of its bottom surface. | 2023-04-20T01:15:45.354Z | 2023-04-19T00:00:00.000 | {
"year": 2023,
"sha1": "9d3fdc3c2879cf130921d4477bd6759f8f7af206",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9d3fdc3c2879cf130921d4477bd6759f8f7af206",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
49432336 | pes2o/s2orc | v3-fos-license | Quasi-Newton approaches to Interior Point Methods for quadratic problems
Interior Point Methods (IPMs) rely on the Newton method for solving systems of nonlinear equations. Solving the linear systems which arise from this approach is the most computationally expensive task of an interior point iteration. If, due to the problem's inner structure, there are special techniques for efficiently solving the linear systems, IPMs enjoy fast convergence and are able to solve large-scale optimization problems. It is tempting to try to replace the Newton method with quasi-Newton methods. Existing quasi-Newton approaches to IPMs are either built to approximate the Lagrangian function for nonlinear programming problems or to provide an inexpensive preconditioner. In this work we study the impact of using quasi-Newton methods applied directly to the nonlinear system of equations for general quadratic programming problems. The cost of each iteration is comparable to the cost of computing correctors in a usual interior point iteration. Numerical experiments show that the new approach is able to reduce the overall number of matrix factorizations and is suitable for a matrix-free implementation.
Introduction
Let us consider the following general quadratic programming problem where x, c ∈ R n , b ∈ R m , Q ∈ R n×n and A ∈ R m×n . We will suppose that the rows of A are linearly independent. Define function F : R 2n+m → R 2n+m by where X, Z ∈ R n×n are diagonal matrices defined by X = diag(x) and Z = diag(z), respectively, and e is the vector of ones of appropriate size. First order necessary conditions for (1) state that, if x * ≥ 0 is a minimizer, then there exist z * ∈ R n , z * ≥ 0, and λ * ∈ R m such that F (x * , λ * , z * ) = 0. Primal-Dual IPMs try to solve (1) by solving a sequence of relaxed constrained nonlinear equations in the form of where µ ∈ R is called the barrier parameter, which is associated with the logarithmic barrier applied to the inequalities x ≥ 0 used to derive the method [1,2]. As µ → 0 more importance is given to optimality over feasibility. Systems of type (3) are not easy to solve. When µ = 0, they can be solved by general algorithms for bounded nonlinear systems [3,4]. In this case, a suitable merit function, usually F (x) , has to be used to select the step-sizes. IPMs try to stay near the solution of (3), called the central path, and reduce µ at each iteration. Instead of solving (3) exactly, one step of the Newton method is applied. Thus, given an iterate (x k , λ k , z k ), in the interior of the bound constraints, i.e.
x k , z k > 0, the next point is given by (x k+1 , λ k+1 , z k+1 ) = (x k , λ k , z k ) + (α P ∆x k , α D ∆λ k , α D ∆z k ), (4) where (∆x k , ∆λ k , ∆z k ) is computed by solving some Newton-like systems where v ∈ R 2n+m and J : R 2n+m → R (2n+m)×(2n+m) is the Jacobian of F , defined by Standard predictor-corrector algorithms solve (5) twice: first the affine scaling predictor is computed for v = −F (x k , λ k , z k ) and then the corrector step is computed using v = 0 0 σ k µ k e T , with σ k ∈ (0, 1), µ k = x k T z k /n. Additional correctors can be computed in one iteration to further accelerate convergence, such as second order correctors [5] or multiple centrality correctors [6]. Scalars α P and α D are selected such that x k+1 > 0 and s k+1 > 0, respectively. The most expensive task during an interior point (IP) iteration is to solve (5). The coefficient matrix J(x, λ, z) is known as unreduced matrix and has dimension (2n+m)×(2n+m), but its nice structure allows efficient solution techniques to be used. The most common approaches for solving the linear system in IPMs are to work with augmented system or normal equations. If we eliminate ∆s in (5), we have the augmented system for which we can solve directly using matrix factorizations or compute adequate preconditioners and solve iteratively by Krylov subspace methods. If matrix Q is easily invertible, or Q = 0 (linear programming problems), it is possible to further eliminate ∆x and solve the normal equations by Cholesky factorization or by Conjugate Gradients, depending on the size of the problem. For both approaches it is known that computing good preconditioners or computing the factorization can be most expensive part of the process. Therefore (5) can be solved several times for the same J(x k , λ k , z k ) with different right-hand sides, in a classical predictor-corrector approach [5] or in the multiple centrality correctors framework [1,7]. In this work we will extensively use the fact that the backsolves in (5) are less expensive than computing a good preconditioner or factorization.
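To fix ideas, the following Python sketch assembles the primal-dual KKT residual F and the unreduced Jacobian J for a dense toy quadratic program and takes one Newton-type step; the block ordering (dual feasibility, primal feasibility, complementarity), the centering parameter and all problem data are illustrative assumptions rather than a prescribed convention.

```python
import numpy as np

def kkt_residual(Q, A, b, c, x, lam, z):
    """Residual F(x, lam, z) for min c'x + 0.5 x'Qx  s.t.  Ax = b, x >= 0."""
    return np.concatenate([
        Q @ x + c - A.T @ lam - z,   # dual feasibility
        A @ x - b,                   # primal feasibility
        x * z,                       # complementarity, XZe
    ])

def unreduced_jacobian(Q, A, x, z):
    """Unreduced (2n+m) x (2n+m) Jacobian of the residual above; only the
    last block row depends on the current iterate."""
    n, m = Q.shape[0], A.shape[0]
    J = np.zeros((2 * n + m, 2 * n + m))
    J[:n, :n] = Q
    J[:n, n:n + m] = -A.T
    J[:n, n + m:] = -np.eye(n)
    J[n:n + m, :n] = A
    J[n + m:, :n] = np.diag(z)
    J[n + m:, n + m:] = np.diag(x)
    return J

# Tiny example: one Newton step towards the relaxed system F = (0, 0, sigma*mu*e)
rng = np.random.default_rng(0)
n, m = 4, 2
Q = np.eye(n); A = rng.standard_normal((m, n)); c = rng.standard_normal(n)
x = np.ones(n); z = np.ones(n); lam = np.zeros(m); b = A @ x
mu = x @ z / n
rhs = -kkt_residual(Q, A, b, c, x, lam, z) \
      + np.concatenate([np.zeros(n + m), 0.1 * mu * np.ones(n)])
step = np.linalg.solve(unreduced_jacobian(Q, A, x, z), rhs)
print("Newton step norm:", np.linalg.norm(step))
```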
Although J(x, λ, z) is unsymmetric, under reasonable assumptions Greif, Moulding and Orban showed that it has only real eigenvalues [8]. Based on those results, Morini, Simoncini and Tani [9] developed preconditioners for the unreduced matrix and compared the performance of interior point methods using unreduced matrices and the augmented system. The unreduced matrix also has two more advantages when compared to the augmented system and the normal equations. First, small changes of the variables x or z result in small changes in J(x, λ, z). Second, J is the Jacobian of F, so it is possible to approximate it by building models or by evaluating F at some extra points. These two characteristics are explored in this work.
Since J is the Jacobian of F , it is natural to ask if it can be approximated by evaluating F in some points. Function F is composed by two linear and one nonlinear functions. Therefore, the only part of J which may change during iterations is the third row. Moreover, it can be efficiently stored by just storing A, Q, x and z. Since computing and storing J is inexpensive, the only reason to use an approximation B of J is if system (5), using B k instead of J(x k , λ k , z k ), becomes easier to solve. That is where quasi-Newton methods and low rank updates become an interesting tool in interior point methods.
Quasi-Newton methods are well known techniques for solving large scale nonlinear systems or nonlinear optimization problems. The main motivation is to replace the Jacobian used by the traditional Newton method by its good and inexpensive approximation. Originally, they were useful to avoid computing the derivatives of F , but they have become popular as a large scale tool, since they usually do not need to explicitly build matrices and enjoy superlinear convergence. Classical references for quasi-Newton methods are [10,11] for nonlinear equations and [12] for unconstrained optimization.
In the review [11] about practical quasi-Newton methods for solving non-linear equations, Martínez suggests that there is room for studying such techniques in the interior point context. The author points to the work of Dennis Jr., Morshedi and Turner [13] which applies quasi-Newton techniques to make the projections in Karmarkar's algorithm cheaper. The authors write the interpolation equations associated with the linear system in interior point iterations and describe a fast algorithm to compute updates and also to update an already existing Cholesky factorization. When solving general nonlinear programming problems by IPMs, a well known approach is to replace the Hessian of the Lagrangian function by low rank approximations [12]. In 2000, Morales and Nocedal [14] used quasi-Newton arguments to show that the directions calculated by the Conjugate Gradient algorithm can be used to build an automatic preconditioner for the matrix under consideration. The preconditioner is a sequence of rank-one updates of an initial diagonal matrix. Such approach is efficient when solving a sequence of linear systems with the same (or a slowly varying) coefficient matrix. Based on those ideas, a limited memory BFGS-like preconditioner for positive definite matrices was developed in [15] and was specialized for symmetric indefinite matrices in [16] . Recently, Bergamaschi et al. [17] developed limited-memory BFGS-like preconditioners to KKT systems arising from IP iterations and described their spectral properties. The approach was able to reduce the number of iterations in the Conjugate Gradient algorithm, but the approximation deteriorates as the number of interior point iterations increase. Also, extra linear algebra has to be performed to ensure orthogonality of the vectors used to build the updates.
In all works, with exception of [13], the main focus was to use low rank updates of an already computed preconditioner such that new preconditioners are constructed in an inexpensive way and reduce the overall number of linear algebra iterations. In the present work, our main objective is to work directly with nonlinear equations and use low rank secant updates for computing the directions in the IP iterations. We use least change secant updates, in particular Broyden updates, and replace the Newton system (5) by an equivalent one. Some properties of the method are presented and extensive numerical experiments are performed. The main features of the proposed approach are: • Low rank approximations are matrix-free and use only vector multiplications and additions; • The quasi-Newton method for solving (5) can be easily inserted into an existing IPM; • The number of factorizations is reduced for small and large instances of linear and quadratic problems; • When the cost of the factorization is considerably higher than the cost of the backsolves, the total CPU time is also decreased.
In Section 2 we discuss the basic ideas of quasi-Newton methods, in particular the Broyden method, which is extensively used in the work. In Section 3 we show that, if the initial approximation is good enough, least change secant updates preserve most of the structure of the true coefficient matrix and a traditional IP iteration can be performed with the cost of computing correctors only. New low rank secant updates, which are able to exploit the sparsity of J are also discussed. In Section 4 we describe the aspects of a successful implementation of a quasi-Newton interior point method. In Section 5 we compare our approach with a research implementation of the primal-dual IPM for solving small-and medium-sized linear and quadratic problems. Finally, in Section 6 we draw the conclusions and mention possible extensions of the method.
Notation. Throughout this work we use F k and J k as short versions of vector F (x k , λ k , z k ) and matrix J(x k , λ k , z k ), respectively. The vector e denotes the vector of ones of appropriate dimension.
Background for quasi-Newton methods
Quasi-Newton methods can be described as algorithms which use approximations to the Jacobian in the Newton method in order to solve nonlinear systems. The approximations are generated using information from previous iterations. Suppose that we want to findx ∈ R N such that F (x) = 0, where F : R N → R N is continuously differentiable. Given the current pointx k at iteration k, Newton method builds a linear model of F aroundx k in order to findx k+1 . Now, suppose thatx k andx k+1 have already been calculated and let us create a linear model for F aroundx k+1 : The choice B k+1 = J k+1 results in the Newton method for iteration k + 1. In secant methods, B k+1 is constructed such that M k+1 interpolates F atx k and x k+1 , which gives us the secant equation where s k =x k+1 −x k and y k = F (x k+1 )−F (x k ). When s k = 0 and N > 1 there are more unknowns than equations and several choices for B k+1 exist [11,18]. Let B k be the current approximation to J k , the Jacobian of F atx k (it can be J k itself, for example). One of the most often used simple secant approximations for unsymmetric Jacobians is given by the Broyden "good" method. Given B k , a new approximation B k+1 to J k+1 is given by Matrix B k+1 is the closest matrix to B k , in Frobenius norm, which satisfies (8).
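A minimal sketch of the Broyden "good" update and a check of the secant property, in Python with illustrative random data:

```python
import numpy as np

def broyden_good_update(B, s, y):
    """Rank-one Broyden 'good' update: the matrix closest to B in the
    Frobenius norm that satisfies the secant equation B_new @ s = y."""
    r = y - B @ s                      # residual of the secant equation
    return B + np.outer(r, s) / (s @ s)

# Quick check of the secant property on random data
rng = np.random.default_rng(1)
N = 5
B = rng.standard_normal((N, N))
s = rng.standard_normal(N)
y = rng.standard_normal(N)
B_new = broyden_good_update(B, s, y)
print(np.allclose(B_new @ s, y))       # True: secant equation holds
```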
The update of the Broyden method belongs to the class of least change secant updates, since B k+1 is a rank-one update of B k . As we are interested in solving a linear system, it may be interesting to analyze matrix B −1 k+1 = H k+1 , which is obtained by the well known Sherman-Morrison-Woodbury formula: where u k = s k − H k y k and ρ k = s T k H k y k . We can see that H k+1 is also a least change secant update of H k . To store H k+1 , one needs first to compute H k y k and then store one scalar and two vectors. Storing u k is more efficient than storing H k s k when H k+1 is going to be used more than once. According to (10), the cost of computing H k+1 v is the cost of computing H k v plus one scalar product and one sum of vectors times a scalar. After updates of an initial approximation B k− , current approximation H k is given by Instead of updating B k and then computing its inverse, the Broyden "bad" method directly computes the least change secant update of the inverse: where V k = I − y k y T k ρ k and ρ k = y T k y k . Similarly to B k+1 in (10), H k+1 given by (11) is the closest matrix of H k , in the Frobenius norm, such that H −1 k+1 satisfies (8). The cost of storing H k+1 is lower than that of (10), since vectors s k and y k have already been computed. The cost of calculating H k+1 v is higher: it involves one scalar product, two sums of vector times a scalar and H k v. After updates of an initial approximation H k− , current approximation H k is given by Approach (11) has some advantages over (10). First, it does not need to compute H k v for constructing the update. When H k is a complicated matrix, this is a costly operation. Second, unlike (10), matrices V j depend solely on y j and s j for all j = 1, . . . , , so it is possible to replace H k− by different matrices without updating the whole structure. This is suitable to be applied in a limited-memory scheme [16]. Third, the computation of H k v can be efficiently implemented in a scheme similar to the BFGS update described in [12], as we show in Algorithm 1. Unfortunately, the Broyden "bad" method is known to behave worse in practice than the "good" method [10]. To avoid the extra cost of computing H k y k in (10) it is common to compute a Cholesky or LU factorization of B k− and work directly with (9), performing rank-one updates of the factorization, which can be efficiently implemented [19].
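The matrix-free application of such an approximate inverse can be sketched as follows: only the factorization of the initial matrix and the secant triples are kept, and each product H_k v then costs one backsolve plus a few vector operations. The class below follows the Broyden "bad" recursion; the LU factorization of a dense random matrix merely stands in for whatever factorization or preconditioner is available for the initial Jacobian, and the names are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

class BroydenBadInverse:
    """Apply H_k v for the Broyden 'bad' update H_new = H V + s y^T / rho,
    with V = I - y y^T / rho and rho = y^T y, keeping only (s, y, rho)
    triples and the factorization of the initial Jacobian."""

    def __init__(self, J0):
        self.lu = lu_factor(J0)
        self.triples = []            # (s_i, y_i, rho_i), oldest first

    def apply(self, v):
        v = np.array(v, dtype=float)
        alphas = []
        # newest-to-oldest sweep: peel off the rank-one corrections
        for s, y, rho in reversed(self.triples):
            a = (y @ v) / rho
            v = v - a * y
            alphas.append((s, a))
        w = lu_solve(self.lu, v)     # action of the initial inverse
        for s, a in alphas:
            w = w + a * s
        return w

    def update(self, s, y):
        self.triples.append((s, y, y @ y))

# Secant check: after an update, H applied to y returns s
rng = np.random.default_rng(2)
N = 6
J0 = rng.standard_normal((N, N)) + 5 * np.eye(N)
H = BroydenBadInverse(J0)
s, y = rng.standard_normal(N), rng.standard_normal(N)
H.update(s, y)
print(np.allclose(H.apply(y), s))    # True
```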
The class of rank-one least change secant updates can be generically represented by updates of the form where w T k s k = 0. Setting w k = s k defines the Broyden "good" method and w k = B T k y k defines the Broyden "bad" method. Several other well known quasi-Newton methods fit in update (13), such as the Symmetric Rank-1 update used in nonlinear optimization, which defines w k = y k − B k s k . See [10,18] for details on least change secant updates.
A quasi-Newton approach for IP iterations
According to the general description of primal-dual IPMs in Section 1, we can see that, at each iteration, they perform one Newton step associated with the nonlinear system (3), for decreasing values of µ. Each step involves the computation of the Jacobian of F and the solution of a linear system (5).
Our proposal for this work is to perform one quasi-Newton step to solve (3), replacing the true Jacobian J(x, λ, z) by a low rank approximation B. The idea might seem surprising at first glance, since, for quadratic problems, J(x, λ, z) is very cheap to evaluate. In this section we further develop the quasi-Newton ideas applied to interior point methods and show that they might help to reduce the cost of the linear algebra when solving (1).
It is important to note that F and J discussed in Section 2 will be given by (2) and (6), respectively, in the interior point context, which highlights the importance of using the unreduced matrix in our analysis. Therefore, variablē x in Section 2 is given by (x, λ, z) and, consequently, N = 2n + m.
Initial approximation and update
Suppose that k ≥ 0 is an interior point iteration for which system (5) was solved and (x k+1 , λ k+1 , z k+1 ) was calculated, using any available technique. Usually, solving (5) involves an expensive factorization or the computation of a good preconditioner associated with J k . Most traditional quasi-Newton methods for general nonlinear systems compute B k by finite differences or use a diagonal matrix as the initial approximation. According to Section 2, it is necessary to have an initial approximation of J k in order to generate approximation B k+1 of J k+1 by low rank updates. Most of traditional quasi-Newton methods for general systems compute B k by finite differences or use a diagonal matrix. Since J k have already been computed, we will define it as B k , i.e., the perfect approximation to J k . It is clear that, in such case, H k = J −1 k is the approximation to J −1 k . In order to compute B k+1 , vectors s k and y k in secant equation (8) have to be built: The use of J k as the initial approximation ensures that the first two block elements of B k s k − y k are zero. This is a well known property of low rank updates given by (13) when applied to linear functions (see [10,Ch. 8]). In Lemma 1 we show that rank-one secant updates maintain most of the good sparsity structure of approximation B k when its structure is similar to the true Jacobian of F . Lemma 1. Let J be the Jacobian of F given by (2). If the least change secant update B k+1 for approximating J k+1 is computed by (13) Proof. By the definition of s k and y k in (14) it is easy to see that Using the secant update (13), we have that the first two rows of B k are kept the same and By Section 2 we know that Broyden "good" and "bad" updates are represented by specific choices of w k and, therefore, enjoy the consequences of Lemma 1. Unfortunately, not much can be said about the structure of the "third row" of B k+1 . When B k = J k , the diagonal structure of blocks Z k and X k , as well as the zero block in the middle, are likely to be lost. However, if we select w T k = s k,x 0 s k,z T , then, by Lemma 1, the zero block is kept in B k+1 .
The update given by this choice of w k is a particular case of Schubert's quasi-Newton update for structured and sparse problems [20]. This update minimizes the distance to B k on the space of the matrices that satisfy (8) and have the same block sparsity pattern of B k [18]. Using the Sherman-Morrison-Woodbury formula, we also have the update for H k : which only needs an extra computation of H k y k to be stored. There is no need to store w k , since it is composed by components of s k . We can say that this approach is inspired in the Broyden "good" update.
On the other hand, if we use w T k = 0 y k,b y k,µ T B k , then we still have M 2 k+1 = 0 by Lemma 1 and, in addition, we are able to remove the calculation H k y k in the inverse. This approach is inspired by the Broyden "bad" update and results in the following update Up to the knowledge of the authors, this update has not been theoretically studied in the literature. Lemma 1 also justifies our choice to work with approximations of J −1 rather than J. After > 0 rank-one updates, if B k u = v is solved by factorizations and backsolves, it would be necessary to perform updates on the factorization of initial matrix B k− , what could introduce many nonzero elements. A clear benefit of defining B k− = J k− is that computing H k v uses the already calculated factorizations/preconditioners for B k− , which were originally used to solve (5) at iteration k − . Step 3 of Algorithm 1 is an example of low rank update (12). Clearly, we do not explicitly compute H k− v, but instead solve the system B k− u = v.
Computation of quasi-Newton steps
Having defined how quasi-Newton updates are initialized and constructed, we now have to insert the approximations in an interior point framework. Denoting (x 0 , λ 0 , z 0 ) as the starting point of the algorithm, at the end of any iteration k it is possible to build a rank-one secant approximation of the unreduced matrix to be used at iteration k + 1. Let us consider iteration k, where k ≥ 0 and ≥ 0. If = 0, then, by the previous subsection, B k− = B k = J k and the step in the interior point iteration is the usual Newton step, given by (5). If > 0, we have a quasi-Newton step, which can be viewed as a generalization of (5), and is computed by solving or, equivalently, by performing H k v. All the other steps of the IPM remain exactly the same. When > 0, the cost of solving (16) depends on the type of update that is used. In general, it is the cost of solving system J k− r = q (or, equivalently, J −1 k− q) plus some vector multiplications and additions. However, since J k− has already been the coefficient matrix of a linear system at iteration k − , it is usually less expensive than solving for the first time. That is one of the main improvements that a quasi-Newton approach brings to interior point methods.
When the Broyden "bad" update (12) is used together with defining B k− = J k− as the initial approximation, it is possible to derive an alternative interpretation of (16). Although this update is known to have worse numerical behavior when compared with the "good" update (10), this interpretation can result in a more precise implementation, which is described in Lemma 2.
Lemma 2. Assume that k, ≥ 0 and H k is the approximation of J −1 k constructed by updates (12) using initial approximation Proof. Using the expansion (11) of Broyden "bad" update, the definition of α i and the fact that H k = J −1 k , we have that where the last equality comes from the definition of V k in (11), applied recursively. When i = 1, we assume that k−1 j=k−i+1 V j results in the identity matrix, therefore α 1 = y T k−1 v/ρ k−1 . Multiplying J k− on the left on both sides of (17), we obtain By Lemma 1 and definition (14), the first two components of J k− s k−i − y k−i are zero, for all i, which demonstrates the lemma.
Lemma 2 states that only the third component of the right hand side actually needs to be changed in order to compute Broyden "bad" quasi-Newton steps at iteration k. This structure is very similar to corrector or multiple centrality correctors in IPMs and reinforce the argument that the cost of computing a quasi-Newton step is lower than the Newton step. It is important to note that scalars α i are the same as the ones computed at step 2 of Algorithm 1.
Dealing with regularization
Rank-deficiency of A, near singularity of Q or the lack of strict complementarity at the solution may cause matrix J, the augmented system or the normal equations to become singular near the solution of (1). As the iterations advance, it becomes harder to solve the linear systems. Regularization techniques address this issue by adding small perturbations to J in order to increase numerical accuracy and convergence speed, without losing theoretical properties. A common approach is to interpret the perturbation as the addition of weighted proximal terms to the primal and dual formulations of (1). Saunders and Tomlin [21] consider fixed perturbations while Altman and Gondzio [22] consider dynamic ones, computed at each iteration. Friedlander and Orban [23] add extra variables to the problem, expand the unreduced system and, after an initial reduction, arrive in a regularized system similar to [22]. In all these approaches, given reference pointsx andλ, the regularized matrix J where diagonal matrices R p ∈ R n×n and R d ∈ R m×m represent primal and dual regularization, respectively, can be viewed as the Jacobian of the following functionF Any choice is possible for reference pointsx andλ. However, in order to solve the original Newton system (5) and make use of the good properties of the regularization (18) at the same time, they are usually set to the current iteration points x k and λ k , respectively, which annihilates terms R p (x −x) and R d (λ −λ) on the right hand side of (5) during affine scaling steps. Matrix J given by (18) now depends on R p and R d in addition to x and z. The regularization terms R p and R d do not need to be considered as variables, but if new regularization parameters are used, a new factorization or preconditioner needs to be computed. Since this is one of the most expensive tasks of the IP iteration, during quasi-Newton step k the regularization parameters are not allowed to change from those selected at iteration k − , where the initial approximation was selected. That is a reasonable decision, as the system that is actually being solved in practice has the coefficient matrix from iteration k − . The fact that the regularization terms are linear inF implies, by Lemma 1, that the structure of (18) is maintained during least change secant updates.
The reference points have no influence in J, but they do influence the func-tionF . Suppose, as an example, that = k, i.e., the initial approximation for quasi-Newton is the Jacobian at the starting point (x 0 , λ 0 , z 0 ), and only quasi-Newton steps are taken in the interior point algorithm. If we use x 0 and λ 0 as the reference points and the algorithm converges, the limit point could be very different from the true solution, as initial points usually are far away from the solution, especially for infeasible IPMs. If we update the reference points at each quasi-Newton iteration, as it is usually the choice in literature [22,23], we eliminate their effect on the right hand side of (16) during affine scaling steps. By (7), B k+1 is the Jacobian of a linear approximation ofF which interpolates (x k , λ k , z k ) and (x k+1 , λ k+1 , z k+1 ). As the regularization parameters are fixed during quasi-Newton iterations, the reference points can be seen as simple constant shifts onF , with no effect on the Jacobian. Therefore, the only request is thatF has to be evaluated at points (x k , λ k , z k ) and (x k+1 , λ k+1 , z k+1 ) using the same reference points, when calculating y k by (14). The effect of changing the reference points at each iteration in practice is the extra evaluation ofF at the beginning of iteration k.
Implementation
The quasi-Newton approach can easily be inserted into an existing interior point method implementation. In this work, the primal-dual interior point algorithm HOPDM [24] was modified to implement the quasi-Newton approach. Algorithm 2 describes the steps of a conceptual quasi-Newton primal-dual interior point algorithm.
Algorithm 2: Quasi-Newton Interior Point algorithm
Initialization: F , J and (x 0 , λ 0 , z 0 ). Set k ← 0 and ℓ ← 0.
1. Compute the direction (∆x k , ∆λ k , ∆z k ) by solving (16).
2. Calculate α k P and α k D such that (x k+1 , λ k+1 , z k+1 ) given by (4) satisfies x k+1 , z k+1 > 0.
3. Compute s k and y k by (14). If quasi-Newton information will be stored, then store the appropriate quasi-Newton information and set ℓ ← ℓ + 1; else set ℓ ← 0.
4. Set k ← k + 1 and go back to step 1.
The most important element of Algorithm 2 is ℓ, the memory size of the low-rank update, which controls whether the iteration involves Newton or quasi-Newton steps. At step 1 several systems (16) might be solved, depending on the IPM used. HOPDM implements the strategy of multiple centrality correctors [7], which tries to maximize the step-size at each iteration. HOPDM also implements the regularization strategy (18). Note in (16) that we do not have to care how the systems are solved, only how to implement the matrix-vector multiplication H k v efficiently.
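As an illustration of how such a loop fits together, the following self-contained Python sketch alternates one Newton step with up to ℓ max Broyden-"bad" quasi-Newton steps on a dense toy quadratic program; it omits regularization, multiple centrality correctors and all of HOPDM's heuristics, and the dense solve merely stands in for reusing a stored factorization.

```python
import numpy as np

def max_step(v, dv, eta=0.9995):
    """Largest alpha in (0,1] keeping v + alpha*dv > 0 (fraction-to-boundary)."""
    neg = dv < 0
    if not neg.any():
        return 1.0
    return min(1.0, eta * np.min(-v[neg] / dv[neg]))

def qn_ipm_sketch(Q, A, b, c, iters=25, l_max=5, sigma=0.5):
    """Toy primal-dual loop: one Newton step, then up to l_max quasi-Newton
    steps reusing the last Jacobian through Broyden-'bad' corrections."""
    n, m = Q.shape[0], A.shape[0]
    x, lam, z = np.ones(n), np.zeros(m), np.ones(n)

    def residual(x, lam, z, mu=0.0):
        return np.concatenate([Q @ x + c - A.T @ lam - z, A @ x - b, x * z - mu])

    def jacobian(x, z):
        J = np.zeros((2 * n + m, 2 * n + m))
        J[:n, :n], J[:n, n:n + m], J[:n, n + m:] = Q, -A.T, -np.eye(n)
        J[n:n + m, :n] = A
        J[n + m:, :n], J[n + m:, n + m:] = np.diag(z), np.diag(x)
        return J

    J_ref, triples, l = None, [], 0
    for k in range(iters):
        if l == 0:                            # Newton step: rebuild the Jacobian
            J_ref, triples = jacobian(x, z), []
        mu_target = sigma * (x @ z) / n
        v, alphas = -residual(x, lam, z, mu_target), []
        for s, y, rho in reversed(triples):   # Broyden-'bad' corrections
            a = (y @ v) / rho
            v, alphas = v - a * y, alphas + [(s, a)]
        step = np.linalg.solve(J_ref, v)      # stands in for a stored factorization
        for s, a in alphas:
            step = step + a * s
        dx, dlam, dz = step[:n], step[n:n + m], step[n + m:]
        ap, ad = max_step(x, dx), max_step(z, dz)
        xn, lamn, zn = x + ap * dx, lam + ad * dlam, z + ad * dz
        s_vec = np.concatenate([xn - x, lamn - lam, zn - z])
        y_vec = residual(xn, lamn, zn) - residual(x, lam, z)
        triples.append((s_vec, y_vec, y_vec @ y_vec))
        l = (l + 1) % (l_max + 1)
        x, lam, z = xn, lamn, zn
        print(f"iter {k:2d}   mu = {x @ z / n:.2e}")
    return x

# Tiny example: min 0.5*||x||^2 - c'x  subject to  sum(x) = 1, x >= 0
n = 5
qn_ipm_sketch(np.eye(n), np.ones((1, n)), np.array([1.0]),
              -np.linspace(0.0, 1.0, n))
```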
Step 3 is the most important step in a quasi-Newton IP algorithm, since it decides whether or not quasi-Newton steps will be used in the next iteration. Several possible strategies are discussed in this section, as well as some implementation details.
Bound constraints l ≤ x ≤ u, l, u ∈ R n can be considered in the general definition (1) of a quadratic programming problem by using slack variables. HOPDM explicitly deals with bound constraints and increases the number of variables to 4n + m. When bound constraints are considered, function F is given by Note that, in this case, l is eliminated by proper shifts, u represents upper shifted constraints and t represents slacks. All the results and discussions considered so far can be easily adapted to the bound-constrained case. Therefore, in order to keep notation simple, we will refer to the more general and simpler formulation (1) and work in the (2n + m)-dimensional space.
Storage of H k and computation of H k v
When solving quadratic problems, the Jacobian of function F used in a primaldual interior point method is not expensive to compute and has an excellent structure, which can be efficiently explored by traditional approaches. Therefore, there is no point in explicitly building approximation matrix B k (or H k ) since, by Lemma 1, they would be denser. For an efficient implementation of the algorithm only the computation H k v has to be performed in (16). To accomplish this task, we store • Initial approximation J k− and . . , , if updates are based on Broyden "good" or "bad" method, respectively.
In order to store J k− we have to store vectors x k− and λ k− , since all other blocks of J are constant. If regularization is being used, vectors R p and R d used at iteration k − are also stored. The reference points are not stored. The most important structure to store is the factorization or the preconditioner computed when solving (16) at iteration k − for the first time. Without this information, the computation of H k v would have the same computational cost of using the true matrix J k . Data is stored at step 3 of Algorithm 2, whenever it has decided to store quasi-Newton information and = 0.
Regarding the triples, they are composed of two (2n + m)-dimensional vectors and one scalar. Storing y k−i is the most expensive part in Broyden "bad" updates, since function F has to be evaluated twice. In Broyden "good" updates the computation of u k−i is the most expensive, due to the computation of H k−i y k−i .
The implementation of an algorithm to compute H k v depends on the selected type of low rank update. Algorithm 1 is an efficient implementation of the general Broyden "bad" update (12). If the structure described by Lemma 1 is being used, then all vector multiplications are performed before the solution of the linear system, as described by Algorithm 3. Both algorithms can be easily modified to use updates of the form w T k = a k b k c k T B k in the generic update (13). The only changes are the storage of an extra vector and the computation of scalars α i at step 2. The implementation of the sparse update (15) is straightforward and there is no need to store extra information. Algorithm 3 uses a little extra computation, since vector q is discarded after the computation of all α i . On the other hand, there is no need to store blocks s k−i,λ , i = 1, . . . , .
Algorithm 3: Algorithm for matrix-vector multiplications in Broyden "bad" update using structural information Algorithm 4 describes the steps to compute H k v when Broyden "good" update (10) is considered. Note that a linear system is first solved, then a sequence of vector multiplications and additions is applied. The algorithm is simpler and more general than Algorithm 1, but it has to be called more often in an interior point algorithm: to compute the steps (step 1 in Algorithm 2) and to compute H k y k , needed to build u k (step 3 in Algorithm 2). Algorithm 4 is very general and can be easily modified to consider any least change secant update of the form (13) without extra storage requirements, although not necessarily in an efficient way.
Size of ℓ
The cost of computing H k v increases as the quasi-Newton memory ℓ increases. In addition, it was observed that the quality of the approximation decreases when the quasi-Newton memory is large [17]. In our implementation of Algorithm 2, we also observed a decrease in the quality of the steps when ℓ is too large. The decrease of the barrier parameter µ k = x k T z k /n for different bounds on ℓ is shown in Figure 1, for problem afiro, the smallest example in the Netlib test collection. In this example, Newton steps were allowed after ℓ max quasi-Newton iterations, where ℓ max ∈ {0, 5, 20, 100, 200}. A maximum of 200 iterations was allowed.
We can see that if the Jacobian is only evaluated once (ℓ max = 200) then the method is unable to converge in 200 iterations. As the maximum memory is reduced, the number of iterations to convergence is also reduced. On the other hand, the number of (possibly expensive) Newton steps is increased. When ℓ max = 0, i.e., no quasi-Newton steps, the algorithm converges in 7 iterations. We take the same approach as [17] and define an upper bound ℓ max on ℓ in the implementation of Algorithm 2. When this upper bound is reached, we set ℓ to 0, which, by (16), results in the computation of a Newton step. The verification is performed at step 3 of Algorithm 2. This approach is also known as quasi-Newton with restarts [25] and differs from the usual limited-memory quasi-Newton methods [12], where only the oldest information is dropped.
The quasi-Newton steps
The behavior of consecutive quasi-Newton steps depicted in Figure 1 reminds us that it is important to use the true Jacobian in order to improve convergence of the method. However, we would like to minimize the number of times the Jacobian is evaluated, since it involves expensive factorizations and computations. Unfortunately, using only the memory bound as a criterion to compute quasi-Newton steps is not a reasonable choice. When ℓ max = 100, for example, the algorithm converges in 110 iterations, but it spends around 60 iterations without any improvement. As the dimension of the problem increases, this behavior gets even worse. We can also see that the choice ℓ max = 20 is better for this problem, as the algorithm converges in 31 iterations, computing the Cholesky factorization of the Jacobian only twice.
The lack of reduction is related to small step-sizes α k P and α k D . Our numerical experience with quasi-Newton IP methods indicates that the quasi-Newton steps often are strongly attracted to the boundaries. The step-sizes calculated for directions originated from a quasi-Newton predictor-corrector strategy are almost always small and need to be fixed. Several strategies have been tried to increase the step-sizes of those steps: (i) Perturb complementarity pairs x i z i for which the relative component-wise direction magnitude is high and then recompute quasi-Newton direction; (ii) Use multiple centrality correctors [7]; (iii) Gentle reduction of µ on quasi-Newton iterations, selecting σ k close to 1 in the predictor and corrector steps.
Note that the terms in (i) are the inverse of the maximum step-size allowed by each component. The motivation of strategy (i) is the strong relation observed between components of the quasi-Newton direction which are too large with respect their associated variable and components which differ too much from the respective component of the Newton direction for the same iteration, i.e., We display this relation in Figure 2(a) for one iteration on linear problem GE. Positive spikes represent the component-wise relative magnitude of quasi-Newton steps (19) for each component of variables x and z. The higher the spikes, the smaller the step-sizes are. Negative spikes represent the componentwise relative error between the Newton and quasi-Newton directions (20). The lower the spikes, the larger the relative difference between Newton and quasi-Newton components. To generate this figure, the problem was solved twice and, at the selected iteration, the Newton step and quasi-Newton step were saved. Only negative quasi-Newton directions were considered in the figure. It is possible to see in Figure 2(a) that very few components are responsible for the small step-sizes. Interestingly, most of those blocking components are associated with components of the quasi-Newton direction which differ considerably from the Newton direction. Unfortunately, numerical experiments show that the perturbation of variables or setting the problematic components to zero has the drawback of increasing the infeasibility and cannot be performed at every iteration.
To test the impact of each strategy on the quality of the steps, four linear programming problems were selected: afiro, GE, stocfor3 and finnis. The tests were performed as follows. Given an iteration k of a problem, we run algorithm HOPDM allowing only Newton steps up to iteration k − 1. At iteration k only one of each approach is applied: Newton step, quasi-Newton step, or one of the discussed strategies (i), (ii) or (iii). Only one affine-scaling predictor and one corrector were allowed, except for strategy (ii), where multiple centrality correctors were used at iteration k. We repeated this procedure for k from 2 up to the total number of iterations that the original version of HOPDM needed to declare convergence.
The average of the sum of the step-sizes for each problem and for each approach is shown in Table 1. We can see that quasi-Newton steps are considerably smaller than Newton steps. All improvement strategies are able to increase, on average, the sum of the step-sizes. Strategy (i) has the drawback of increasing the infeasibility and has a huge impact on the convergence of the algorithm. Strategy (iii) is simple and efficient to implement but has worse results when compared to strategy (ii), based on multiple centrality correctors. Strategy (ii) has the ability to improve quasi-Newton directions in almost all iterations and has the drawback of extra backsolves. Similar behavior was observed in [7]. The effect of strategy (ii) is shown in Figure 2(b).
Step-sizes are increased, (a) (b) Figure 2: Relation between small step-sizes for quasi-Newton steps (positive spikes) and large relative errors when compared with Newton step (negative spikes) for one iteration on linear problem GE. High positive spikes represent blocking components of the quasi-Newton direction. The errors when only a simple predictor-corrector direction is used are displayed in (a). The effect of using strategy (ii) to improve step-sizes is shown in (b).
but the new quasi-Newton direction is slightly different from the Newton direction for the same step. Strategy (ii) was selected as the default one in our implementation.
In order to perform as few Newton steps as possible, step 3 of Algorithm 2 has to be carefully implemented. Clearly, the first basic condition (Criterion 1) for trying a quasi-Newton step at iteration k + 1, k ≥ 0, is to check whether there is available memory to store the quasi-Newton information at iteration k.
Our experience shows that quasi-Newton steps should always be tried, since they are cheaper than Newton steps. This means that a quasi-Newton step is always tried (but not necessarily accepted) after a Newton step in the present implementation. As shown in Figure 1, using only Criterion 1 can lead to slow convergence and slow convergence is closely related to small step-sizes. Therefore, in addition to Criterion 1 we tested two criteria, which cannot be used together. In Section 5 we compare those different acceptance criteria.
Criterion 2 (α criterion). If iteration k is a quasi-Newton iteration and the combined step-size α k P + α k D is sufficiently large.
Table 1: Average of the sum α k P + α k D for different improvement strategies on selected linear programming problems. The use of multiple centrality correctors (strategy (ii)) resulted in values similar to the Newton step.
Criterion 3 (Centrality criterion). If iteration k is a quasi-Newton iteration and
x k+1 T z k+1 ≤ ε c x k T z k .
Numerical results
Algorithm 2 was implemented in Fortran 77 as a modification of the primaldual interior point algorithm HOPDM [24], release 2.45. The code was compiled using gfortran 4.8.5 and run in a Dell PowerEdge R830 powered with Red Hat Enterprise Linux, 4 processors Intel Xeon E7-4660 v4 2.2GHz and 512GB RAM. The modifications discussed in Sections 3 and 4 have been performed in order to accommodate the quasi-Newton strategy. The main stopping criteria have been set to Mehrotra and Li's stopping criteria [7,26]: where µ = x T z/n. By default, in HOPDM parameters are defined to ε opt = 10 −10 , ε P = 10 −8 and ε D is set to 10 −8 for linear problems and to 10 −6 for quadratic problems. In addition to (21), successful convergence is also declared when lack of improvement is detected and µ/(1 + |c T x|) ≤ 10 3 ε opt . Besides several performance heuristics, HOPDM implements the regularization technique [22] and the multiple centrality correctors strategy [7]. When solving systems with the unreduced matrix, sparse Cholesky factorization of normal equations or LDL T factorization of the augmented system is automatically selected on initialization.
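For orientation, a generic relative stopping test with the thresholds quoted above can be written as follows; the particular scalings are assumptions in the spirit of (21), not the exact Mehrotra-Li expressions implemented in HOPDM.

```python
import numpy as np

def converged(Q, A, b, c, x, lam, z, eps_opt=1e-10, eps_p=1e-8, eps_d=1e-8):
    """Generic relative stopping test: primal feasibility, dual feasibility
    and scaled complementarity gap, each against its own tolerance."""
    mu = x @ z / len(x)
    primal = np.linalg.norm(A @ x - b) / (1.0 + np.linalg.norm(b))
    dual = np.linalg.norm(Q @ x + c - A.T @ lam - z) / (1.0 + np.linalg.norm(c))
    gap = mu / (1.0 + abs(c @ x))
    return primal <= eps_p and dual <= eps_d and gap <= eps_opt

# Example: a point that exactly satisfies the KKT conditions passes the test
Q, A = np.eye(2), np.ones((1, 2))
x, lam = np.array([0.5, 0.5]), np.array([0.5])
z = Q @ x - A.T @ lam          # zero dual residual for c = 0, hence x*z = 0
print(converged(Q, A, A @ x, np.zeros(2), x, lam, z))
```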
HOPDM also has a matrix-free [27] implementation with which the present approach is fully compatible. According to Algorithm 2, once a quasi-Newton step is computed, it is used to build the point (x k+1 , λ k+1 , z k+1 ). However, in practice, if such a step is considered "bad", it is also possible to discard it, setting ℓ = 0, computing the exact Jacobian and performing the Newton step at this iteration. The idea is to avoid quasi-Newton steps which might degrade the quality of the current point. Preliminary experiments using linear programming problems from the Netlib collection were performed in order to test several possibilities for ℓ max in Criterion 1 and to select between Criteria 2 and 3. In addition, we also verified the possibility of rejecting quasi-Newton steps instead of always accepting them. The selected combination uses ℓ max = 5 and Criterion 3 with ε c = 0.99. Rejecting quasi-Newton steps did not lead to reductions in the number of factorizations and has the drawback of more expensive iterations; therefore, the steps are always taken. As mentioned in Section 4, the multiple centrality correctors strategy (ii) is used to improve quasi-Newton directions.
A key comparison concerns the type of low rank update to be used. Three implementations were tested: U1 General Broyden "bad" algorithm, described by Algorithm 1; U2 Sparse Broyden "bad" algorithm, described by Algorithm 3 using update (15) inspired in Schubert's update [20]; U3 General Broyden "good" algorithm, described by Algorithm 4.
Four test sets were used in the comparison: 96 linear problems from Netlib 1 , 10 medium-sized linear problems from Maros-Mészáros misc library 2 , 39 linear problems from the linear relaxation of Quadratic Assignment Problems (QAP) 3 and 138 convex quadratic programming problems from Maros-Mészáros qpdata library 4 . In order to compare algorithms in large test sets, performance profiles were used [28]. A problem is declared solved by an algorithm if the obtained solution (x * , λ * , z * ) satisfies (21). Number of factorizations or total CPU time are used as performance measures.
Using the default HOPDM values for (21), implementations U1, U2 and U3 are able to solve 269, 275 and 271 problems, respectively, out of 283. There were 19 problems in which at least one implementation did not solve. We relaxed the parameters in (21), multiplying them by a factor of 10 2 , and solved the 19 problems again. The resulting performance profiles in numbers are shown in Table 2, using number of factorizations and CPU time as performance measures. The efficiency of an algorithm is the number of solved problems in which the algorithm spent the smallest number of factorizations (or the smallest amount of CPU time) among the compared algorithms. The robustness is the total number of problems solved.
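The efficiency and robustness counts used below can be computed from a cost table as in the following sketch (np.inf marks a failure); counting ties as wins for every solver attaining the best cost is one common convention and is assumed here.

```python
import numpy as np

def performance_profile(costs):
    """costs: (n_problems, n_solvers) array of factorization counts or CPU
    times, with np.inf marking failures.  Returns the performance ratios,
    each solver's efficiency (number of wins) and robustness (problems solved)."""
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)
    ratios = costs / best
    efficiency = (ratios == 1.0).sum(axis=0)      # solver attains the best cost
    robustness = np.isfinite(costs).sum(axis=0)   # solver solves the problem
    return ratios, efficiency, robustness

# Three solvers on four problems; inf = failure
costs = [[10, 12, np.inf],
         [ 7,  7,  9],
         [20, 15, 14],
         [ 5,  6,  8]]
_, eff, rob = performance_profile(costs)
print("efficiency:", eff, "robustness:", rob)
```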
We can see that update U2 solves 210 problems using the smallest number of factorizations and 137 problems using the least CPU time, while U1 solves 177 and 126, and U3 solves 123 and 85, respectively. In addition, updates U2 and U3 are the most robust implementations, being able to solve 281 out of 283 problems. Therefore, U2 was used as the default update in this work. Update U2 performed particularly well on quadratic problems, which explains the difference in efficiency between the updates.
Based on the preliminary results, the default implementation of Algorithm 2, denoted qnHOPDM from now on, uses update U2 for computing the step in (16), strategy (ii) to improve quasi-Newton directions, and Criteria 1 and 3 to decide at step 3 when to use quasi-Newton. By default, HOPDM uses multiple centrality correctors, which were shown to improve convergence of the algorithm [7]. We implemented two versions of Algorithm 2: with (qnHOPDM-mc) and without (qnHOPDM) multiple centrality correctors for computing Newton steps. Since we are using strategy (ii), multiple correctors are always used for quasi-Newton steps. Each implementation was compared against its respective original version: HOPDM-mc and HOPDM.
In the first round of tests only the QAP collection was excluded from the comparison, which gives 244 problems from Netlib and from Maros-Mészáros linear and quadratic programming test collection. The performance profiles using number of factorizations and CPU time as performance measures are shown in Figure 3. Comparisons between the implementation of HOPDM without multiple centrality correctors and qnHOPDM are given by Figures 3(a) and 3(b). The comparison of implementations HOPDM-mc and qnHOPDM-mc is displayed in Figures 3(c) and 3(d).
Similarly to the previous comparison, using default parameters, 5 problems were not solved by qnHOPDM or HOPDM without multiple centrality correctors, while 7 problems were not solved by qnHOPDM-mc or HOPDM-mc. Criteria (21) was relaxed in the same way on these problems. Using this approach, HOPDM is able to solve all the 244 problems, qnHOPDM solves 242, HOPDM-mc solves 243 and qnHOPDM-mc solves 242. The quasi-Newton implementations are able to successfully reduce the number of factorizations, as shown in Figures 3(a) and 3(c). We can see in Figure 3(a) that from all 242 problems considered solved by qnHOPDM, in 237 it uses less factorizations than HOPDM without multiple centrality correctors. On the other hand, for about 150 problems, qnHOPDM uses at least twice as much CPU time as HOPDM (Figure 3(b)). The behavior of the implementations using multiple centrality correctors in the Newton step is similar, but HOPDM-mc has improved efficiency results. The problems where qnHOPDM reduces both factorizations and CPU time when compared to HOPDM without centrality correctors are highlighted in Table 3. The only problem which qnHOPDM-mc uses strictly less CPU time than HOPDM-mc is the quadratic programming problem cont-101.
Our last comparison considers 39 medium-sized problems from the QAP collection. These problems are challenging, since they are sparse, but their Cholesky factorization is very dense. Performance profiles were once more used for comparing the implementations. As the algorithm approaches the solution, the linear systems become harder to solve. Therefore, using default HOPDM values for parameters in (21) the number of problems solved is 21 (HOPDM), 31 (qnHOPDM), 25 (HOPDM-mc) and 35 (qnHOPDM-mc). Clearly the quasi-Newton approach benefits of using matrices that are not too close to the solution. From the 39 problems, 19 were solved again using relaxed parameters for the comparison between HOPDM and qnHOPDM, and 14 were solved again for the comparison between HOPDM-mc and qnHOPDM-mc. The results are shown in Figure 4. Quasi-Newton IPM is the most efficient and robust algorithm in terms of CPU time for both implementations, solving all 39 problems. Without multiple centrality correctors (Figure 4(a)), HOPDM has a poor performance and is not able to solve any problem using less CPU time than qnHOPDM. When multiple centrality correctors are allowed (Figure 4(b)), HOPDM-mc is able to solve only 10 problems using less or equal CPU time than qnHOPDM-mc. Clearly, the efficiency of qnHOPDM is due to the decrease in the number of factorizations, as shown in Table 4. In this table we display the number of factorizations (F) and CPU time (CPUt) for each problem and each algorithm in all QAP test problems considered. When no multiple centrality correctors are allowed at Newton steps, qnHOPDM displays the biggest improvements, being the fastest solver in all problems. The results are more competitive when multiple centrality correctors are allowed, but qnHOPDM-mc was the most efficient in 29 problems while HOPDM-mc was the most efficient in 10 problems.
Conclusions
In this work we discussed a new approach to IPM based on rank-one secant updates for solving quadratic programming problems. The approach was motivated by the multiple centrality correctors, which provide many possible points where the function F can be evaluated in order to build a good approximation of J. Instead of using several points, the present approach uses only the newly computed point in order to build a low-rank approximation to the unreduced matrix at the next iteration. The computational cost of solving the quasi-Newton linear system is comparable with the cost of computing one corrector, as all the factorizations and preconditioners have already been calculated.
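For readers unfamiliar with the update rule, the following minimal dense sketch (illustrative only; it omits the sparsity-preserving modification and the IPM-specific structure discussed in the next paragraph) shows the generic rank-one "bad" Broyden secant update of an inverse-Jacobian approximation.

```python
# Dense sketch of the Broyden "bad" rank-one secant update of the inverse
# Jacobian approximation H, chosen so that the secant condition H_new @ y = s
# holds; not the sparsity-preserving variant implemented in HOPDM.
import numpy as np

def broyden_bad_update(H, s, y):
    """H: current inverse-Jacobian approximation, s = x_{k+1} - x_k,
    y = F(x_{k+1}) - F(x_k). Returns the rank-one updated approximation."""
    s = np.asarray(s, dtype=float)
    y = np.asarray(y, dtype=float)
    denom = y @ y
    if denom == 0.0:
        return H                      # degenerate step: keep previous approximation
    return H + np.outer(s - H @ y, y) / denom

# A quasi-Newton step then uses the updated approximation: dx = -H_new @ F(x_{k+1}).
```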
It was shown that rank-one secant updates maintain the main structure of the unreduced matrix. Also, several aspects of an efficient implementation were discussed. The proposed algorithm was implemented as a modification of the algorithm HOPDM using the Broyden "bad" update, modified to preserve the sparsity structure of the unreduced matrix.
Figure 4: Performance profiles for the comparison between quasi-Newton IPM and HOPDM on the QAP test collection. The CPU time was used as performance measure.
The implementation was compared with the original version of HOPDM and was able to reduce the overall number of factorizations in most of the problems. However, only in the test set containing linear relaxations of quadratic assignment problems was the reduction in the number of factorizations systematically translated into a reduction of the CPU time of the algorithm. This suggests that the proposed algorithm is suitable for problems where the computational cost of the factorizations is much higher than the cost of the backsolves. | 2018-06-26T09:40:55.000Z | 2018-06-26T00:00:00.000 | {
"year": 2018,
"sha1": "4b8567aa35054b3f068f9e549bd193daa7d83954",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10589-019-00102-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "4b8567aa35054b3f068f9e549bd193daa7d83954",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
257767898 | pes2o/s2orc | v3-fos-license | Score to assess the probability of relapse in granulomatosis with polyangiitis and microscopic polyangiitis
Objective To develop a score assessing the probability of relapse in granulomatosis with polyangiitis (GPA) and microscopic polyangiitis (MPA). Methods Long-term follow-up data from GPA and MPA patients included in five consecutive randomised controlled trials were pooled. Patient characteristics at diagnosis were entered into a competing-risks model, with relapse as the event of interest and death the competing event. Univariate and multivariate analyses were computed to identify variables associated with relapse and build a score, which was then validated in an independent cohort of GPA or MPA patients. Results Data collected from 427 patients (203 GPA, 224 MPA) at diagnosis were included. Mean±SD follow-up was 80.6±51.3 months; 207 (48.5%) patients experienced ≥1 relapse. Relapse risk was associated with proteinase 3 (PR3) positivity (HR=1.81 (95% CI 1.28 to 2.57); p<0.001), age ≤75 years (HR=1.89 (95% CI 1.15 to 3.13); p=0.012) and estimated glomerular filtration rate (eGFR) ≥30 mL/min/1.73 m² (HR=1.67 (95% CI 1.18 to 2.33); p=0.004) at diagnosis. A score, the French Vasculitis Study Group Relapse Score (FRS), from 0 to 3 points was modelised: 1 point each for PR3-antineutrophil cytoplasmic antibody positivity, eGFR ≥30 mL/min/1.73 m² and age ≤75 years. In the validation cohort of 209 patients, the 5-year relapse risk was 8% for a FRS of 0, 30% for 1, 48% for 2 and 76% for 3. Conclusion The FRS can be used at diagnosis to assess the relapse risk in patients with GPA or MPA. Its value for tailoring the duration of maintenance therapy should be evaluated in future prospective trials.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Granulomatosis with polyangiitis and microscopic polyangiitis are two antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) in which relapse is common. There is a lack of criteria to predict the risk of relapse in AAV.
WHAT THIS STUDY ADDS
⇒ We identified three factors (PR3-ANCA, age ≤75 years and estimated glomerular filtration rate ≥30 mL/min/1.73 m²) associated with a higher risk of relapse of AAV. These factors were combined to form a score, ranging from 0 to 3 points (1 point for each factor) that predicts the risk of relapse in AAV.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ This score could, therefore, help clinicians tailor the duration of maintenance therapy in AAV, provided it is evaluated and validated in future prospective trials.
INTRODUCTION
Granulomatosis with polyangiitis (GPA) and microscopic polyangiitis (MPA) are the two main antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV). 1 GPA and MPA differ in their pathogenesis, genetics and serotypes, but patients with these two types of vasculitis share many clinical features and currently receive similar treatments. 2-7 A therapeutic strategy combining glucocorticoids (GC) and cyclophosphamide or rituximab to induce remission has dramatically improved survival of AAV in the past decades. [5][6][7][8][9][10] Despite this improvement, maintaining remission in patients with GPA or MPA remains challenging. Relapse occurs in 13.7%-44% of cases at 18-36 months depending on the duration of follow-up, patient characteristics and maintenance treatment. 2 4 5 10 Maintenance therapy with low-dose preemptive rituximab has significantly decreased the risk of relapse in comparison with azathioprine, 11 but relapses still occur after rituximab discontinuation. Long-term follow-up of the MAINRITSAN 1 trial showed that relapse-free survival was 58% at 60 months in patients treated with rituximab 500 mg every 6 months for 18 months. 12 Increasing the duration of azathioprine maintenance therapy from 24 to 48 months has been shown to decrease the risk of relapse and improve renal outcome. 13 However, adverse events were more frequent in the group of patients receiving 48 months of maintenance treatment, suggesting that prolonged maintenance therapy is probably not suitable for every GPA or MPA patient. 13 The MAINRITSAN 3 trial recently confirmed that extended therapy with biannual rituximab infusions over 18 months also significantly decreases the frequency of AAV relapse without increasing the number of severe adverse events. Although prolonged treatment with rituximab had a good safety profile during the follow-up period of the pivotal clinical trials, 5 9 11 12 14 data available since the SARS-Cov2 pandemic have shown that rituximab increases morbidity and mortality in these patients when infected with SARS-Cov2 15 16 because it increases the risk of severe forms of COVID-19 and because it decreases the quality of the postvaccine humoral immune response to SARS-Cov2. 17 18
We do not yet have sufficient knowledge to predict flares or relapses. We also lack reliable indicators that could be used to tailor treatment in order to minimise immunosuppression in those less likely to experience relapse. Such markers could also be used to monitor patients more likely to experience relapse while maintaining higher levels of immunosuppressive therapy. It is therefore essential to better identify the factors associated with the risk of relapse in order to take a further step towards personalised management. To date, studies have mainly identified kidney failure as a protective factor against relapse, whereas anti-proteinase 3 (PR3) antibodies and cardiovascular involvement have been associated with an increased risk of relapse. [19][20][21][22]
The objective of this study was to identify baseline clinical and biological patient characteristics associated with the probability of relapse during follow-up in order to develop a relapse prediction score. This score could help clinicians determine whether patients in remission after induction therapy should receive extended maintenance treatment.
PATIENTS AND METHODS
Patients and definitions
Patient and disease characteristics at diagnosis and long-term follow-up data were pooled from five consecutive prospective trials conducted by the French Vasculitis Study Group (FVSG) involving GPA and MPA patients (CHUSPAN I, CHUSPAN II, CORTAGE, MAINRITSAN and WEGENT). 4 10-12 23-27 All trials included GPA and MPA patients fulfilling the revised Chapel Hill Consensus Conference nomenclature. 1 All patients were included at AAV diagnosis, except in the MAINRITSAN and WEGENT studies, in which patients were included at the start of maintenance therapy. Therefore, GPA and MPA patients who were included in these studies after a flare that was already a relapse, and for whom data at AAV diagnosis were missing, were excluded from this study. Patients with eosinophilic GPA (EGPA) or polyarteritis nodosa (PAN) included in the CHUSPAN I, CHUSPAN II and CORTAGE studies were also excluded from the analysis.
Development cohort
Patients were assessed at baseline for manifestations of GPA and MPA in each organ system using the Birmingham Vasculitis Activity Score (BVAS) 28 and were treated according to the protocol of the CHUSPAN I, CHUSPAN II, WEGENT, MAINRITSAN and CORTAGE trials. Then, patients were prospectively and routinely monitored in extended follow-up studies, with data reported on relapses, treatments, vasculitis-induced damage and vital status. Follow-up data on ANCA were not collected in this study since we were only interested in the effect of clinico-biological characteristics at diagnosis on subsequent relapse risk.
Validation cohort
The validation cohort included AAV patients aged ≥65 years from the FVSG registry database. 29 Patients were included if they had a new diagnosis of GPA or MPA made after 2000, according to the revised Chapel Hill Consensus Conference, 1 and if they were either followed up for at least 6 months and/or were deceased. Duplicates with the development cohort were removed from the validation cohort. All patients enrolled in the validation cohort were drawn from the FVSG registry and thus provided written informed consent.
Definitions
Remission was defined as the absence of disease activity attributable to GPA or MPA manifestations for ≥3 consecutive months, corresponding to BVAS=0, irrespective of the GC dose. Relapse was defined as the recurrence and/or appearance of ≥1 new vasculitis manifestation(s) after remission lasting ≥3 months. 30
Statistical analyses
Continuous variables are expressed as means ±SD and categorical variables as numbers (%). Statistical analyses were computed using SAS V.9.4 (SAS Institute) and R software with the TimeROC package. 31 In AAV, death and relapse are not independent events. Conventional statistical methods for time-to-event analysis assume that competing risks are absent, meaning that the incidence of relapse (assuming independent competing risks) is estimated in a population where no one dies, which may not realistically reflect relapse risk in a population. 32 Thus, we applied a Fine-Gray subdistribution hazard model to model the subdistribution hazard of AAV relapse, with death considered as the competing event. The proportional hazards assumption was not realistic for all data, insofar as the effectiveness of the treatments differs from one trial to the other. To account for this heterogeneity of treatment effect on relapse risk, we performed a stratified analysis using the STRATA statement of the SAS PHREG procedure. 33 The randomisation arm of each trial 4 10-12 23-27 was used to determine the strata levels. Times to relapse and/or death were calculated from treatment onset. The candidate variables to be entered in the score were: sex, diagnosis (MPA vs GPA), ANCA status (positive vs negative), PR3-ANCA status (positive vs negative), MPO-ANCA status (positive vs negative), fever, myalgias, arthralgias, ear, nose and throat (ENT) involvement, pulmonary involvement, asthma, alveolar haemorrhage, pulmonary nodules, kidney involvement, proteinuria >0.2 g/day, haematuria, skin lesions, neurological involvement, peripheral nervous system involvement, mononeuritis multiplex, peripheral polyneuropathy, central nervous system involvement, cardiac involvement, pericarditis, specific cardiomyopathy, gangrene, gastrointestinal involvement, abdominal pain, severe abdominal involvement, digestive haemorrhage, ophthalmologic symptoms, serum creatinine, estimated glomerular filtration rate (eGFR) (CKD-EPI), five factor score (1996 and 2011 versions).
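To make the competing-risks framing concrete, the following Python sketch (illustrative only; the study itself used Fine-Gray regression in SAS, not this estimator) computes a nonparametric cumulative incidence function for relapse with death treated as a competing event. The follow-up times and event codes in the example are hypothetical.

```python
# Nonparametric cumulative incidence of relapse with death as a competing
# event (the quantity a Fine-Gray model targets); codes: 0=censored,
# 1=relapse, 2=death. Hypothetical data, for illustration only.
import numpy as np

def cumulative_incidence(time, event, cause=1):
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    surv = 1.0                       # overall event-free survival just before t
    cif = 0.0
    out_t, out_cif = [0.0], [0.0]
    for t in np.unique(time[event > 0]):          # ordered event times
        at_risk = np.sum(time >= t)
        d_cause = np.sum((time == t) & (event == cause))
        d_any = np.sum((time == t) & (event > 0))
        cif += surv * d_cause / at_risk           # increment for cause of interest
        surv *= 1.0 - d_any / at_risk             # update overall survival
        out_t.append(t)
        out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

# Hypothetical follow-up (months) and outcomes for six patients.
times, relapse_cif = cumulative_incidence([12, 20, 20, 35, 40, 60], [1, 2, 0, 1, 0, 1])
```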
We developed the score using the following steps:
Step 1 The relapse sub-HRs and corresponding CIs and p values were obtained for all candidate variables, separately, using a bivariate Fine-Gray model stratified on the treatment arm. All candidate variables were categorical except for age and GFR, for which the optimal threshold was determined according to clinical relevance and spline regression.
Step 2 A multivariate Fine-Gray model was constructed using a manual backward selection procedure including all variables influencing the relapse sub-HR with a p≤0.2 in the last step. All covariates with a significant (p<0.05) effect on the relapse subhazard were kept in the final model (the list of included variables is provided in the online supplemental file).
Step 3 Two score calculations were proposed from the final model. In the first, estimated regression coefficients were multiplied by 10 and rounded to the nearest integer. 34 The resulting integer represents the weight for the presence of the given risk factor. Reference levels of the categorical variables were assigned a score of zero.
In the second, we developed a simple score for which the weighting was 1 (for variables increasing the risk of relapse) or −1 (for variables protecting against relapse). In both scoring systems, the relapse score was obtained by summing the weights of the risk factors present.
Step 4 Then, the discriminative ability of the simple and complex scores to predict relapse at 24, 36, 48, 60, 72 and 84 months after a diagnosis of vasculitis was investigated. For this purpose, we estimated the area under the ROC curve (AUC) for relapse prediction as a function of the score value at each of these time points using the procedure described by Blanche et al. 31 This step allowed us to choose the best weighting system for scoring.
Step 5 Finally, the score was validated in an independent cohort of 209 patients with GPA or MPA using a Fine-Gray model with the score value as the independent variable. The cumulative incidence function of relapse was estimated and compared between the different categories of score value. Finally, the AUC of the score to predict relapse at different times was calculated to check the discrimination of the score.
Studied population
Patients with PAN (n=108), EGPA (n=185) and those with missing data at diagnosis of AAV (n=23) were not considered in this study.
Score predicting AAV relapse
Multivariate analysis was used to build a predictive score for relapse. AAV treatment (induction and maintenance) was taken into account via stratification on the randomisation arm in each therapeutic trial. For the first model, each item independently associated with the occurrence of relapse was weighted based on its HR. This first version of the score (complex score) varied from −11 to +6 points with 6 possible states. Simplification of this complex score was evaluated by grouping strata (online supplemental figure 1). Therefore, a simplified version of the score, entitled the FVSG Relapse Score (FRS), was modelled as follows: 1 point for PR3-ANCA, 1 point for eGFR (CKD-EPI) ≥30 mL/min/1.73 m² and 1 point for age ≤75 years at AAV diagnosis. Thus, the FRS varied from 0 to 3 points. Comparison of the AUC assessing the sensitivity and specificity of the complex score versus the FRS in the development cohort showed that the performance of these two scores was comparable (online supplemental table 1). We therefore retained the simple version of the score (FRS) for further analysis.
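For illustration only (this snippet is not part of the original analysis), the simplified score defined above can be computed directly from the three baseline items; the function below is a hypothetical helper, not code used in the study.

```python
# Illustrative helper computing the FVSG Relapse Score (FRS) as defined above:
# 1 point each for PR3-ANCA positivity, eGFR >= 30 mL/min/1.73 m2 and
# age <= 75 years at AAV diagnosis.
def frs(pr3_anca_positive: bool, egfr: float, age: float) -> int:
    return int(pr3_anca_positive) + int(egfr >= 30.0) + int(age <= 75.0)

# Example: a 68-year-old PR3-ANCA-positive patient with eGFR 45 scores 3.
assert frs(pr3_anca_positive=True, egfr=45.0, age=68.0) == 3
```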
In the development cohort, the FRS was 0 for 23 (5.
Specificity and sensitivity of the FRS
The AUC of the ROC curves analysing the sensitivity and specificity of the score for predicting relapse throughout follow-up after AAV diagnosis showed that the performance of the FRS was better in the validation cohort than in the development cohort, increasing over time to 0.71 (0.61-0.81) at 36 months and 0.83 (0.78-0.89) at 60 months after AAV diagnosis (figure 3C,D).
DISCUSSION
This study provides a simple score, entitled FRS, which can be calculated at the time of diagnosis of GPA or MPA to predict the risk of relapse during follow-up and which may ultimately allow clinicians to tailor the duration of maintenance therapy for these vasculitides. The FRS is composed of three variables and ranges from 0 to 3 points: positivity of PR3-ANCA, GFR ≥30 mL/min/1.73 m² and age ≤75 years (1 point each). The higher the FRS, the greater the risk of relapse. Thus, we can consider that the risk is very low for a FRS of 0, moderate for a FRS of 1 and high for a FRS of 2 or 3.
As previously reported, 19-22 35 we found that the presence of PR3-ANCA at diagnosis was associated with an increased risk of relapse, whereas kidney failure reduced it. However, unlike previous studies, 19 21 22 we did not find that cardiovascular involvement was associated with an increased risk of relapse, which may be related to the fact that this type of AAV involvement was not frequent in our development cohort (only 14% of cases). Another study 36 had also shown that having PR3-ANCA was associated with an increased risk of relapse in two independent cohorts. The authors also found that the presence of lung involvement was associated with an increased risk of relapse, although this was not the case in our study. This discrepancy can be explained in several ways: (1) less than 50% of patients received maintenance immunosuppressive therapy in the study of Pagnoux et al (compared with 70% in the development cohort and 80% in the validation cohort of our study), (2) the statistical model used in this study did not take into account competing risks (death vs relapse) and (3) there is a very significant overlap between pulmonary manifestations and the presence of PR3-ANCA, but PR3-ANCA has a greater impact on the risk of relapse. By contrast, we were able to identify that an age ≤75 years at AAV diagnosis was independently associated with an increased risk of relapse, which was also confirmed by the FVSG in a recently published study. 29 It is likely that this finding was not previously identified because patients aged over 80 years were not included in European Vasculitis Society (EUVAS) therapeutic trials, 19 which was not the case in the present work since the CORTAGE study, which was specifically designed for elderly individuals, enrolled 83 patients. 10 Age could also explain why the performance of the FRS was better in the validation cohort than in the development cohort since the mean age at diagnosis of the validation cohort was 78 years. This result is very important since age is one of the most important prognostic factors for severe infections, which are currently the most frequent cause of death within 1 year after AAV diagnosis and the third most common after 1 year of follow-up. [37][38][39] Therefore, our results suggest that the oldest patients, who are most at risk of infection and least at risk of relapse, should not be overtreated. However, we have also shown that although AAV patients >75 years have a lower relapse risk than patients aged 65-75 years despite a lower probability of having received maintenance therapy, they still benefit from such a treatment regimen. 29
Table 4: Risk of relapse (cumulative incidence function) in each cohort depending on the FRS at AAV diagnosis
One of the weaknesses of our study is that most patients were treated before the widespread use of rituximab, which changed the treatment of GPA and MPA patients in a major way. 43 In the validation cohort, 84 patients received rituximab maintenance therapy and of these only 8 relapsed during follow-up. 44 Thus, it was not possible to reliably assess the performance of our score in the subgroup of patients who received maintenance therapy with rituximab. In a recent study including GPA and MPA patients treated with rituximab in induction and maintenance, age, presence of ANCA and ENT signs (which are associated with PR3-ANCA) were among the variables included in a predictive model for the risk of relapse after rituximab discontinuation, which is consistent with our results. 45 The long-term follow-up data of the MAINRITSAN 1 trial also identified the importance of PR3-ANCA and the persistence of this marker after 1 year of maintenance therapy in predicting the occurrence of relapse after discontinuation of rituximab. 12 The importance of the ANCA status during follow-up was confirmed in an independent cohort of GPA patients. 46 However, these recent data, and the fact that our analyses were stratified on the treatment arm of each trial, one of which included rituximab maintenance therapy (MAINRITSAN 1), suggest that our score may work in patients receiving maintenance therapy with rituximab. In any case, it will be necessary to validate the FRS in a larger and more recent cohort of patients, in whom rituximab is the main agent used for induction and maintenance, as per current guidelines. Our study has many strengths: the reliability of the data (data from prospective therapeutic trials), the use of a Fine-Gray model to distinguish between two competitive risks (death and relapse), the analysis stratified on the induction and maintenance treatment (which influences the progression of the disease), and its validation in an independent cohort, which allowed us to show that the FRS reliably predicts the risk of relapse and that its level of performance increases with time.
In summary, we propose a simple score, entitled FRS, ranging from 0 to 3 points (PR3-ANCA (1 point), age ≤75 years (1 point) and eGFR ≥30 mL/min/1.73 m² (1 point)), which may be used at diagnosis to predict the risk of relapse in patients with GPA or MPA. The value of the FRS for tailoring maintenance treatment for AAVs deserves to be validated in future prospective trials, with the aim of providing patients with appropriate personalised management.
Collaborators French Vasculitis Study Group.
Contributors MS and LG were the principal investigators and take primary responsibility for the paper. MS, ST, PC, CP, PC, AK, LM, BT, XP and LG recruited the patients. MS, HD, ST, BT, XP and LG contributed to data interpretation. HD did the statistical analysis. MS, BT, XP and LG drafted the manuscript. All authors revised the manuscript for important intellectual content and approved the final version of the article. Guarantor: MS.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Patient consent for publication Not applicable.
Ethics approval Patients enrolled in the development cohort provided signed informed consent at time they were included in five consecutive prospective trials conducted by the French Vasculitis Study Group (FVSG) (CHUSPAN I, CHUSPAN II, CORTAGE, MAINRITSAN and WEGENT). These trials were conducted according to the Declaration of Helsinki and subsequent amendments. In addition, all patients enrolled in the validation cohort were drawn from the French Vasculitis Study Group registry and thus provided written informed consent. The study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cohort studies. Participants gave informed consent to participate in the study before taking part.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available on reasonable request.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | 2023-03-28T13:04:14.200Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "2bec93b4f9ed3ff9c1909742e421db1afcff9eb0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "BMJ",
"pdf_hash": "2bec93b4f9ed3ff9c1909742e421db1afcff9eb0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231939079 | pes2o/s2orc | v3-fos-license | RNF43 inhibits WNT5A driven signaling and suppresses melanoma invasion
RNF43 is a E3 ubiquitin ligase and known negative regulator of WNT/β-catenin signaling. We demonstrate that RNF43 is also regulator of noncanonical WNT5A-induced signaling in human cells. Analysis of the RNF43 interactome using BioID and immunoprecipitation showed that RNF43 can interact with the core receptor complex components dedicated to the noncanonical Wnt pathway such as ROR1, ROR2, VANGL1 and VANGL2. RNF43 triggers VANGL2 ubiquitination and proteasomal degradation and clathrin-dependent internalization of ROR1 receptor. This activity of RNF43 is physiologically relevant and blocks pro-metastatic WNT5A signaling in melanoma. RNF43 inhibits responses to WNT5A, which results in the suppression of invasive properties of melanoma cells. Furthermore, RNF43 prevented WNT5A-assisted development of resistance to BRAF V600E inhibitor. In line with these findings, RNF43 expression decreases during melanoma progression and RNF43-low patients have worse prognosis. We conclude that RNF43 is a newly discovered negative regulator of WNT5A-mediated biological responses that desensitizes cells to WNT5A.
Introduction
Cells deficient in both RNF43 and ZNRF3 (RNF43/ZNRF3 dKO; R/Z dKO) showed higher VANGL2 levels and higher DVL phosphorylation (Fig. 3B, Figure 3 figure supplement 1B). Interestingly, treatment with the proteasome inhibitor MG132 but not with the autophagosome-lysosome inhibitor Chloroquine blocked these effects of RNF43 (Fig. 3C). This suggests that RNF43 action in the noncanonical Wnt pathway depends on the proteasomal degradation pathway, which differs from the Wnt/β-catenin pathway, where RNF43 triggers FZD degradation via the lysosomal pathway
Vemurafenib.
Therefore, we challenged A375 WT cells and their RNF43-expressing derivatives with Vemurafenib. As shown in Fig. 6E, exogenous RNF43 decreased colony formation and proliferation of cells seeded at low density, and Vemurafenib further enhanced this effect. Importantly, both A375 WT and A375 IV cells overexpressing RNF43 completely failed to develop resistance to Vemurafenib and died off during the selection at 1 μM Vemurafenib concentration (Fig. 6F). Altogether these data confirm earlier findings on the importance of WNT5A signaling in the acquisition of Vemurafenib resistance and demonstrate that RNF43 can completely block this process.
Discussion
Our study identified RNF43 as an inhibitor of noncanonical WNT5A-induced signaling. RNF43 physically interacted with multiple receptor components of the Wnt/PCP pathway such as ROR1/2, VANGL1/2 or DVL1/2/3 and triggered degradation of VANGL2 and membrane clearance of ROR1, ultimately resulting in reduced cell sensitivity to WNT5A. The newly discovered RNF43 action in WNT5A-mediated signaling seems to be mechanistically different from the well-known function in the Wnt/β-catenin pathway. For example, we observed ROR1 and VANGL2 interaction with RNF43 in the absence of DVL. In contrast, DVL seems to be essential for the
inhibition further highlights the importance of WNT5A signaling in this process and also uncovers a mechanism that can be explored therapeutically.
Relevance of our findings is likely not limited to melanoma. The signaling cascade RSPO-LGR4/5-RNF43/ZNRF3 has been shown to regulate a variety of biological processes. In light of our results, it is tempting to speculate that the WNT5A-RNF43 axis regulates other developmental, physiological and patho-physiological conditions. For example, WNT5A is overexpressed in
by the global minimum and e) differential expression using the LIMMA statistical test. Prior to volcano plot plotting, suspected BirA* binders were filtered out (proteins identified on at least 2 peptides in both technical replicates of a particular BirA* sample, and present in >3 samples). The volcano plot was created in R using the ggplot2 and ggrepel packages, R version 3.6.1. Proteins with adjusted p-value <0.05 and log fold change >1 were further subjected to gene ontology tools, considering only the first ID of majority protein IDs: g:Profiler online tool (https://biit.cs.ut.ee/gprofiler/gost,
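The filtering step described above was performed in R with LIMMA and ggplot2/ggrepel; purely as an illustration, the following Python sketch applies the same thresholds (adjusted p-value <0.05 and log fold change >1) to a hypothetical results table and is not part of the original analysis pipeline.

```python
# Generic pandas analogue of the volcano-plot thresholding step; the column
# names and example values are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "protein":  ["ROR1", "VANGL2", "CTRL1"],
    "logFC":    [2.3, 1.4, 0.2],
    "adj_pval": [0.001, 0.03, 0.6],
})
hits = results[(results["adj_pval"] < 0.05) & (results["logFC"] > 1)]
print(hits["protein"].tolist())   # candidates that would be passed to gene ontology tools
```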
Western blotting and antibodies
Western blot analysis was performed as it was described before using samples with same
Table 3.
Colony formation assay
To assess the ability to form colonies in the presence of 0.3 μM vemurafenib, 300 of
After that time, the medium was removed and colonies were washed in PBS, fixed in ice-cold methanol for 30 min and stained with 0.5% crystal violet in 25% methanol. After washing and drying, bound crystal violet was eluted with 10% acetic acid and absorbance at 590 nm was measured on a Tecan Sunrise plate reader. Results were normalized to the non-treated A375 wild type results. | 2021-02-17T14:17:26.158Z | 2021-02-08T00:00:00.000 | {
"year": 2021,
"sha1": "72b42f1ed95c3dd5de549430809d500f786bc77f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.65759",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "72b42f1ed95c3dd5de549430809d500f786bc77f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
251169762 | pes2o/s2orc | v3-fos-license | Association Between Levels of Pre-operative Glycosylated Hemoglobin and Post-operative Surgical Site Infections After Elective Surgery in a Low-Income Country
Introduction: Diabetic patients undergoing surgery are more susceptible to hospital-acquired infection, particularly surgical site infection (SSI). Good glycemic control in preoperative patients significantly decreases the risk of SSI. There is a scarcity of data from low-income countries studying the relation between perioperative glycated hemoglobin (HbA1c) levels and postoperative SSI. We aim to establish statistical relation between HbA1c and SSI which will help decrease post-operative infections and morbidity. Methods: This study was conducted in the surgical unit of Jinnah Sindh Medical University, Karachi, Pakistan, from August 2020 to April 2022. Patients who underwent elective surgical procedures (n= 1024) were included in the study and divided into two groups based on their HbA1c levels. Patients with HbA1c levels higher than 6.5% were classified as group A and those with HbA1c less than 6.5% belonged to group B. For statistical analysis, IBM SPSS Statistics for Windows, Version 24.0 (Released 2016; IBM Corp., Armonk, New York, United States) was used. Results: Group A comprised 579 (56.5%) patients. The presence of SSI in participants with HbA1c >6.5% was statistically significant (p-value: 0.011). Genderwise comparison with the presence of SSI was found to be insignificant (p-value: 0.28). Smoking was positively correlated with the absence of SSI. No significance in terms of presence or absence of SSI was found in the comparison of the type of wounds (p-value: 0.25). Conclusion: There is a positive relationship between raised HbA1c levels and the development of SSI. Our study emphasizes the importance of the use of HbA1c levels as a more accurate predictor of glycemic control in pre-operative patients rather than blood glucose levels. It is imperative that surgeons must check HbA1c levels before selecting patients for elective surgeries, especially in low-income countries where the healthcare burden is already huge.
Introduction
Surgical site infection (SSI) is an infection that occurs at or near surgical incisions. The Centers for Disease Control and Prevention (CDC) classifies SSI into incisional (superficial or deep) or organ space [1]. There are several independent risk factors of SSI that increase the mortality and morbidity among surgical patients including the type of procedure, type of wound, preoperative antiseptic techniques, diabetes, hypertension, cigarette smoking, obesity, blood loss, hypothermia, and native flora of patient's own skin or visceral [2].
Patients undergoing surgery with hyperglycemia due to type 2 diabetes mellitus (DM) are more susceptible to hospital-acquired infection [3]. In recent years, clinical trials and studies have shown that controlling blood glucose levels in preoperative patients significantly decreases mortality and morbidity [3]. In 2021, an explorative analysis in patients undergoing surgery was conducted in which the role of glycated hemoglobin (HbA1c) as a perioperative predictor of SSI was discussed. About 38.5% of infections were found among patients with undiagnosed diabetes and pre-diabetic states [4]. Ahmed M et al. also conducted a cross-sectional study with a sample size of 163 patients, in which patients with HbA1c above 8.5% were vulnerable to superficial SSI (33.1%) and deep SSI (12.3%). They therefore concluded that HbA1c is a modifiable independent risk factor for SSI [5].
It is of significant value to determine the scenarios and subgroups of diabetic individuals in which preoperative HbA1c plays a role in predicting the risk of increased post-operative complications [6]. There is a scarcity of data from low-income countries studying the relation between perioperative HbA1c levels and postoperative SSI and its overall impact. We aim to fill this gap by establishing the statistical relation between HbA1c and SSI, which will help decrease post-operative morbidity and mortality, length of hospital stay, and readmissions, leading to a better quality of life.
Materials And Methods
This cohort study was conducted in the surgical unit of Jinnah Sindh Medical University (JSMU), Karachi, Pakistan, from August 2020 to April 2022 after taking ethical review board approval (JSMU/IRB/2020-81).
Patients who underwent elective surgical procedures, such as appendectomy, cholecystectomy, and laparotomy during this time frame were enrolled in the study via consecutive convenient non-probability sampling after taking their informed consent.
After enrollment, as part of the pre-operative assessment, patients' blood was taken via phlebotomy via a cubital vein and sent to the laboratory for random blood sugar (RBS) and HbA1c, along with a complete blood report. Patients with elevated RBS were excluded from the study. Only patients operated by the same surgeon in the same operating room were included in the study to reduce the impact on an individual's skill and the environment. After exclusion, 1024 participants were included in the study. Patients were divided into two groups based on their HbA1c levels. Patients with HbA1c levels higher than 6.5% were classified as group A and patients with HbA1c levels lower than 6.5% were classified as group B.
All surgical procedures were carried out in accordance with protocol. Following a review of the operational reports, each surgery received one of four wound classifications, i.e. clean, clean/contaminated, contaminated, or dirty. Patients were instructed to return for a check-up following the treatment after one week, then again after 30 days, or if they experienced any redness, leaking, pain, or tenderness at the site of the surgical incision. Based on documented purulence, discomfort, redness, tenderness, swelling, and suspicion of SSI upon physical examination, patients were categorized as having SSI or not during follow-up. The final analysis excluded participants who could not be reached for follow-up.
For statistical analysis, IBM SPSS Statistics for Windows, Version 24.0 (Released 2016; IBM Corp., Armonk, New York, United States) was used. Categorical data were shown as frequencies and percentages, while continuous variables were assessed using descriptive statistics and displayed as means ± standard deviations (SDs). To compare the two groups, the chi-square test was used. The null hypothesis was rejected when the p-value was less than 0.05, indicating a significant difference between the groups.
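As an illustration of the group comparison described above (this snippet is not the authors' SPSS analysis, and the SSI counts shown are hypothetical), the same chi-square test can be reproduced from a 2×2 contingency table:

```python
# Chi-square test of SSI frequency in group A (HbA1c > 6.5%) vs group B;
# the group totals match the study (579 and 445), but the SSI splits are
# hypothetical placeholders.
from scipy.stats import chi2_contingency

table = [[62, 517],   # group A: [SSI, no SSI]
         [28, 417]]   # group B: [SSI, no SSI]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # reject the null hypothesis when p < 0.05
```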
Results
Of the total 1024 participants included in the final analysis, 579 (56.5%) belonged to group A while the remaining belonged to group B. The mean age of patients in group A was 43 ± 8 years while that in group B was 45 ± 8 years (p-value: 0.0001). There were no statistical differences between the two groups in terms of gender (p-value: 0.88) and other comorbidities, including smoking (p-value: 0.306) and hypertension (p-value: 0.77). A majority of participants had a clean type of wound in both groups A and B (95.5% and 95.9%, respectively); however, no significant relation was found when comparing the types of wound (p-value: 0.85). The presence of SSI in participants with HbA1c >6.5% was statistically significant (p-value: 0.011) (Table 1).
Discussion
Prevention of surgical site infections is important since they play a significant role in hospital readmissions, prolonged stays, and greater financial burden [7]. After a thorough literature review, we came to the conclusion that many studies support the association between elevated HbA1c levels and an increased risk of SSI [8][9][10]. This aligns with the findings of our study, where patients with HbA1c levels greater than 6.5% were more likely to develop SSI compared to those who had lower levels (p-value: 0.01). A study done by Kopp Lugli et al. found an overall 4.1-times increased risk of developing SSI [4]. Raised preoperative HbA1c is linked with an increased risk of infection, longer hospital stay and readmission within 30 days [9].
However, it is worth mentioning that we also came across a few studies that either showed mixed results or did not find any conclusive relationship between HbA1c and SSI [11]. In a 2019 study, the probability of developing surgical complications or readmission with respect to gender, age, or presence of hypertension was found to be nonsignificant [9]. Furthermore, no association between the type of wound and HbA1c levels was found in our study. In our study, we used HbA1c rather than fasting blood sugar (FBS) or RBS levels because HbA1c is a more accurate marker of glycemic control in comparison to RBS or FBS; levels of which can rise due to stress hyperglycemia in pre-operative patients. Furthermore, even though candidates in our study had normal FBS/RBS levels before surgery, the group with HbA1c levels greater than 6.5% had a higher chance of developing SSI.
HbA1c is a marker of variation in blood glucose levels. Fluctuation in glucose levels causes an increased production of free radicals which generates increased oxidative stress. In a study, it was established that acute and chronic fluctuations in blood glucose levels corresponded with elevated oxidative stress markers namely: urine 8-isoprostaglandin F2α, serum thiobarbituric acid-reactive substance, and serum 8hydroxydeoxyguanosine [9,12]. In addition, variation in glucose levels also induced a rise in serum level of the chronic inflammatory marker (c-reactive protein), absence of antioxidants, and a greater incidence of microvascular complications [12][13][14][15]. Other than increasing oxidative stress and inflammatory markers, hyperglycemia also harms the immune system by disrupting chemotaxis, phagocytosis, and overproduction of free fatty acids [16]. All these factors altogether may lead to an increased risk of developing SSI.
Our study had some limitations, it was a single-center study, and patients had different procedures, different types, and sites of wounds. We used a single cut-off value; a different cut-off might have yielded a different result. The use of HbA1c rather than FBS or RBS brought accuracy to our results as it eliminated the probability of patients having stress hyperglycemia. We also made sure our results were not affected by surgeon skill and the environment by choosing patients operated on by the same surgeon in the same operating room.
Conclusions
Correlating the results of our study with the preexisting literature, we conclude that there is a positive relationship between raised HbA1c levels and the development of SSI. Furthermore, our study emphasizes the importance of using HbA1c levels, rather than blood glucose levels, as a more accurate predictor of glycemic control in pre-operative patients. Since SSI plays a significant role in patient morbidity, mortality, and financial burden, it is imperative that surgeons check HbA1c levels before selecting patients for elective surgeries, especially in low-income countries where the healthcare burden is already huge.
Additional Information
Disclosures | 2022-07-30T15:04:25.964Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "12a05a765be794c44a85616e2ca72fd60f9d6feb",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/105749-association-between-levels-of-pre-operative-glycosylated-hemoglobin-and-post-operative-surgical-site-infections-after-elective-surgery-in-a-low-income-country.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39c2bbe2543486469585047b40d861ff4055e5a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
213610207 | pes2o/s2orc | v3-fos-license | Optimization of Peer-to-Peer Power Trading in a Microgrid with Distributed PV and Battery Energy Storage Systems
Integrating distributed generation (DG) into the main grid is a challenge for the safety and stability of the grid. The application of peer-to-peer (P2P) technology in microgrids with distributed generation is expected to facilitate increased self-consumption of distributed and renewable energy, and the rise of prosumers’ monetary benefits. A P2P energy trading model in microgrids with photovoltaic (PV) distributed generation and battery energy storage systems (BESSs) is proposed in this paper. We additionally designed a P2P electricity trading mechanism based on coalition game theory. A simulation framework of this model is presented which assumed a local community with 30 households under comprehensive constraints encompassing a customer load profile, PV system, BESSs, market signals including feed-in tariffs, and retail prices. Firstly, individual customers can post orders (purchasing orders or selling orders) and exchange information in a P2P energy trading market. Secondly, the microgrid operator can validate the orders based on how to achieve the minimum overall energy consumption in microgrids and set reasonable real-time purchasing and selling prices for P2P energy transactions. Thirdly, the orders can be automatically conducted and completed at the designed optimal price. This mechanism can be a practical solution motivating individual customers to participate in P2P electricity trading, assist with electricity cost reduction, benefit from electricity supply increases, and help the grid operators to make the most economically and socially friendly decisions.
Introduction
Deterioration of the global environment and the depletion of fossil fuel energy have led to growing attention being focused on distributed and renewable forms of generation, like solar energy and wind power. Within a global context of rapid development of solar photovoltaic (PV), Australia has the leading position in the penetration of PV panels for residential users [1]. Globally, due to the shift of energy supply technology and patterns, energy markets are also undergoing a shift toward a decentralized and digital economy, while peer-to-peer (P2P) energy trading for distributed generation (DG) is finding its way into the energy sector. The application of P2P makes it possible for individual consumers to become prosumers and to share their excess energy with neighbors [2][3][4]. They can achieve a win-win situation by looking for a reasonable trading price and by making a deal in a seamless way [5,6]. P2P energy trading empowers customers to trade electricity at a P2P marginal price that is cheaper than the time of use (TOU) price and higher than feed-in tariffs (FIT), respectively, which provides attractive savings for buyers and profit for sellers. In particular, it will support local consumption of distributed energy while reducing fossil energy consumption and carbon dioxide emissions.
In 2017, the Chinese government issued a guideline named A Pilot Approach to Facilitate the Construction of the Grid-Connected Microgrid [7]. Its conclusion can be summarized into the following four statements: (1) Generally, the maximum annual exchange capacity between a microgrid and the main grid is 50% of the total amount of annual electricity consumption in microgrids. (2) The microgrid operator (or associated agency) is responsible for the operation and maintenance management of the microgrid, the internal and external power balance, and electricity exchanges. (3) It is important to encourage the establishment of a price system for self-negotiation between any two parties where at least one uses a microgrid, as well as a trading mechanism for energy markets that covers various energy resources like cold energy, hot energy, and electricity. (4) Complying with market rule, the microgrid operator needs to take charge of electricity exchanges and the corresponding transmission and distribution costs. As a result, P2P as a new mechanism allows energy from one prosumer to be directly sold to another prosumer within the network at a negotiated price without any influence of a provincial transaction center. At the same time, the network served as a whole can sign a contract with the provincial transaction center for any external generation that is produced in the wholesale market. The microgrid operator is in charge of maintaining the distributed energy balance and managing consumption rather than the main grid operators, whose job it is to check the exchange capacity of the also controllable tie line. In addition, pressure to approve intermediaries' cost can be eased by including distribution costs within the microgrid into P2P electricity prices. Hence, P2P trading provides much promise both for academic circles and for the energy ecosystem.
The state-of-the-art literature relating to P2P trading in the energy sector is discussed in this paper and they can be classified into two main categories: (a) applications of P2P trading in the energy sector and (b) a pricing mechanism for a P2P energy trading market. In the first category, P2P energy transactions require advanced technical support, which is usually obtained by using online services based on information and communication technology [8]. A hierarchical system architecture has been proposed to identify and classify the key elements and technologies involved in P2P energy trading. In addition, a P2P energy trading platform has been presented and illustrated in a simulation whose results show that P2P energy trading can help with network and congestion management and allow more DG to be more widely shared within a community [9]. A two-level aggregation control technology for P2P energy trading in microgrids in communities was devised, which empowers users to follow their orders through a third party, i.e., an energy sharing coordinator. In the first phase, a constrained nonlinear programming (CNLP) optimization with a rolling horizon was used to minimize the community's energy costs. In the second phase, the control setting point was updated with a change of real-time measurement results, and was operated based on a price mechanism. It was found that a P2P energy trading model helps households to obtain 30% lower energy costs than the traditional energy trading model [10]. An architecture model for the design and interoperability of P2P energy trading components in microgrids was also proposed and simulated based on game theory, involving a specific customer-to-customer business model being introduced into a reference grid-connected microgrid and a core component of the bidding system, Elecbay, being presented [11]. Ottesen et al. proposed an aggregator exclusive available to the prosumer market, which would be intelligent enough to make electricity trading decisions on behalf of prosumers. The flexibility and stochastic planning of bidding in the day-ahead market with the help of an aggregator was investigated and the results showed that the system flexibility increases with the presence of an aggregator [12].
In the second category of the state-of-the-art literature, a novel dynamic pricing method was proposed to promote market-oriented decentralized energy transactions and provide the most economical benefits for owners with distributed generation [13]. A P2P energy trading mechanism for energy auctions was presented, which ensures the fairness and efficiency of energy auction by establishing a market design mechanism for energy auction. The Bayesian game strategy is used to develop an optimal bidding strategy for distributed energy owners, so that each participant can obtain an efficient and economic bidding price. The results show that this model can maximize the utility of typical distribution network users [14,15]. Tushar W et al. introduced how motivational psychology encourages distributed energy owners to actively participate in P2P energy trading, and proposed a game theory P2P energy trading scheme [16]. A P2P trading mechanism was proposed, in which the decision-making process is modeled by game theory and Shapley value. The game theory approach provides a distributed energy management solution for individual decision-making with respect to the optimality and fairness among consumers. Compared with the existing trading mechanism, the Shapley value trading mechanism helped P2P energy trading achieve better optimization and fairness [17]. A synchronous game theory based on P2P energy trading in a day-ahead market was presented, which will empower participants, without limitation to its number, to decide the transaction price, and VaR is introduced, a risk analysis tool to reduce the risk of transaction failure, maximize the market interests, and increase the success rate of energy trading [18]. Chen K et al. regarded the continuous double auction (CDA) market as a promising mechanism for a P2P market that empowers interactions among prosumers and consumers in distribution grids. For prosumers, achieving optimal operations and maximize profits necessitates acting as price makers and simultaneously optimizing their operations and trading strategies [19].
Existing literature on P2P energy trading has contributed to the applications of the technology and to transaction price mechanism modeling. However, there is an important gap in user-centered P2P energy trading. Therefore, a P2P energy trading mechanism based on coalition game theory was devised to bridge this gap. The focus of the coalition game is how to motivate independent decision makers (users) to act together in alignment to elevate their status (or utility) in the game. The coalition's stability rests on two conditions: (1) The revenue of the grand coalition surpasses the sum of the benefits that each member would achieve if they were allowed to self-operate. (2) The revenue allocation of the grand coalition should satisfy the Pareto improvement property, which means each member will receive a revenue no less than the benefits obtained under independent action. Among the members, the transparent sharing of information with each other and the enforcement of signed agreements are the keys to maintaining the stability of the coalition.
The main contribution of this paper lies in the following aspects: (1) A peer-to-peer (P2P) energy trading mechanism in a microgrid with distributed photovoltaic generation and battery energy storage systems (BESSs) was devised, and users in the microgrid were reasonably classified. (2) A mixed integer linear programming (MILP) method based on YALMIP [20] (a modular modelling language for defining and solving advanced optimization problems, written in the MATLAB language) is proposed to optimize the decision-making of P2P electricity transactions considering a large number of users with distributed photovoltaic generation and BESSs. The proposed model respects variable real-world constraints, including P2P power trading, microgrid users, customer load profiles, the PV system, BESSs, and market signals. (3) Based on coalition game theory, this paper improves the mid-market rate price model and introduces the weight variables α and β. By changing the values of α and β, the trajectory of the P2P transaction price can be obtained, which turns the P2P power transaction price from static to dynamic, more in line with reality.
The rest of this paper is organized as follows: Section 2 presents the structure of the microgrid P2P energy trading system with distributed photovoltaic generation and battery energy storage systems (BESSs), and proposes a mathematical model. Section 3 describes how to design the P2P power trading mechanism and trading process in this paper. Section 4 discusses and analyzes the results. In Section 5, some concluding remarks are drawn with future research.
Operational Model Analysis and Mathematical Model Construction of P2P Power Trading
In this section, the structure of the power trading system with PV only based on peer-to-peer (P2P) technology is presented. Among this system, PV, BESSs, microgrid scheduling, and balance and management systems are considered. Furthermore, the mathematical model of each component and the design of the P2P power trading mechanism will be demonstrated.
The Operation Model of P2P Power Trading with PV
In light of the status quo of P2P power trading, we propose a P2P power trading system in the microgrid with PV with respect to a single form of energy, i.e., electricity. As shown in Figure 1, energy flow refers to the flow of power between buyers and sellers, and cash flow illustrates the cost and income of the electricity trade.
The main market subjects in the microgrid constructed in this paper mainly include the following types:

Consumers: users without PV can be classified into two major classes: consumers with BESSs (represented by W) and consumers without BESSs (represented by M). By purchasing electricity at a lower price in P2P power trading, consumers can reduce electricity expenditure.
Prosumers: users with PV can be classified into two categories: prosumers with BESSs (represented by N) and prosumers without BESSs (represented by R). Receiving considerable benefit by selling surplus electricity to nearby users or charging energy into storage will motivate users to participate in P2P power trading.
The microgrid operator: a third party is responsible for the operation and maintenance of the microgrid and the settlement of the transaction fees. As with the implementation of the P2P energy system, organizing and supervising the trading platform is the operator's responsibility as well.
In this system, PV powered by the sun can be the energy source of users. Surplus electricity will first be stored in the battery or sold to users in the microgrid, then traded with the grid. On the contrary, purchasing electricity from other microgrid users will be consumers' first choice as well instead of the grid. The P2P energy trading system ensures the users of the microgrid enjoy priority in electricity transactions. The microgrid scheduling and balance management system is operated by the microgrid operator, and what is more, the microgrid operator is also responsible for energy transactions being carried out in an orderly manner, purchase and sale expenses settlement, and interaction with the grid. Its working mechanism is to purchase the surplus electricity of the microgrid users at the P2P transaction price, and then sell it to other microgrid users. If there is surplus electricity, it will sell it to the main grid at the FIT, as opposed to purchasing electricity from the grid at TOU price when the supply of electricity is insufficient. Playing as a third party in the microgrid, the microgrid operator signs a contract with users, offers a real-time monitoring service via a dispatching and balancing management system to execute the contract, and guarantees the priority of the microgrid users in electricity transactions. The microgrid operator functions like an intelligent contract but it is different. When the microgrid is in short supply, the microgrid operator can make a deal with the grid to satisfy the user's demand. Furthermore, the microgrid operator also provides the users with technical support, such as operation and maintenance service, energy generation prediction, and the like. Therefore, the implementation of the P2P energy transaction in the microgrid can not only increase the effectiveness of energy utilization in the microgrid, but also significantly promote the distributed energy consumption and cut down the electricity costs of the users.
The traditional operation mode of microgrid usually operates in a way that the distributed generation in microgrids is served first to meet the needs of individual customers, and then to trade the surplus electricity, if any, with the grid at the FIT of new energy. Nevertheless, the operation mode of the microgrid integrating peer-to-peer (P2P) technology into power trading will empower individual customers of the microgrid to trade with each other in real time and then sell the surplus power to microgrid at the P2P trading price. The application of P2P technology in energy transactions in microgrids, which enables decentralized transactions, will help with better network and congestion management, allowing a more efficient trading process, assisting with the renewable generation intermittency problem and bringing more social capital into the energy sector.
Mathematical Model
In this paper, a mixed integer linear programming (MILP) model is proposed to optimize operational decisions with respect to a large number of distributed resources and participation in P2P energy trading. In the proposed model, the P2P energy trading mechanism and the real-world limitations on distributed resources, especially PV and BESSs, as well as the fluctuation of the market electricity price, are taken into account. The objective function is to minimize the total energy expenditure of all individual customers in the microgrid. In the final step, the model is solved with MATLAB R2016b and CPLEX Optimization Studio v12.8.
The Objective Function
This paper assesses the monetary benefit of P2P energy trading in the microgrid from the perspective of the microgrid operator. Accordingly, we formulate the objective function by looking for the minimum total energy expenditure of all individual customers in the microgrid.
The inputs for the model include the sets of individual customers and their demand, physical constraints of photovoltaic panels and energy storage, and market price signals. Decision variables include real-world PV generation, actual user electricity consumption, battery charge and discharge, P2P power transaction price, electricity interacting with the main network, and so on.
where C btg and C btp are the cost of purchasing electricity from the grid and P2P energy market, respectively. C pv and C ess represent the cost of maintaining PV and BESSs, respectively. C stg represents the benefit to the grid from selling electricity and C stp is the benefit to individual customers from selling electricity in the P2P energy market.
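The displayed objective function itself did not survive extraction. A hedged reconstruction, consistent with the cost and benefit terms defined above and with the stated goal of minimizing the total energy expenditure of all individual customers (the exact summation over users i and time slots t is an assumption), is:

min F = C btg + C btp + C pv + C ess − C stg − C stp

where each term aggregates the corresponding cost (added) or revenue (subtracted) over all users and all time slots of the scheduling horizon.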
(1) The cost and benefit of interacting with the main grid: where P g sell and P g buy are the selling and purchasing prices of electricity of the grid. Q gtu,i,t and Q gtb,i,t are the energy bought from the grid for household use and for BESSs, respectively. Q pvtg,i,t represents the energy sold to the grid from PV, and Q btg,i,t is the energy sold to the grid from BESSs discharge.
(2) The cost and benefit of individual customers participating in P2P energy trading: where P P2P buy represents the purchasing price of electricity in the microgrid. Q pvtm,i,t is the energy sold to the microgrid generated by PV, and Q btm,i,t is the energy sold to the microgrid discharged from BESSs.
where P P2P sell represents the selling price of electricity in the P2P energy trading market. Q mtu,i,t and Q mtb,i,t are the energy bought from the P2P energy trading market by individual customers for daily life and for BESSs, respectively.

(3) The cost of maintaining PV: where R PV is the operation and maintenance cost of PV per unit of electricity. Q pvtu,i,t and Q pvtb,i,t represent the energy generated by PV for household use and for BESSs.

(4) The cost of maintaining BESSs: where R ess is the operation and maintenance cost of BESSs per unit of electricity.
Demand Constraint
Depending on the type of individual customers, constraints of customers' demand can be classified into four categories.
(1) Neither PV nor BESSs; (2) PV but no BESSs; (3) BESSs but no PV; (4) Both PV and BESSs. The constraints (8)-(11) ensure that the needs of each class of users can be met. Individual customers in electricity deficiency without distributed generation will first purchase electricity from the P2P energy trading market and then from the grid. Households with PV and BESSs will first participate in the P2P market to sell surplus electricity to peers in the microgrid.
The Output Constraint of PV
Affected by solar intensity and temperature, photovoltaic generation is intrinsically intermittent and fluctuating. According to research [21,22], solar intensity approximately follows a beta distribution during a certain period of time, and the probability density formulation is: where Γ(•) is the gamma function, and G and G m are the real-world and maximum solar intensity during a certain period of time. The shape parameters α and β can be obtained by making use of the average solar intensity and its variance during this period of time.
where G u is the average solar intensity during a certain period of time and σ 2 represents variance of solar intensity.
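The displayed density and shape-parameter equations did not survive extraction. A hedged reconstruction of the beta-distribution model described above, written for the normalized intensity G/G m (the normalization and the moment-matching form are assumptions consistent with the symbol definitions), is:

f(G) = [Γ(α + β)/(Γ(α)·Γ(β))] · (G/G m)^(α−1) · (1 − G/G m)^(β−1), 0 ≤ G ≤ G m

with the shape parameters obtained by moment matching from μ = G u/G m and σ n² = σ²/G m²:

β = (1 − μ)·[μ·(1 − μ)/σ n² − 1], α = μ·β/(1 − μ).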
Measuring the temperature of the photovoltaic cell directly is not that easy due to the technical limitations, therefore, for the packaged solar cell assembly, the working temperature of the photovoltaic cell can be estimated by measuring the ambient temperature according to function (15).
where T w and T e are the working and ambient temperatures of photovoltaic cells, respectively. The output of the PV can be expressed as follows [23]: where G s is the solar intensity under standard test conditions and G s = 1000 W/m². P s is the maximum output under the standard test. K is the power temperature coefficient. T r represents the reference temperature, and T r = 25 °C. As shown in Figure 2, the output of the PV is simulated according to the historical solar intensity and temperature data. Depending on the classes of individual customers, the output of the PV can be classified into two categories.
(2) Both PV and BESSs: The electricity generated by the PV can be used for households, BESSs, and trade. The constraints (17) and (18) ensure that the energy generated by all PV systems will not exceed the maximum predicted power generation Q * pv,i,t .
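The PV output equation referred to above (and the cell-temperature relation of function (15)) did not survive extraction. A commonly used form that is consistent with the symbol definitions given, offered here as a hedged reconstruction rather than the authors' exact equations, is:

P pv = P s · (G/G s) · [1 + K·(T w − T r)]

with the working temperature T w estimated from the ambient temperature T e through a linear relation of the form T w = T e + c·G, where c is an empirically fitted coefficient whose value is not recoverable from the text.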
The Constraint of Battery Energy Storage Systems
During a period of time, the storage capacity and physical constraints of the BESSs can be modeled by a simplified linear function and described as follows: where Q B,i,t is the storage capacity of the BESSs at hour t and η sd,i,t represents the self-discharge rate, which is treated as a constant to simplify the computational process [10]. P cha,i,t and P diss,i,t are the charging and discharging power. SOC i,t is the state of charge of the BESSs. SOC max,i and SOC min,i are the maximum and minimum states of charge, which depend on the properties of the BESSs. U cha,i,t and U diss,i,t represent the charging and discharging states of the BESSs with opposite binary values (0 or 1), since the battery cannot be charged and discharged at the same time.
These constraint functions are formulated by considering the state of the battery, the technical limitations on battery capacity, and charging/discharge operation. At a certain moment, the battery can either be charged or discharged, and the maximum storage capacity cannot go beyond its rated capacity. However, this paper highlights the application of P2P in energy trading, so only the basic model of the battery was considered in this paper.
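The displayed storage equations did not survive extraction. A typical linearized form consistent with the description above (the charging/discharging efficiencies η ch and η dis, the time step Δt, the rated capacity Q B,i rated, and the power limits P cha max and P diss max are assumed symbols that are not defined in the text) is:

Q B,i,t = (1 − η sd,i,t)·Q B,i,t−1 + η ch·P cha,i,t·Δt − P diss,i,t·Δt/η dis
SOC i,t = Q B,i,t/Q B,i rated, SOC min,i ≤ SOC i,t ≤ SOC max,i
0 ≤ P cha,i,t ≤ U cha,i,t·P cha max, 0 ≤ P diss,i,t ≤ U diss,i,t·P diss max
U cha,i,t + U diss,i,t ≤ 1, U cha,i,t, U diss,i,t ∈ {0, 1}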
Coalition Game
The P2P energy trading mechanism is designed to encourage the participation in the decentralized energy market, help with the renewable generation intermittency problem, cut down electricity expenditure, and elevate the monetary benefit of distributed generation owners. The success of P2P trading depends on the design of a reasonable trading mechanism. Many pieces of literature have made relevant studies, such as direct method [27], indirect method [2], and FIT [28]. In direct method, a single distributed resource is controlled and managed by an aggregator, whilst indirect method refers to the strategy in which a central organization sends a signal to the distributed generation owners to affect their electricity consumption and power generation decision. Under the FIT mechanism, however, the engagement of prosumers is more unequivocal for the producers to sell surplus electricity directly to the main grid and purchase electricity from the main grid if needed. We cannot deny the contribution of this research in P2P application, but there is still a key gap in the practical P2P mechanism centered on prosumers.
Therefore, an improved midmarket rate (MMR) model based on the coalition game theory is proposed to bridge the gap between the research and practical application in the P2P energy trading mechanism in this paper. The principle of the MMR [29] is that the reference price of P2P transactions is set as the mean value of the electricity purchasing and selling price of the main grid, and the purchasing and selling price of electricity in the P2P market is based on three scenarios.
(1) The total energy generation of the prosumers is equal to the overall demand in the microgrid. In this case, the P2P trading price equals the average value of the electricity purchasing and selling prices of the main grid.

(2) The total energy generation of the prosumers is greater than the overall demand in the microgrid. In this situation, the P2P purchasing price of electricity equals the average value of the electricity purchasing and selling prices of the main grid, and the P2P selling price is set by the principle of the balance between the cost and the income of the microgrid operator.

(3) The total energy generation of the prosumers is smaller than the overall demand in the microgrid, in which case the P2P selling price of electricity is equal to the average value of the electricity purchasing and selling prices of the main grid, and the P2P purchasing price is set by the principle of the balance between the cost and the income of the microgrid operator.

The P2P trading process then proceeds in four stages. In the stage of customer demands, users forecast the electricity demand and energy generation at time slot t based on reality, and form the demand information.
In the stage of the initiation of the transaction, if the user's net load (load demand minus energy generation) is positive, a purchase order will correspondingly be generated by the P2P energy trading platform; otherwise, an electricity sale order will be generated. After that, the orders will be sent to the microgrid through the distributed network.
In the stage of the security check, firstly, the orders will be verified by the microgrid scheduling and balance management system under the constraints of energy balance, the real-time electricity price of the main grid, and the minimum operational cost of the system. Secondly, the optimal P2P purchasing and selling prices will be calculated by MILP optimization, and published to the users.
In the stage of execution of trading, after the orders have been completed, for each participant, the microgrid operator will settle charges of internal orders (orders within the microgrid) at designated P2P purchasing and selling prices and external orders (orders with main grid) at clearing prices.
The P2P market mechanism mentioned above with little resilience has a limited effect on encouraging participation in P2P energy trading. The focus of a coalition game is how to motivate independent decision makers to act together as a whole to elevate their status (or utility) in the game. Game theory is a mathematical tool used to analyze coping strategies in competitive environments, in which the behavior of one player depends on the behavior of the other players [30]. Game theory can be roughly divided into two categories: noncooperative games and cooperative games. There are two bases for the stability of the coalition: (1) The overall revenue that the grand coalition can achieve is greater than the sum of the revenue when each member operates independently. (2) For each member, the allocation of revenue should abide by the principle of Pareto improvement. In other words, the revenue that each member may attain is no less than that of not joining the alliance. The characteristics of the coalition game ensure those conditions to be satisfied, which means members of the alliance exchange information and the signed agreement must be enforced.
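In standard coalition-game notation, the two stability conditions above can be written compactly (a hedged formalization; the characteristic function v and the allocation x are not introduced explicitly in the text):

v(N) ≥ Σ i∈N v({i})      (the grand coalition N earns at least as much as independent operation)
x i ≥ v({i}) for all i ∈ N, with Σ i∈N x i = v(N)      (each member receives no less than it could earn alone)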
The Design of P2P Trading Mechanisms
According to the motivational psychology of users to participate in microgrid P2P trading, the P2P trading prices in microgrids fall in the range of P g buy , P g sell , which can be converted to linear weighting, and the red line in Figure 4 illustrates the trajectory of P2P trading prices.
where P g buy and P g sell are the purchasing and selling price of electricity of the main grid. P P2P buy and P P2P sell are the purchasing and selling price of electricity in the P2P market. Q n (t) is the energy demand of the user n at time slot t and Q n pv (t) is the energy generated by PV n at time slot t.
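The weighted price itself is not shown in the extracted text. One natural reading, consistent with the weight variables α and β introduced in the contributions and with the price range [P g buy, P g sell] (the assignment of α and β to the two grid prices is an assumption), is:

P P2P = α·P g sell + β·P g buy, α + β = 1, 0 ≤ α, β ≤ 1

so that the reference P2P price always lies between the grid feed-in price and the grid retail price.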
The P2P energy trading mechanism is designed to facilitate the consumption of distributed resources in local areas, keep the revenue of distributed generation in the local area, and give social capital a boost in distributed generation and microgrids.
The P2P energy trading price is set in reference to P P2P and the principle of balance between the income and the cost of the microgrid operator. In order to present the pricing process of the P2P market more specifically, we divide the operation of the microgrid into three possible scenarios and analyze them respectively.

Scenario 1: production equals demand, as shown in Figure 5. In this case, the P2P trading price equals the reference price, i.e., both the purchasing and the selling price in the P2P market are set to P P2P.

Scenario 2: production exceeds demand, as shown in Figure 6. In this case, Q n (t) > 0 and prosumers will be price takers for the surplus energy that has to be exported to the grid at P g buy to maintain the energy balance in the microgrid. Therefore, in the microgrid, users will be the price takers, which means the P2P energy purchasing price P P2P buy = P P2P, and P P2P sell is set based on the equal value of cost and benefit for the microgrid operator.

Scenario 3: demand exceeds production, as shown in Figure 7. In this case, Q n (t) < 0 and users in energy deficiency will be price takers for the necessary amount of energy that has to be imported from the main grid at P g sell to ensure the demand is met in the microgrid. Therefore, in the microgrid, prosumers will be the price takers, which means the P2P energy selling price P P2P sell = P P2P, and P P2P buy is set based on the equal value of cost and benefit for the microgrid operator.
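As a concrete illustration of the balance principle described above, the following MATLAB sketch computes the P2P purchasing and selling prices for the three scenarios. It is a minimal sketch, not the authors' code: the convex-combination form of the reference price and the exact operator cost-revenue balance algebra are assumptions consistent with, but not stated verbatim in, the text.

function [p_buy, p_sell] = p2p_prices(gen, dem, pg_sell, pg_buy, alpha, beta)
% Sketch of the three-scenario pricing rule described above (not the authors' code).
% gen, dem    : total prosumer generation and total consumer demand in the P2P market
% pg_sell     : grid selling (retail/TOU) price, the upper bound of the P2P price range
% pg_buy      : grid purchasing (FIT) price, the lower bound of the P2P price range
% alpha, beta : weights with alpha + beta = 1 (the weighting form is an assumption)

p_ref = alpha*pg_sell + beta*pg_buy;          % assumed reference price P_P2P

if abs(gen - dem) < 1e-9
    % Scenario 1: production equals demand -> both sides trade at the reference price.
    p_buy = p_ref;  p_sell = p_ref;
elseif gen > dem
    % Scenario 2: surplus exported to the grid at pg_buy; consumers pay the reference price.
    % Operator balance (cost = revenue): p_sell*gen = p_ref*dem + pg_buy*(gen - dem).
    p_buy  = p_ref;
    p_sell = (p_ref*dem + pg_buy*(gen - dem)) / gen;
else
    % Scenario 3: deficit imported from the grid at pg_sell; prosumers receive the reference price.
    % Operator balance: p_buy*dem = p_ref*gen + pg_sell*(dem - gen).
    p_sell = p_ref;
    p_buy  = (p_ref*gen + pg_sell*(dem - gen)) / dem;
end
end

Under this sketch, the P2P purchasing price never exceeds the grid retail price and the P2P selling price never falls below the FIT, which is consistent with the behaviour reported later for Figure 9a,b.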
Assumptions
Based on the coalition game theory, we make three assumptions for the P2P trading mechanism: (1) All participants are rational decision makers.
(2) All participants must comply with the internal agreement, and members are not allowed to withdraw arbitrarily from the grand coalition. (3) All participants enjoy priority in purchasing energy from the microgrid and selling electricity to peers in the microgrid.
In Figure 1, prosumers will sell surplus electricity to the microgrid operator and customers will buy energy from the microgrid operator. According to the real-time monitoring data of the microgrid scheduling and balance management system, the microgrid operator will balance the internal system by exporting surplus electricity to the main grid or importing deficient energy from the main grid. The P2P trading price is set based on the principle that cost is equal to revenue. Meanwhile, the main monetary benefit of the microgrid operator comes from registration fees for users and service charges, including PV maintenance, PV output prediction, etc.
Case Study
To show the feasibility of the proposed P2P energy trading system, we tested it on an artificial scenario containing 30 households: 5 residential customers without PV and BESSs, 5 with BESSs only, 10 with PV only, and 10 with both PV and BESSs. For each residential customer in the microgrid, it was assumed that the maximum demand is 80 kW. The gross capacities of the solar panels and of the BESSs (identical specifications were assumed for all installations) were taken to be 115 kW and 50 kW, respectively. The real-time energy usage of residents can be monitored in the scheduling and balance management system of the microgrid. The value of the FIT price was assumed to be 0.381 ¥/kWh. The tiered selling price of electricity from the grid to residential customers is shown in Figure 8. In this paper, the mixed integer linear programming (MILP) method based on YALMIP was used to solve the model. YALMIP is a solution toolbox written in the MATLAB language. It can optimize the objective function by calling a variety of commercial optimization solvers, such as LPSOLVE, CPLEX, and GUROBI. In this article, we wrote the program code through the YALMIP toolbox and called CPLEX in the MATLAB environment to optimize the solution. Finally, we obtained the optimal 24 h P2P purchase and sale prices in the microgrid, as shown in Figure 9a,b, and the minimum total operating net cost of the microgrid fluctuated with the values of the parameters α and β, as shown in Figure 10.
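To make the solution procedure concrete, the following is a minimal YALMIP sketch of a cost-minimizing dispatch for a single household over a 24 h horizon. It is an illustration only: the variable names, the simplified battery model, and the demand, PV, and price data are assumptions, not the authors' actual code; the full model of Section 2 covers all 30 households and the complete constraint set, and the P2P prices there follow from the pricing rule of Section 3 rather than being fixed inputs.

% Minimal YALMIP/CPLEX sketch (assumed variable names and illustrative data only).
T   = 24;                                                        % hourly time slots
dem = [2 2 2 2 2 2 3 4 5 5 4 4 4 4 5 5 6 7 7 6 5 4 3 2];        % assumed demand profile (kWh)
pv  = [0 0 0 0 0 0 1 3 5 7 8 8 7 5 3 1 0 0 0 0 0 0 0 0];        % assumed PV output (kWh)
p_tou = 0.6*ones(1,T); p_tou(9:21) = 1.0;                        % assumed grid retail (TOU) price
p_fit = 0.381*ones(1,T);                                         % FIT from the case study (yuan/kWh)
p_p2p_buy  = 0.7*ones(1,T);                                      % assumed P2P prices
p_p2p_sell = 0.5*ones(1,T);

g_buy  = sdpvar(1,T); g_sell = sdpvar(1,T);                      % energy traded with the main grid
m_buy  = sdpvar(1,T); m_sell = sdpvar(1,T);                      % energy traded in the P2P market
ch  = sdpvar(1,T); dis = sdpvar(1,T);                            % battery charging / discharging
u   = binvar(1,T);                                               % 1 while charging, 0 while discharging
soc = sdpvar(1,T+1);                                             % stored energy (kWh)

eta = 0.95; cap = 10; pmax = 3;                                  % assumed battery parameters
C = [g_buy >= 0, g_sell >= 0, m_buy >= 0, m_sell >= 0, ch >= 0, dis >= 0, ...
     ch <= pmax*u, dis <= pmax*(1 - u), ...                      % no simultaneous charge/discharge
     soc(1) == 0.5*cap, 0.1*cap <= soc <= 0.9*cap, ...           % state-of-charge limits
     soc(2:end) == soc(1:end-1) + eta*ch - dis/eta, ...          % simplified storage dynamics
     pv + dis + g_buy + m_buy == dem + ch + g_sell + m_sell];    % hourly energy balance

J = sum(p_tou.*g_buy + p_p2p_buy.*m_buy - p_fit.*g_sell - p_p2p_sell.*m_sell);
optimize(C, J, sdpsettings('solver','cplex','verbose',0));       % requires an installed MILP solver
disp(value(J));                                                  % minimum net cost for this household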
In Figure 9a, the line of the main grid sales price is always at the top, indicating that purchasing electricity from the P2P market is always cheaper than from the main grid no matter how the values of α and β change, which is conducive to encouraging microgrid users to participate in P2P transactions. In Figure 9b, the FIT price is fixed and the P2P selling price of electricity fluctuates with the values of α and β. When α = 0.6, β = 0.4, the lower limit of the optimal P2P selling price of electricity is higher than the FIT price, and it keeps rising. Therefore, when α = 0.6, β = 0.4, the minimum operating net cost of the microgrid can be achieved. In Figure 10, with the change of α and β, the net operating cost of the microgrid gradually increases, which is due to the increase in the P2P transaction price, and the growth rate of the internal cost of the microgrid is greater than that of the internal revenue. Simultaneously meeting the conditions of the minimum operating net cost of the microgrid and a P2P sales price higher than the FIT, we thus obtained the best P2P energy transaction price at α = 0.6, β = 0.4.
Analysis of the Overall Operation of the Microgrid
As shown in Figure 11, from 0:00 to 6:00, PV stops functioning and households can meet the demand by discharging the battery and importing electricity from the main grid. In the day, from 7:00 to 16:00, the PV starts to generate electricity, the outputs of which can satisfy the need for overall demand in the microgrid. To be exact, households with PV and BESSs will first meet their electricity demand, and then charge the battery to get revenue by selling surplus electricity to peers in the microgrid or exporting to the main grid. From 17:00 to 23:00, we can see that PV systems do not generate electricity and the energy resource for households comes from BESSs discharging or importing from the main grid. Figure 11 illustrates that energy transactions in the P2P market mainly occur from 7:00 to 16:00 of the day, of which the green part of the bar chart stands for energy purchased from the P2P market by households and the orange for energy sold in the P2P market by households. If PV outputs exceed the demand of prosumers, it will be more attractive for prosumers to first trade electricity in the P2P market, and then export surplus energy to the main grid cleared at retail price under the organization of the P2P operator, as the selling price of energy in the P2P market is relatively higher than that in the main grid and the purchasing price is quite the opposite. In general, the proposed P2P energy trading mechanism can achieve favorable savings for households with PV systems installed, decrease the electricity costs for users in the microgrid, and realize a win-win situation for prosumers and customers.
Figure 11. A 24 h operation state of the microgrid.

Operational Performance of Prosumers Analysis

The operational performance of prosumers with PV generation and BESSs is illustrated in Figure 12. The PV outputs from 7:00 to 17:00 can be divided into four categories according to the method of energy usage: (1) self-consumption by prosumers; (2) charging the battery; (3) trade in the P2P market; (4) export to the main grid. P2P energy trading for the distributed generation is a promising mechanism that enables the surplus energy of prosumers to be consumed in the local area, giving rise to the revenue of prosumers and helping with electricity cost reduction. As the Chinese government gradually reduces the subsidy for new energy feed-in tariffs, the investment of social capital in the development and construction of new energy is also gradually decreasing, which to some extent hinders the development of the new energy industry. Therefore, a scientific and reasonable P2P trading mode is conducive to encouraging social capital to invest in new energy construction.
For microgrid users with PV only, the actual operation is similar to that of microgrid users with PV and BESSs. The power generated by PV production first meets its own needs, and then is sold to the P2P market to satisfy the demand of peers in the microgrid. If there is any surplus power, it will be sold to the main grid through the microgrid operator. The difference between the households with PV and battery is that they do not need to charge the battery. When the electricity produced by PV cannot meet their own needs, the only way to obtain electricity is to purchase electricity from the P2P market or the main grid, while the households with battery can meet their own needs by discharging the battery to reduce electricity expenditure.
In Figure 13, the energy sources of households with PV and BESSs are depicted. From 0:00-6:00 and 16:00-23:00, PV stops functioning and households prefer to get electricity from battery discharge. If this is not enough, households will import electricity from the main grid with the help of the microgrid operator. From 7:00-15:00, we can see that households treat PV as the first energy resource, as PV starts producing energy in this period. Owing to this friendly mode of electricity usage combining PV and BESSs, the PV output abandonment rate is considerably reduced.
Operational Performance of Households Only with BESSs Analysis
Figure 14 demonstrates the energy consumption profile of households with BESSs only throughout the day. We can see that from 0:00-7:00 the primary energy resources of households are battery discharge and the main grid. During the day, from 8:00-16:00, households purchase electricity from the P2P market for self-consumption and battery charging. Due to the fact that the P2P price is more favorable than the retail price of the main grid, households prefer to purchase electricity from the P2P market rather than from the main grid. From 17:00-23:00, the P2P trading market is off because PV outputs are not available, and the energy consumed by households comes from battery discharge and the main grid.
Conclusions
In this paper, a P2P energy trading model in microgrids with PV and BESSs is proposed. Households could broadly be divided into four major classes. Among these, households with PV could share surplus electricity with other prosumers. Based on the coalition game theory, the trading price is set by considering the minimum overall energy consumption in microgrids under various scenarios. The model is illustrated in a simulation framework. As a result, this mechanism for P2P energy trading promises a lot for changing households' preference toward their own generation and self-consumption, reducing the dependence of the distributed generation on subsidies, encouraging the scale of prosumers both in the business and the residential sectors, assisting with the distributed and renewable generation intermittency problem, and enabling more efficient and effective network monitoring.
Looking at economic aspects, with the help of the P2P energy trading model, it will lower households' electricity expenditure and bring considerable income for prosumers. If the household is in electricity deficiency, the extra energy generated by PV and stored in the battery will be available, which will cut down the cost of purchasing electricity. The BESSs participating in P2P energy markets at peak hours will unlock more benefits for the owners of BESSs.
As for social benefits, P2P energy trading for the distributed generation is a mechanism that shifts the energy, delivering energy in a more flexible and decentralized way, enabling the surplus energy of prosumers to be consumed in the local area. It will bring more social capital to the energy sector, reduce state funds in power infrastructures, and facilitate the development of distributed and renewable energy. Hence, P2P energy trading will lead households to shift from consumers to prosumers. Correspondingly, it will increase the need for equipment for distributed energy generation and BESSs, assist with redundant capacity elimination, and indirectly give rise to the employment demand.
Note that this paper has some shortcomings: factors including the environmental benefits, the construction cost of distributed photovoltaics, and the investment recovery period are not considered. These factors will be taken into account in future research to improve the existing P2P energy trading model and to inspire the application of P2P technology in the energy sector.
Conflicts of Interest:
The authors declare no conflict of interest.
The effects of aminooxyacetic acid, benzyladenine, and 1-methylcyclopropene treatments on the post-production flower quality of potted carnation plants (Dianthus caryophyllus L. Clove Pink) were investigated considering ethylene production and antioxidant metabolism. Maximum plant longevity (17 days) was obtained using 70 ppb of 1-methylcyclopropene. As compared to control plants, ethylene production was significantly decreased by aminooxyacetic acid at concentrations over 100 mg L−1, benzyladenine at 20 or 30 mg L-1, and 1-methylcyclopropene at 70 and 140 ppb. A significant increase in 1-aminocyclopropane-1-carboxylic-acid concentration was observed in 1-methylcyclopropene treated plants compared with the control ones. On the other hand, decline in 1-aminocyclopropane-1-carboxylic-acid concentration was observed after using 100 or 150 mg L−1 of aminooxyacetic acid. Use of 1-methylcyclopropene (70 or 140 ppb), aminooxyacetic acid (100 or 150 mg L−1), and benzyladenine (20 or 30 mg L−1) significantly decreased H2O2 concentration and superoxide radical when compared with the untreated control. Significant increases in activities of superoxide dismutase, catalase, and peroxidase were noticed when plants were treated with 70 ppb 1-methylcyclopropene. In conclusion, aminooxyacetic acid, benzyladenine (at high concentrations), and 1-methylcyclopropene treatments can be suitable candidates for extending plant longevity, maintaining the visual quality, and reducing the loss of flower anthocyanin.
INTRODUCTION
Carnations (Dianthus caryophyllus L.) have long been grown as a cut flower, while their presentation as a potted plant is more recent and follows the development of dwarf species (Banon et al., 2002).The quality of potted carnations is often lowered during transportation and by indoor environmental conditions.It has been shown that ethylene can reduce postharvest quality of potted plants (Reid and Wu, 1992).It is a gaseous plant hormone synthesized by the oxidation of 1-aminocyclopropane-1-carboxylic acid (ACC).
Cytokinins (such as benzyladenine, BA) have been particularly effective in delaying senescence of carnation flowers by inhibiting ethylene biosynthesis (Cook et al., 1985).Different Anthurium cultivars, dipped into 200 mg L -1 BA, presented a variable effect on vase life (Paull and Chantrachit, 2001).The efficient ethylene action inhibitor (1-MCP) competitively blocks the hormonal action of ethylene through its irreversible binding to the ethylene receptor (Sisler and Serek, 1997).The ACC content of senescing miniature rose flowers pre-treated with 1-MCP was clearly higher than in the untreated control (Muller et al., 2001).The aminooxyacetic acid (AOA) is also used for extending the vase life of ethylene sensitive cut flowers (Rattanawisalanona et al., 2003).It inhibited senescence and delayed flower abscission in Salvia splendens (Ferrante et al., 2006).
During senescence there is an overproduction of free radicals such as superoxide anion (O 2 − ), hydroxyl radicals (OH -), and hydrogen peroxide (H 2 O 2 ), which may cause damage and cell death.Superoxide dismutase (SOD) is the only enzyme capable of scavenging O 2 − , where as H 2 O 2 it can be directly degraded by catalase (CAT) or peroxidase (POD) in the presence of a reductant (Mates, 2000;Djanaguiraman et al., 2010).The decrease in O 2 − and H 2 O 2 contents in 1-MCPsprayed plants may be due to the lower levels of ethylene production and scavenging of O 2 − and H 2 O 2 by such enzymes (Larrigaudiere et al., 2004).
The present study was carried out to investigate the effect of BA, AOA, and 1-MCP on ethylene production and antioxidant metabolism in potted carnation, in order to provide basic information for future strategies with the aim of increasing post-production of plant longevity in this species.
MATERIAL AND METHODS
Plant material: Potted carnation (Dianthus caryophyllus L. Clove Pink) cuttings were received from a commercial grower in Pakdasht, Varamin, Iran.The cuttings with 60-70 mm height were placed in boxes filled with perlite.Root formation at 18 to 20°C took about four to five weeks.The rooted cuttings were transplanted into plastic pots (1.5 L) that were filled with a mixture of peat and perlite (3:1, v/v), and placed under greenhouse condition: from 20 to 25/10 to 15°C (day/night), and 50 to 60% relative humidity.Flowering occurred after five or six months.
Treatments with benzyladenine, aminooxyacetic acid, and 1-methylcyclopropene: Pots containing uniform and healthy plants were selected at flower bud stage.Treatment with BA, AOA, and 1-MCP was done immediately after the first flower buds were almost fully opened in each pot.Plants were sprayed with solutions containing 10, 20, or 30 mg L -1 of BA (Sigma-Aldrich, Tehran, Iran) and 50, 100, or 150 mg L -1 of AOA (Sigma-Aldrich, Tehran, Iran), with a fine mist to cover all surfaces of the flowers and foliage.After such technique, the plants were held in a greenhouse overnight to allow leaves to dry.For 1-MCP treatment, the plants were placed in 60-liters-plastic containers and sealed with polyethylene bags.Water was added to the powder of EthylBloc™ (Rohm and Hass Philadelphia, PA, USA) to evolve 1-MCP at a concentration of 70 or 140 ppb.After application of all treatments, the plants were placed in an evaluation room where the environmental conditions were: 20±2°C, Relative Humidity (RH) >60%, and 12 hours under photosynthetic photon flux density of 15 µmol m -2 s -1 , using cool-white fluorescent lamps.Such measurements were made after opening three flowers per plant.
Evaluation of plant longevity: Flower senescence was evaluated daily and defined when at least 50% of the flowers per pot were senesced.
Measurement of ethylene production:
In all treatments, flowers (one for each replication) were sealed in a 250 mL glass vessel for the measurement of ethylene production.After two hours, 1 mL of the gaseous mixture of each glass was injected in a gas chromatograph (Shimadzu Gas Chromatograph) equipped with an activated alumina column fitted in a flame ionization detector.Nitrogen was used as a carrier gas.The amount of ethylene was presented as nL g -1 FW h -1 .
Extraction and analysis of 1-aminocyclopropane-1carboxylic-acid:
For the ACC extraction, 2 g of crushed, frozen petal tissue was homogenized in 4 mL of 5% sulfosalicylic acid solution, and centrifuged for 10 minutes at 3,090 g n in a pre-cooled centrifuge at 4ºC.ACC was assayed as described by Bulens et al. (2011).Briefly, 0.4 mL of 10 mM HgCl 2 was added to 1.4 mL of the extract in a 9 mL vial and was immediately sealed with a serum cap.Approximately 0.2 mL of the NaOH-NaOCl mixture was injected into the vial through the serum cap.The mixture was mixed for five seconds and allowed to react during four minutes on ice.The sample was mixed again for five seconds in order to release all ethylene content into the vial headspace.Following the second mixing, a 1 mL gas sample was removed for ethylene determination by gas chromatography.
Oxidants: Hydrogen peroxide (H 2 O 2 ) levels in petals were measured by following the method described by Patterson et al. (1984).One mL of cold-acetone-extracted supernatant was added to 0.1 mL 20% titanium reagent (20% (w/v) TiCl 4 in 12.1 M HCl and 0.2 mL 17 M ammonia solution.The solution was centrifuged at 3,000 g n at 4ºC for ten minutes, and the supernatant was discarded.The pellet was dissolved in 3 mL of 1 M sulfuric acid.Absorbance of the solution was measured at 410 nm with a spectrophotometer.Absorbance values were calibrated to a standard curve generated with known concentrations of H 2 O 2 , which were expressed in nmol g −1 FW.
For superoxide anion (O 2 − ), petals were homogenized in ice cold sodium phosphate buffer (0.2 M, pH=7.2) containing diethyl dithiocarbamate.The homogenate was immediately centrifuged for one minute at 3,000 g n .In the supernatant, superoxide anion was measured by its capacity to reduce nitro blue tetrazolium (2.5×10 −4 M).Absorbance of the end product was measured at 540 nm with a spectrophotometer.Superoxide anion was expressed as a change in optical density (OD) in min −1 g −1 FW (Chaitanya and Naithani, 1994).
Anthocyanin determination: Petal tissues were extracted using methanol containing 1% HCL for 24 hours, and the absorbance was determined by a spectrophotometer at 520 to 700 nm (Paliyath et al., 2008).
Statistical analysis:
The experiment was carried out in a completely randomized design with four replications.Data were statistically analyzed using SAS software (Version 6.12).Mean comparisons to identify significant differences among treatments were performed using the least significant difference (LSD) at a 0.05 probability level.
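For reference, the least significant difference used for these comparisons is conventionally computed from the ANOVA error term; a standard form (the exact formula used by the authors is not shown and is assumed here) is:

LSD 0.05 = t 0.025, df e × sqrt(2·MSE/r)

where MSE is the error mean square from the analysis of variance, df e its degrees of freedom, and r = 4 the number of replications.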
Effect of 1-methylcyclopropene, aminooxyacetic acid, and benzyladenine on ethylene production:
The treatments with AOA (100 or 150 mg L -1 ), BA (20 or 30 mg L -1 ), and 1-MCP (70 or 140 ppb) significantly inhibited the flower ethylene production (Figure 2).Ethylene production increased sharply in the untreated flowers until the eighth day and decreased thereafter (Figure 3).Evaluation of the regression relationship between plant longevity and ethylene production showed a significant negative association between ethylene production and plant longevity, i.e., plant longevity decreases as ethylene production increases (Figure 4).
Changes in 1-aminocyclopropane-1-carboxylic-acid content of potted carnation:
The effect of different treatments on ACC content of flowers was significant (p<0.001).The highest one was observed with 70 ppb 1-MCP.In contrast, the application of AOA in high concentrations significantly decreased the ACC content in the flowers compared with 1-MCP and BA, and the untreated control (Figure 5).
Effect of 1-methylcyclopropene, aminooxyacetic acid, and benzyladenine on antioxidant metabolism:
The application of 1-MCP (70 or 140 ppb), AOA (100 or 150 mg L -1 ) and BA (20 or 30 mg L -1 ) significantly decreased H 2 O 2 content and superoxide radical (O 2 -) compared with the untreated control (Table 1).The lowest H 2 O 2 and superoxide radical contents were obtained in the treatment with 70 ppb 1-MCP.The highest SOD, CAT, and POD activities were found in the treatment with 1-MCP.Moreover, SOD, CAT, and POD activities were significantly higher in the AOA (100 or 150 mg L -1 ) and BA (20 or 30 mg L -1 ) treatments if compared with the untreated plants (Table 1).
DISCUSSION
The present study clearly indicated that treatment with 1-MCP (70 or 140 ppb), AOA (100 or 150 mg L-1), or BA (20 or 30 mg L-1) decreased ethylene production in flowers of the potted carnation 'Clove Pink' (Figure 2). These treatments also delayed the onset of wilting in the flowers, which agrees with the findings of Lerslerwong and Ketsa (2008) for Dendrobium flowers and Seglie et al. (2011) for cut Dianthus caryophyllus flowers.
The success in extending plant longevity (Figure 1) using BA could be attributed to its role in inhibiting ethylene biosynthesis. The present results are in agreement with those of Cook et al. (1985) and Han and Miller (2003). Hassanpour Asil and Karimi (2010) reported that spraying cut Lisianthus flowers with 25 or 50 mg L−1 BA delayed ethylene production and extended their vase life. AOA is a well-known ethylene biosynthesis inhibitor that blocks ACC synthase activity (Mensuali-Sodi et al., 2005). In miniature rose flowers, AOA used as a preservative resulted in increased accumulation of ACC and reduced ethylene production during senescence.
Plants possess a well-defined enzymatic antioxidant defense system to protect them against reactive oxygen species (ROS) such as H2O2, OH−, and O2− (Mates, 2000). Larrigaudiere et al. (2004) showed that ethylene is involved in ROS production. During senescence, there is an overproduction of free radicals that may cause damage and, consequently, cell death. In our study, low levels of O2− and H2O2 were recorded in the 0.5 mg L-1 treatment (Table 1). The decreases in O2− and H2O2 contents in 1-MCP-sprayed plants may be due to lower levels of ethylene production and to the scavenging of O2− and H2O2 by SOD and POD enzymes (Larrigaudiere et al., 2004). This study also showed that the 1-MCP-treated flowers had significantly higher SOD, CAT, and POD activities than the control, AOA, and BA treatments (Table 1), which is in accordance with the findings of Djanaguiraman et al. (2011) and Wang et al. (2009). Application of 1-MCP could inhibit a probable loss of membrane integrity (Yuan et al., 2010); therefore, lipid peroxidation could be regulated by ethylene.
Color fading and discoloration are important factors in determining the visual quality of flowers and, in many cases, they are the main criteria for assessing post-production quality (Amarjit, 2000). The major types of pigments contributing to flower color are carotenoids and anthocyanins (Amarjit, 2000). The improvement of petal color expression is at least partially due to the increase in anthocyanin contents. Ethylene is known to cause petal color fading. In the present study, treatments with 20 or 30 mg L-1 BA, 100 or 150 mg L-1 AOA, and 70 or 140 ppb 1-MCP reduced ethylene production in flowers. In some cases, anthocyanin degradation occurs due to changes in the vacuoles that decrease the stability of the pigments and cause chemical degradation of the anthocyanins, which contributes to the senescence process (Hershkovits et al., 2005). Pre-treatment with 1-MCP can protect cut flowers that are sensitive to ethylene (Rattanawisalanona et al., 2003). These results showed that the application of 1-MCP, an ethylene perception inhibitor, significantly decreased the ethylene production rate in flowers (Figures 2 and 3). The decreased ethylene production in 1-MCP-treated 'pink' pots may also have been due to the inhibition of autocatalytic ethylene production (Pathak et al., 2003).
The increase in the ACC content of the petals coincided closely with the increase in ethylene production by the flowers. The ACC content of the 'pink' plants pre-treated with 1-MCP (especially at the concentration of 70 ppb) was clearly higher than in the control. The accumulation of ACC in 1-MCP-treated flowers may indicate that the treatment reduces ACC oxidase activity and, to a lesser extent, ACC synthase activity. However, no accumulation of ACC was observed after the AOA pre-treatment (especially at the concentration of 150 mg L-1), which suggests that ACC synthase was inhibited by AOA. Muller et al. (2001) observed that treatment with 1-MCP resulted in reduced membrane damage in fresh products, which is an important factor involved in retarding bract discoloration (Hershkovits et al., 2005).
This study on the antioxidant metabolism of potted carnation petals can be understood not only as experimental evidence confirming the hypothesis of a link between ethylene and free radical generation in senescence, but also as a key to the development of adequate methods to prevent or delay deterioration in potted flowers.
Therefore, it can be concluded that AOA, BA (at high concentrations), and 1-MCP treatments may be good candidates for extending plant longevity and maintaining the visual quality of flowers in potted carnation plants. The treatments with AOA (100 or 150 mg L-1), BA (20 or 30 mg L-1), and 1-MCP prevented the increase in ethylene, O2−, and H2O2 production and increased the antioxidant enzyme activities measured in petals.
Figure 3. Flower ethylene production by control plants and plants treated with 70 ppb 1-MCP, which was the best treatment for increasing the longevity of potted carnation plants. Symbols are mean values (n=4) ± standard error.
Table 1. Activities of superoxide dismutase, peroxidase, catalase, and H2O2 and O2− concentrations in petals of potted carnation plants treated with benzyladenine at 10, 20, and 30 mg L-1, aminooxyacetic acid at 50, 100, and 150 mg L-1, and 1-methylcyclopropene at 70 and 140 ppb. SOD: superoxide dismutase; POD: peroxidase; CAT: catalase; BA: benzyladenine; AOA: aminooxyacetic acid; 1-MCP: 1-methylcyclopropene; LSD: least significant difference. | 2019-03-28T13:41:40.579Z | 2012-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "068a13c109c2f8cfc6f822ba6c2b493820cff42d",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/bjpp/a/BFqdYxSfQf6xDn4XxXxCV7y/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "068a13c109c2f8cfc6f822ba6c2b493820cff42d",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
210149073 | pes2o/s2orc | v3-fos-license | Uncertainty Analysis in Life Cycle Assessments Applied to Biorefineries Systems: A Critical Review of the Literature
The aim of the present article was to investigate the relationship between uncertainty analysis and life cycle assessment (LCA) in scientific publications that address the application of LCA in biorefinery systems. Uncertainty analysis and its relationship with environmental impact assessment studies, especially those that address the application of LCA, is a research topic that requires attention because of its possible influence on results. A reference base was defined using a systematic approach and bibliometric analysis, with 64 scientific publications extracted from the Web of Science (WoS) and Scopus® databases, which were analyzed using two computational tools: VOSviewer and SciMAT. This group of publications helped establish the correlation and evolution over the last 10 years of the three key themes: "uncertainty analysis," "LCA," and "biorefineries." The results of bibliometric analysis for the established framework pointed to a close and important relationship among these themes. The results were presented quantitatively and qualitatively, and the latter were visualized using infographics, co-occurrence networks, and strategic keyword diagrams. Although the study confirmed the relevance of uncertainty analysis in supporting LCA studies, it found that uncertainty analysis plays only a secondary role in the scientific studies analyzed. The study also presents the analysis and discussion of the main publications found in the scientific literature. Future studies should conduct a more in-depth analysis of advanced knowledge representation and reasoning strategies about uncertainty, such as probabilistic ontologies.
Introduction
The environmental performance of products and services has become an important factor in the new economic context that has been emergent in recent years. This context is guided by the principles of bioeconomies and their relationships with circular economies. A bioeconomy is defined as an economy whose basic pillars of production, such as materials, chemicals, and energy products, are derived from renewable sources, providing alternatives to fossil fuel-based products and energy (European Commision 2015; Embrapa 2018). Circular economy is defined by Saavedra et al. (2018) as the study of real-life nonlinear systems to facilitate efficient flows of materials, energy, work, and information, promoting circular flows that reduce environmental impacts and optimize resource efficiency. In circular economies, the economic and environmental value of materials is preserved as long as possible, keeping them in the economic system, whether by increasing the lifetime of the products derived from them or returning them to the system for reuse (den Hollander et al. 2017;Saavedra et al. 2018). The relevance of these two concepts is that both offer approaches and challenges for handling environmental impacts, such as climate change, land and water use, and toxic emissions, which are the target of discussions in forums around the world, such as the United Nations Climate Change Conference and Europe's Bioeconomy Strategy.
Several tools can be used to measure these impacts: life cycle assessment (LCA), strategic environmental assessment (SEA), environmental impact assessment (EIA), environmental risk assessment (ERA), cost-benefit analysis (CBA), material flow analysis (MFA), and the ecological footprint (EF) method. The studies of Finnveden et al. (2009) and Stavropoulos et al. (2016) showed that LCA is one of the most well-known approaches. According to Finnveden et al. (2009) and Klöpffer (2014), interest in LCA grew quickly in the 1990s, and during this period, two of the method's main features were presented: (1) comparison of production systems using a functional unit, e.g., a 1000 L container of a beverage and its transportation to the sales point, and (2) the possibility of cradle-to-grave analysis, including assessment of all the important phases of a product or service's life cycle. Life cycle assessment has been applied in several cases. Cherubini and Ulgiati (2010) presented a review of the assessment of potential environmental impacts of biomass-derived energy. You et al. (2012) determined the greenhouse gas (GHG) balance, global carbon balance, and renewable energy returns in comparison with fossil fuel inputs. There have also been studies assessing the environmental impact of the production of bioproducts and renewable chemicals that can be obtained from biorefinery systems (Yue et al. 2014; Carneiro et al. 2017), among others.
The LCA technique identifies potential environmental impacts, but it does not predict absolute or precise impacts because of, among other factors, uncertainties inherent in data collection and in the modeling of these impacts (Klöpffer 2014). An adequate analysis of these uncertainties helps prioritize the choices made throughout an impact analysis study using LCA. Identifying these uncertainties, as well as their sources and types, allows for increased data transparency and reliability of results, preventing unfeasible or misguided decisions (Baker and Lepech 2009; Curran 2012).
Sources of uncertainty have been defined in Finnveden et al. (2009) as input data for LCA methods that may be uncertain and directly related to several types of uncertainty. Uncertainty in LCA studies can stem from various sources, including low quality and/or absence of data, system limitations, and allocation principles; the time frame adopted for impact assessment; assumptions based on the wrong models; and phenomena in domains subject to statistical fluctuations (Curran 2012;Guo and Murphy 2012;Gargalo et al. 2016;Carvalho et al. 2016).
Types of uncertainty can be understood as aspects that can be wrong (unreliable) if derived from these sources (Finnveden et al. 2009). Some examples of types of uncertainty include choices inconsistent with the goal and scope of the impact assessment study; technological evolution; changes in incentives and government policy; different allocation methods for different supply chains; weather variations; supply of raw materials (feedstocks); and price and cost volatility (Frischknecht et al. 2005;Finnveden et al. 2009;Awudu and Zhang 2012;You et al. 2012;Yue et al. 2014;Mirkouei et al. 2017).
The main goal of the present study was to identify the extent to which uncertainty analysis is addressed in impact assessment studies using the LCA method applied to technological systems defined as biorefineries, through a systematic and bibliometric specialized literature review. In addition, the study indicates the most cited authors and publications, providing their main outputs. Table 1 presents the definitions adopted for the three key terms used in this study: uncertainty analysis, LCA, and biorefinery.
Methodology
The methodology used in this survey consisted of three phases: (1) systematic approach, (2) bibliometric analysis, and (3) discussion of the results of phase 2 (Cronin et al. 2008; Saavedra et al. 2018).
Phase 1 was further divided into three steps: (1) defining the conceptual problem, (2) adopting inclusion and exclusion criteria, and (3) data collection. In step 1, the terms life cycle assessment*, life-cycle assessment*, life cycle analy*, life-cycle analy*, life cycle assessment (LCA), LCA, uncertaint*, uncertaint* analy*, bioref*, and bio-ref* were established as the keywords to qualify the themes uncertainty analysis, LCA, and biorefineries. In steps 2 and 3, only publications written in English from two scientific literature databases, the Web of Science (WoS) and Scopus®, were analyzed, starting from the first year available in each database up to the year of the study. Because the WoS does not allow for direct searches in its keyword field, unlike the Scopus® database, the Boolean operators "or" and "and" were used in the search fields identified as "Topic" (WoS) and "Title-Abstract-Keywords" (Scopus®). These fields are equivalent.
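As an illustration of how such a Boolean search string can be assembled for the "Topic" (WoS) and "Title-Abstract-Keywords" (Scopus®) fields, a minimal Python sketch is shown below; the TS= and TITLE-ABS-KEY() field tags follow the databases' standard query syntax, and the grouping of the keywords into three OR-blocks joined by AND is an assumption about how the terms listed above were combined.

```python
# Minimal sketch: assemble the Boolean query string for WoS ("Topic") and
# Scopus® ("Title-Abstract-Keywords"). The term grouping is an assumption.
lca_terms = ['"life cycle assessment*"', '"life cycle analy*"', '"LCA"']
uncertainty_terms = ['"uncertaint*"', '"uncertaint* analy*"']
biorefinery_terms = ['"bioref*"', '"bio-ref*"']

def or_block(terms):
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(t) for t in
                     (lca_terms, uncertainty_terms, biorefinery_terms))

print("TS=" + query)                     # Web of Science "Topic" search
print("TITLE-ABS-KEY(" + query + ")")    # Scopus® equivalent
```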
Using these parameters, 55 publications were found in the WoS and 33 in Scopus®, with the WoS including 72.7% of the results found in Scopus®. For this reason, the WoS was chosen as the reference base for the computational data analysis tools. However, to ensure that the results of both databases were considered in the study, the remaining nine publications (27.3%) were also included. Thus, the basis of reference consisted of 64 publications. The search was conducted in the third week of October 2018.
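A minimal sketch of how the two exports could be merged and deduplicated to arrive at the final basis of reference is shown below; the file names and record fields (doi, title) are hypothetical, as the actual merging procedure is not described in detail.

```python
# Minimal sketch: merge the WoS and Scopus® exports and drop duplicates.
# File names and field names are hypothetical.
import csv

def load(path):
    with open(path, newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh))

def key(rec):
    doi = (rec.get("doi") or "").strip().lower()
    return doi if doi else rec["title"].strip().lower()

wos = load("wos_export.csv")
scopus = load("scopus_export.csv")

merged = {key(r): r for r in wos}
extra = [r for r in scopus if key(r) not in merged]   # records only in Scopus®
merged.update({key(r): r for r in extra})

print(len(wos), len(scopus), len(merged))   # e.g., 55, 33, 64
```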
In Phase 2, bibliometric analysis was conducted to characterize, structure, and quantitatively analyze the field of research and the impact of a set of researchers, a set of scientific journals, or a given publication (Cobo et al. 2011; Gutiérrez-Salcedo et al. 2017; Marques et al. 2018). This approach has been used in bibliographic reviews in very different fields of knowledge (e.g., van Leeuwen 2006). The present study used VOSviewer (version 1.6.9) and SciMAT (version 1.1.04); both are open-source software packages compatible with the WoS and Scopus® database formats. VOSviewer is a tool for constructing and visualizing bibliometric networks, presenting correlations between the analyzed data (such as authors, keywords, and journals), while SciMAT provides features that allow for data preprocessing, statistical analysis, graphic representation of results, and clustering algorithms for complete bibliometric analysis. A complete review of the features of these tools is presented by Gutiérrez-Salcedo et al. (2017).
The VOSviewer was set up as follows: (1) a thesaurus file with equivalent terms was created; (2) instead of full counting, a fractional counting algorithm was applied, based on the methodology presented by Perianes-Rodriguez et al. (2016); (3) a minimum number of keyword occurrences (Vmin. occurr.), which indicates the number of publications in which the keyword appeared at least once, was set; Vmin. occurr. = 5 was defined to exclude terms with a low number of occurrences (Vmin. occurr. < 5) and to avoid indicating too few terms, which would restrict the analysis set (Vmin. occurr. > 5); and (4) the identified terms were conceptualized as "main keywords." The SciMAT program was set up in three steps: (1) selecting time periods for analysis, (2) eliminating false positives and negatives, and (3) defining calibration parameters for the bibliometric analysis algorithms. In Step 1, the analyzed time frame was split into two periods, 2008-2014 and 2015-2018, which allowed for a good distribution of publications for scientific mapping analysis, with a total of 26 publications in 2008-2014 and 38 publications in 2015-2018. In Step 2, search algorithms for plurals and distance, which are integrated into the tool, were used to group similar keywords such as biofuel and biofuels, biorefinery and biorefineries, LCA and life-cycle assessment, and uncertainty and uncertainties. In this step, keywords that were not meaningful in the context of the study or were too broad in meaning were excluded, such as model, case study, design, challenges, perspective, methodology, technologies, management, and system. Next, in Step 3, the keyword was selected as the unit of analysis, and the other parameters were defined based on the methodologies presented by Cobo et al. (2011), Cavalcanti (2016), Burnard et al. (2017), and Castillo-Vergara et al. (2018).
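To make the fractional counting setting more concrete, the sketch below shows one common convention for fractionally counted keyword co-occurrence, in which each publication contributes a total weight of one distributed over its keyword pairs; the publications and keywords are hypothetical, and this is not necessarily the exact algorithm implemented in VOSviewer.

```python
# Minimal sketch of fractional counting for keyword co-occurrence: each
# publication contributes a total weight of 1, split over its keyword pairs.
# This is one possible convention; publications and keywords are hypothetical.
from itertools import combinations
from collections import Counter

publications = [
    {"LCA", "biofuels", "uncertainty analysis"},
    {"LCA", "biorefinery", "biomass"},
    {"biofuels", "biorefinery", "supply chain", "LCA"},
]

cooccurrence = Counter()
for kws in publications:
    pairs = list(combinations(sorted(kws), 2))
    for pair in pairs:
        cooccurrence[pair] += 1 / len(pairs)   # fractional contribution

for pair, weight in cooccurrence.most_common(5):
    print(pair, round(weight, 3))
```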
Phase 3 consisted of data processing and presenting the results in both quantitative (descriptive) and qualitative (content) formats, following the methodology proposed by Castillo-Vergara et al. (2018) and Cobo et al. (2011, 2012, 2018). Descriptive analysis showed the main related research areas, the main authors, publications, and journals, and the geographical scope of the scientific production. For content analysis, the main correlations and thematic evolutions among the themes uncertainty analysis, LCA, and biorefineries were presented using infographics, keyword co-occurrence networks, strategic diagrams, and analysis and discussion of the most-cited publications for the three main themes identified in each time frame assessed in the present study.

Table 1. Definitions adopted for the three key terms used in this study.
Uncertainty analysis: … (Tang et al. 2015). In addition to this definition, it is also important to consider the sensitivity analysis concept as an approach to identify which input parameters have more or less influence on the final results of the LCA model (Baker and Lepech 2009).
LCA: A method that assesses the environmental impact of a given product or service throughout its lifetime; its goal is to make the environmental performance of products and services comparable, in addition to allowing consumers to choose the ones with the lowest burden. Four phases are applied when using this technique, in conformity with ISO 14040 (principles and structure) and 14044: definition of scope and objective, inventory (data collection), impact assessment, and interpretation (Rebitzer et al. 2004; Guinée et al. 2011; Curran 2012; Klöpffer 2014; Stavropoulos et al. 2016).
Biorefineries: Industrial plants that carry out sustainable conversion processes of biomass into products (such as chemical materials and feedstock) and/or energy (electricity, fuel, and heat). Biorefinery systems can mitigate the impacts of climate change, reduce dependence on fossil fuels, and increase cleaner supply chains based on the sustainable processing of biomass.
Results and Discussion
Descriptive Analysis
Figure 1 depicts the 10 main areas of research identified, with engineering representing 37 studies (58%) of the total of 64 publications in the basis of reference. The studies in this area focused on chemical and biochemical engineering, especially because of the relevance of the challenges, issues, and uncertainties related to the biomass supply chain for energy conversion, biofuels, and value-added coproducts. The figure shows that the total number of documents was higher than that of publications; i.e., the same publication could be related to more than one field of research. Table 2 lists the five main authors with the most published studies, classified according to their respective impact factors (h-index or HI). The first two authors were Fengqi You from Cornell University, with studies about biofuel and bioenergy supply chain modeling and optimization, and Gürkan Sin from the Technical University of Denmark, with studies about uncertainty analysis and risk management in environmental sustainability processes. Considering the total of 17 publications listed in the table, most authors were affiliated with institutions in the USA, accounting for 53% of the scientific production. Table 3 presents the five most-cited publications. The top two were by Cherubini et al. (2009), which addressed issues regarding LCA studies for biofuel and bioenergy and relevant recommendations, and You et al. (2012), which investigated models for optimizing biofuel supply chains. Figure 2 presents the top ten journals, their respective numbers of publications, and impact factors. These journals included 51.5% of the total number of publications, and the Journal of Cleaner Production headed the list with six publications in the research area and an impact factor of 5.651. Figure 3 shows the ten countries with the highest number of publications. The USA and Denmark accounted for 44% and 12.5%, respectively, while Brazil represented 6.3% of the total. The countries identified on the map in descending order of publications are USA (28), Denmark (8), Brazil, China, and Portugal (4 each), and Austria, Holland, France, Spain, and Belgium (3 each). Figure 4 presents an infographic with the main keywords extracted by VOSviewer. The top five terms were LCA (37), biofuels (31), energy (21), and biomass and biorefinery (18 each), while uncertainty and uncertainty analysis resulted in 13 and 5 occurrences, respectively. It is worth noting that the low occurrence of a keyword does not imply irrelevance or lack of correlation with the other main keywords. Figure 5 illustrates the correlations among the main keywords using a co-occurrence network, in which the thickness of the lines indicates the strength of the correlations between the nodes (keywords), calculated by counting the number of publications in which the two keywords occur together. The size of the node reveals its weight (representativeness) in the network, which is determined by the total number of occurrences of the node. The network clearly shows a connection among the three keywords investigated in this study (uncertainty analysis, LCA, and biorefinery) through the strength of correlations among the keywords LCA, biofuels, energy, biomass, and biorefinery and, at a lower level, between the keywords uncertainty analysis and LCA, uncertainty and LCA, and biofuels and biorefinery.
Van Eck and Waltman (2018) established the following rules for adequately interpreting a co-occurrence network: (1) the closer to yellow and green the node, the higher its relevance in the network; and (2) the numeric scale presented in the lower right corner of the figure presents the average number of citations of the publications. To visualize the thematic evolution of the main keywords, Fig. 6 presents a strategic diagram for each time period. A strategic diagram is a bidimensional graphic representation that shows the thematic nuclei (most representative keywords) arranged in four groups and measured according to two criteria: centrality (level of interaction) and density (weight). The greater the centrality and density, the more relevant the thematic nucleus in the context of the study. The groups are represented by the following quadrants: (1) motor themes (upper right quadrant), which are well-developed themes that are important to structuring a field of research; (2) basic and transversal themes (lower right quadrant), which are important to a field of research but not sufficiently developed; (3) emergent or declining themes (lower left quadrant); and (4) peripheral and isolated themes (upper left quadrant), i.e., themes that are internally well-developed but isolated from other themes and of marginal importance to the development of the field of research.
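The two measures behind the strategic diagrams, centrality and density, are usually computed from the equivalence index between keywords (Callon's measures, as used in SciMAT). The sketch below illustrates the calculation under one possible normalization; the occurrence and co-occurrence values, the cluster membership, and the normalization by the number of internal links are all illustrative assumptions.

```python
# Minimal sketch of Callon-style centrality and density for one thematic
# nucleus. e(i, j) is the equivalence index: cooc(i, j)**2 / (occ(i) * occ(j)).
# All values and the cluster membership are hypothetical.
occ = {"LCA": 37, "biofuels": 31, "biomass": 18, "energy": 21}
cooc = {("LCA", "biofuels"): 12, ("LCA", "biomass"): 6,
        ("biofuels", "energy"): 9, ("biomass", "energy"): 4}

def eq(i, j):
    c = cooc.get((i, j)) or cooc.get((j, i)) or 0
    return c ** 2 / (occ[i] * occ[j])

cluster = {"LCA", "biofuels"}          # a hypothetical thematic nucleus
external = set(occ) - cluster

# Density: strength of internal links; centrality: strength of external links.
internal = [eq(i, j) for i in cluster for j in cluster if i < j]
external_links = [eq(i, j) for i in cluster for j in external]

density = 100 * sum(internal) / len(internal)   # one common normalization
centrality = 10 * sum(external_links)
print(f"centrality = {centrality:.2f}, density = {density:.2f}")
```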
Content Analysis
The study of Cobo et al. (2018) established that (1) the main thematic nuclei are formed by well-defined motor and basic themes and that (2) the diameter of the sphere is proportional to the number of publications that cite each thematic nucleus. Figure 6 shows that in 2008-2014, three main thematic nuclei stood out, and of these, biofuels and biorefinery were clearly identified as motor themes, while biomass emerged as a basic theme. In 2015-2018, three main thematic nuclei also emerged, in which LCA was a motor theme, and biofuels and biomass remained as motor and basic themes, respectively. In both periods, the term supply chain evolved as a motor theme (high density and moderate centrality), defined mainly by the analysis of uncertainty scenarios (Nguyen et al. 2014; Osmani and Zhang 2014; Tong et al. 2014); impact assessments involving environmental, social, and economic aspects (Bakshi 2014); and supply chain optimization studies, also involving uncertainty scenarios, especially in terms of the pathways for biofuel and energy production inputs (Malça and Freire 2010; Awudu and Zhang 2012; You et al. 2012; Gong and You 2014).
The term biorefinery, which was considered a motor theme in 2008-2014, turned into a peripheral theme in 2015-2018. This may suggest that research related to the topic shifted from a concept-development scenario to a more consolidated theme, especially because of emerging practical applications and its undisputed relevance in studies related to the bioeconomy, notably in Europe, as can be seen in the European plan of action for the circular economy (European Commision 2015). Considering this consolidation of the term biorefinery, the term LCA evolved as a motor theme in 2015-2018. One factor that explains this thematic evolution is the advent of the circular economy in the business world in mid-2014, indicating sustainability and environmental impact reduction as strategic requirements from the life cycle perspective in each step of the value chain in the production process of products and services, e.g., the feedstock supply chain, transportation logistics, and waste management (Ellen MacArthur Foundation 2014; European Commision 2015; CNI 2018; Saavedra et al. 2018).
The themes uncertainty analysis and uncertainty were not identified as main thematic nuclei in either of the analyzed periods. Together with the low occurrence of these themes (Figs. 4 and 5), this demonstrates that although it is an important issue, uncertainty analysis in LCA is treated as a secondary study and has not been cited as frequently in the keywords of publications, or is not commonly conducted in practice, and when it is, it still represents a lower percentage of studies (Gargalo et al. 2016). The theme technoeconomic analysis saw a decline between the analyzed periods, in addition to low occurrence (Figs. 4 and 5). However, it is necessarily addressed in studies that discuss, among other things, economic performance in the production of products derived from renewable resources (Paap et al. 2013; Pourhashem et al. 2013).
Tables 4 and 5 present the performance of each of the main thematic nuclei (TN), considering the following measurements: number of publications (NP), citations (CT), h-index (HI), centrality, and density. For each TN, the five most-cited publications were included based on the bibliometric analysis results of the SciMAT tool. The data are arranged by level of density. The three themes with the best performance in 2008-2014 were biofuels, biorefinery, and biomass, while LCA, biofuels, and biomass stood out in 2015-2018. Seven of the publications listed in Tables 4 and 5 (35%) addressed uncertainty analysis aiming to increase the reliability of data obtained with LCA, as emphasized by Baker and Lepech (2009), and some of the studies also cited uncertainty-related sources and specific methods. The highlights of these publications are presented as follows: Cherubini et al. (2009) pointed out the difficulty of completely avoiding uncertainty when applying LCA to bioenergetic systems, because some of the parameters relevant to these systems are not well known, such as nitrous oxide emissions and changes in soil carbon reservoirs; You et al. (2012) suggested further studies to investigate other issues, such as uncertainties related to fluctuations in the demand for ethanol, interruptions in biomass supply, the emergence of more efficient conversion technologies, and changes in government incentives, among others, that are involved in biofuel supply chains, in order to solidly create and operate these chains; Yue et al. (2014) emphasized the relevance of LCA in assessing the environmental impacts on these chains and highlighted the importance of uncertainty analysis to ensure the reliability of LCA results. In addition to the correlation between sources and types of uncertainty, usually found within the scope of biofuel and bioenergy supply chains, the authors also suggested potential approaches for uncertainty analysis, including scenario analysis, stochastic programming, robust optimization, and fuzzy methods; Čuček et al. (2014) presented a multiperiod synthesis and optimization model for integrated biorefinery systems (biofuels) and suggested that this model can improve the uncertainty analysis for these biorefineries; Carneiro et al. (2017) emphasized the relevance of addressing potential uncertainties in the algae biomass supply chain, which stem from the various production pathways analyzed, especially when it comes to producing third-generation biofuels, because of this technology's stage of development and the many existing pathway options; Cong et al. (2017) addressed the need to discuss uncertainties related to the option of CH4 (methane) compared to diesel in the transport sector; some examples of the types of uncertainty that should be discussed relate to vehicle cost and storage infrastructure in the CH4 supply chain; and Maes et al. (2015) concluded that when there are uncertainties about the labeling of residue that is added to the process, the results do not reflect a wide-ranging view of the production chain, they integrate low impacts regarding toxic emissions, and they reduce the precision of the allocation of these impacts to the products and coproducts generated. The opposite is true when inflows are clearly labeled as value-added residue. This shows the need to be careful and precise when dealing with different sources and types of uncertainty in material supply pathways, as well as with the input parameters of impact assessments.
Table 6 presents the other 13 studies (65%), directly related to challenges and approaches for uncertainty analysis in LCA applied to biorefinery systems and/or their components (e.g., supply chain, economic analysis, environmental impact), considering the following characteristics: reference (RF), type of publication (TP), strategy adopted (ST), approach (AP), and tool used. The main aspects of the uncertainty discussions are presented as follows.
From 2008 to 2014
Cherubini and Ulgiati (2010) presented an investigation applying sensitivity analysis to various input parameters and the resulting greenhouse gas emissions, measuring the effects of land use changes and of corn stover crop residue removal and its use as a raw material in biorefineries. The authors approached this technique with the explicit goal of reviewing the results of the analysis, identifying the parameters ("key parameters") that most influenced the results, and verifying the precision of the data. They concluded that, for the LCA study presented in the publication, the most uncertain parameter was the factor used to estimate the reduction of the carbon reservoir as a result of corn stover removal. This reduction turns out to be the key parameter, which strongly affects the final balance of greenhouse gas emissions for a biorefinery system. Cherubini and Jungmeier (2010) applied a sensitivity analysis with the same goal as Cherubini and Ulgiati (2010), i.e., to review the results, identify the parameters that most influence them (change in soil C pools and N2O soil emissions), and check the data accuracy. Both studies applied LCA to biorefinery case studies. Floudas et al. (2012) analyzed the optimization of thermochemical processes for fuel production from three feedstocks (coal, natural gas, and biomass). The authors presented a review of important approaches for these feedstock supply chains, among them LCA, sensitivity/uncertainty analysis, and related approaches such as MC, MILP, MINLP, and FP. They also indicated that uncertainty is a "challenge and opportunity" component to be addressed in energy balance processes for feedstock supply chains. Osmani and Zhang (2014) presented a review of uncertainty scenarios such as biomass supply chain, bioethanol (lignocellulosic fuel) demand, feedstock purchase price, and bioenergy product sale price. This research focuses on the financial and environmental performance of the bioethanol feedstock supply chain, in particular on minimizing C emissions. The study also emphasizes that uncertainty scenarios are better evaluated by stochastic methods (e.g., MC) than by deterministic methods (e.g., linear optimization models such as MILP and MINLP), presents results comparing these two approaches, and proposes a hybrid model (two-stage SMILP). Like Cherubini and Ulgiati (2010) and Cherubini and Jungmeier (2010), the authors also applied a sensitivity analysis to verify the influence of uncertain parameters. Nguyen et al. (2014) focused on the supply chain of lignocellulosic fuel production, identifying corn stover as the main feedstock in Kansas, USA. The use of the MC approach was discussed by applying the Oracle Crystal Ball® tool (Oracle Corporation 2017) for the sensitivity and uncertainty analysis of various uncertainty scenarios around this supply chain in biorefinery systems. In the same way as the previous study, Paap et al. (2013) detailed the strategy used to identify the influence of parameters related to the pretreatment, hydrolysis, and fermentation steps in the evaluation of uncertain scenarios for a technology that is under development and unproven at commercial scale. The authors evaluated the production process of products derived from switchgrass feedstock and analyzed the influence of these parameters on the production process by using an MC approach in the Crystal Ball® add-in for Microsoft Excel®.
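To illustrate the Monte Carlo approach that several of these studies applied through Crystal Ball®, the sketch below propagates uncertain input parameters into a simple GHG-balance indicator; the model structure and the probability distributions are purely illustrative assumptions and are not taken from the cited papers.

```python
# Minimal sketch of Monte Carlo uncertainty propagation for an LCA indicator.
# The model (GHG balance of a corn stover pathway) and the input distributions
# are illustrative assumptions, not values from the cited studies.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Uncertain inputs (hypothetical distributions)
yield_t_ha = rng.normal(7.0, 0.8, n)                 # stover yield, t ha-1
soil_c_loss = rng.triangular(0.1, 0.25, 0.5, n)      # t CO2e per t removed
process_ef = rng.lognormal(mean=np.log(0.30), sigma=0.15, size=n)  # t CO2e t-1

ghg_per_ha = yield_t_ha * (soil_c_loss + process_ef)  # t CO2e ha-1

p5, p50, p95 = np.percentile(ghg_per_ha, [5, 50, 95])
print(f"GHG balance: median {p50:.2f}, 90% interval [{p5:.2f}, {p95:.2f}] t CO2e ha-1")
```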
From 2015 to 2018
Souza et al. (2015) presented an interesting case study involving an integrated biorefinery system. The study compared the impact assessment results for the traditional ethanol production system, the production of algal biomass-derived biodiesel, and the proposed biorefinery system. Because the specificities of each system could cause inconsistencies in the results, sensitivity and uncertainty analyses were conducted for the input parameters using the traditional MC approach in Oracle Crystal Ball®. The authors emphasized that the analyses were based on projected performance, which indicates the presence of high levels of uncertainty that must be treated. Farzad et al. (2017) presented a sensitivity analysis related to price parameters aiming to demonstrate the economic viability of installing a biorefinery to produce lignocellulosic sugarcane biofuel. The approach applied for the sensitivity analysis was MC, and the LCA was performed with SimaPro. Boldrin and Astrup (2015) discussed how uncertainty analysis can be applied to distinct sustainability policies (e.g., the European Union Renewable Energy Directive (RED) and the United States Renewable Fuel Standard (RFS)), with an MC approach and the SimaPro tool applied to assess uncertain scenarios around the production system of biofuels in biorefineries. Agostinho et al. (2015) adopted an uncertainty analysis with an MC approach through the MC-Simulation Add-In for Microsoft Excel® developed by Barreto and Howland (2010). Like the previous studies, the authors performed an uncertainty analysis to assess potential uncertainties in input parameters, in this case related to industrial cellulase enzyme production. As in Osmani and Zhang (2014), the authors briefly discussed the differences between stochastic and deterministic models for evaluating uncertainty scenarios in LCA studies. Bairamzadeh et al. (2018) presented a robust MILP approach to verify sources and types of uncertainty in the lignocellulosic biofuel (bioethanol) supply chain. Through a literature review, the authors identified the main sources and types of uncertainty and the approaches that can be applied to this supply chain. Pérez-López et al. (2018) also addressed an analysis of uncertain parameters related to the economic and environmental assessment of a biorefinery system, in the same way as You et al. (2012), Yue et al. (2014), Cong et al. (2017), and Farzad et al. (2017). In this study, the authors evaluated a microalgal biorefinery to produce renewable fuel and coproducts. Considering that the discussion addressed technologies under development, there were potential uncertainty scenarios to be analyzed. The authors investigated the influence of these uncertainties by applying sensitivity and uncertainty analysis, using an MC approach through the @RISK add-in for Microsoft Excel®. Gong and You (2017) presented a processing network with over 46,000 alternative production pathways for fuels and value-added products based on algae biomass, indicating a robust mathematical model to support decisions about obtaining fuels and microalgae-based bioproducts with a maximum return on investment (ROI). For a study with this scope, an elevated level of uncertainty in the model's input parameters is to be expected. To process these uncertainties, the authors developed a deterministic model, 2-Stage ARMIFP, in order to identify, within the network and based on the indicated parameters, the optimal pathway that yields the highest ROI.
An optimal processing pathway was proposed with the following characteristics: open pond, polyelectrolyte-based flocculation, filtration, high-pressure homogenization, supercritical CO2 extraction, and transesterification catalyzed with sodium methoxide, among others. The model also indicated that the highest productivity would be 25.94%. For future studies, the authors indicated the need for species of microalgae with higher oil productivity and high biomass concentration.
Final Considerations
The present study, which utilized a systematic and bibliometric approach to a literature review, resulted in a broad scientific mapping of how the theme uncertainty analysis is discussed, together with LCA and biorefinery. The results clearly indicate that in the last 10 years, uncertainty analysis has been treated primarily as a way to help validate the results of LCA studies, when it is applied in scenarios that involve multiple sources and different types of uncertainties, as is the case for biorefinery systems. A total of 64 scientific publications came from the WoS and Scopus® databases, and 72.7% of the publications from Scopus® were also listed in the WoS. Thus, the WoS was considered the basis of reference for the computational analysis. The publications included research carried out in the last 10 years and were divided into two time periods: 2008-2014 and 2015-2018. The results were presented both in quantitative (descriptive) and qualitative (content) formats. Regarding the descriptive results, the studies were mostly concentrated in the fields of chemical and biochemical engineering. The journal with the highest number of publications was the Journal of Cleaner Production, edited by Elsevier, which addresses research and practices in sustainability, environment, and cleaner production. In geographical terms, as observed for the authors with the greatest impact factors and most-cited publications, most studies were from the USA, with 44%, followed by Europe, with 42%, while Brazil accounted for 6.3% of the scientific production found in the present study.
Regarding content analysis, the correlations and evolution of the studied themes were investigated through infographics, co-occurrence networks, and strategic keyword diagrams. Graphic analysis showed that in 2008-2014, the theme biorefinery stood out, a concept that gained greater consolidation as a motor theme in the context of bioeconomy. In 2015-2018, the motor theme was LCA, which was strongly correlated to studies about bioenergy, with relevant discussions about the sustainability and viability of fuel and energy production pathways through renewable resources. This study revealed low occurrence of uncertainty analysis in both periods. This low occurrence does not indicate the theme's irrelevance, but only that the studied publications treated uncertainties as a secondary subject, serving to support the validation of results. The results displayed in the graphic analysis were explored by analyzing and discussing the most-cited publications that were related to the main thematic nuclei of each period. The strategy used in this study contributed to a detailed understanding and scientific mapping of various issues and challenges when handling uncertainties in order to minimize risks and errors in environmental impact assessment diagnoses using the LCA technique. It can also help identify the various sources and types of uncertainty, as well as possible approaches to LCA, such as statistical and probabilistic models, and fuzzy methods, among others.
Future studies in the area should include in-depth investigations of the use of advanced probabilistic techniques, such as probabilistic ontologies, to add value to investigations of knowledge representation and reasoning about uncertainties in all stages of LCA studies, especially in the context of complex environments such as biorefinery systems. There are still few guidelines provided in the literature in the area of probabilistic ontologies. However, details and discussions of this field of research and its evolution have been addressed by Carvalho et al. (2011, 2013, 2016, 2017) and Riali et al. (2019). | 2020-01-12T14:27:11.032Z | 2020-01-11T00:00:00.000 | {
"year": 2020,
"sha1": "36ca760f3d8955b84be0a40bee3c0654e3faf455",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41660-019-00103-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "9b2745939a975ff83a0205ad0659042bf38e63af",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4017818 | pes2o/s2orc | v3-fos-license | Microglial dynamics during brain development
Microglia are the resident immune cells of the central nervous system (CNS). In the normal state, microglia have a ramified shape and continuously survey the conditions of the brain. In response to various stimuli, some microglia change to an amoeboid shape. This type of microglia is motile and produces several secretory proteins, including inflammatory cytokines and neurotrophic factors, which regulate brain homeostasis. Microglial morphology also changes a great deal in concert with environmental factors during brain development, and it is related to microglial migration and proliferation, which contribute to the establishment of precise synaptic connectivity and neural circuits. In this review, we focus on the fundamental concepts of microglial fate during brain development, and discuss whether the molecular mechanisms that control microglial morphology are linked to microglial functions.
Microglial behavior changes during brain development: Microglia are CNS-resident macrophages that exhibit heterogeneous and pleomorphic morphology. Normally, the activation status of microglia is determined by their local environment. It is well established that microglial morphology is inextricably linked to their functions (Kettenmann et al., 2011). To date, the traditional view of microglial morphology, ranging between "amoeboid" and "ramified," has been embraced. In the healthy adult brain, microglia continually extend and retract their ramified processes; this state is called "ramified." In contrast, at the time of neuronal injury in the adult brain, microglia retract their protrusions; this state is called "amoeboid," and involves microglia migrating and accumulating at the site of damage. Furthermore, some microglia have "intermediate" forms, which have crossover functions between "amoeboid" and "ramified." Until recently, it had been thought that ramified microglia are resting forms and amoeboid microglia are activated ones. However, novel technologies, such as single-cell RNA sequencing and two-photon microscopy, have revealed that the relationship between microglial morphology and function is not simple, for several reasons. First, microglia comprise an unexpectedly heterogeneous population, including in morphology, in a region- and age-dependent manner. Second, different types of microglia are observed in the brain even though they have a similar morphology. Finally, microglial polarization, a concept derived from M1/M2 polarization in macrophages, is still under discussion, although it is a fascinating approach. Therefore, the molecular mechanisms that link microglial morphology to microglial functions require further discussion.
It is noteworthy that this amoeboid shape is observed in both the adult and the developing brain. During neonatal development, the morphology of nascent microglia is similar to that of amoeboid cells. These microglia initially elongate their ramified processes to coordinate with potential intrinsic and extrinsic factors with time, and then become ramified microglia (Perez-Pouchoulen et al., 2015). Thus, it is considered that changes in microglial morphology occur in parallel with normal brain development. During brain development, microglial activity is regulated by several transcription factors that modulate a variety of differentiation processes. One of the putative candidates involved in the regulation of microglial properties is Runx1. Runx1 is first observed at embryonic day (E)6.5, and is increased around E7.5 in the yolk sac region. These Runx1-expressing cells in the yolk sac infiltrate into the brain and differentiate into mature microglia (Ginhoux et al., 2010). Runx1-expressing cells (i.e., nascent microglia) exhibit amoeboid morphology. In contrast, the morphology of nascent microglia gradually transforms into a ramified shape around 2 postnatal weeks, in inverse proportion to Runx1 expression. Therefore, Runx1 may be a potent candidate that controls microglial fate, as it is associated with cell shape in normal brain development. It is also known that PU.1 plays an important role in microglial differentiation. PU.1 is a member of the E26 transformation-specific (ETS) family of transcription factors that regulates a variety of cellular functions, including migration and differentiation. Although PU.1 is normally expressed in both ramified and amoeboid microglia, PU.1 deficiency impairs the maturation of yolk sac-derived microglia. Thus, PU.1 could be considered an interesting candidate that controls the microglial differentiation involved in their morphology. The remaining question is which signaling pathway is involved in regulating these factors. Previous studies reported that colony stimulating factor 1 receptor (CSF1R)-mediated signaling is essential for microglia survival (Elmore et al., 2014), and modulates some transcription factors, including Runx1 and PU.1. Upon stimulation of CSF1R, several intracellular signaling pathways, such as phosphatidylinositol-3-kinase (PI3K)-Akt signaling, are activated. Indeed, Akt changes the phosphorylation status of CCAAT/enhancer binding protein (C/EBP), which is a putative cofactor of Runx1. It is possible that Akt modulates C/EBP function through its phosphorylation. Moreover, Akt increases PU.1 transcriptional activity through its phosphorylation. Thus, Akt could act as a mediator between CSF1R and these factors in microglia. Although Akt regulates microglial shape in a context-dependent manner, other mediators that are involved in microglial regulation must exist. Investigating these mechanisms in the developing brain is thus a challenging process.
The functional architecture of the neural circuits regulated by microglia: Recent studies have reported that pathological stimuli trigger microglial activation. Severe brain injuries, such as ischemic, excitotoxic, and neurodegenerative insults, result in microglial activation, followed by changes in several aspects of these cells. In vivo imaging analyses have revealed that ramified microglia are highly motile even in normal conditions (Nimmerjahn et al., 2005). Usually, microglial processes survey their microenvironment; upon sensing a brain injury, the cells are transformed into a different type of microglia. This machinery is intimately connected to brain homeostasis, and its deficiency aggravates the environment of the brain. Interestingly, microglial behaviors are gradually altered during the developmental stages. During this period, microglia play important roles in the construction and maintenance of neuronal connectivity via synaptogenesis and synapse pruning. It has been shown that microglial properties change dramatically around the first 3 postnatal weeks (Perez-Pouchoulen et al., 2015). Interestingly, it is believed that microglial maturation occurs around 2 to 3 postnatal weeks as a result of a change in the gene expression pattern. At this time, postnatal mice experience enormous environmental stresses compared to the fetal period. For instance, postnatal mice usually open their eyes around postnatal day 10. After eye opening, light stimulation activates the optic nerve, which contributes to the formation of robust synapses. If these neurons do not receive an adequate amount of light stimulation, however, they cannot form functional synapses. At this time, microglia sense immature synapses and engulf and eliminate them (Schafer et al., 2012). Consequently, environmental stimulation contributes to the establishment of sophisticated neural circuits. As postnatal mice are exposed to enormous stimulation, this process may also contribute to the production of microglial diversity through their region- and time-dependent activation.
Figure 1. Microglia are differentiated from primitive macrophages, which are derived from the embryonic yolk sac during development. After infiltrating into the brain around embryonic day (E)8.5, they differentiate into mature microglia. Microglia have an amoeboid morphology when they infiltrate into the brain; however, they change into a ramified morphology by postnatal month 1. CX3CR1: CX3C chemokine receptor 1; CSF1R: colony stimulating factor 1 receptor.

Several genes in microglia function as key factors that control neuronal connectivity. For instance, CX3C chemokine receptor 1 (CX3CR1), which is one of the chemokine receptors and is expressed only in microglia in the CNS, regulates synapse pruning around 2 postnatal weeks. Loss of CX3CR1 increases dendritic spines and attenuates the frequency of spontaneous excitatory postsynaptic currents (sEPSCs) in CA1 pyramidal neurons (Paolicelli et al., 2011), suggesting that a defect in CX3CR1 causes immature neuronal connectivity. Furthermore, CX3CR1 plays an important role in chemotaxis in the brain. After sensing abnormal debris such as Aβ, microglia are recruited to and engulf it, protecting the brain from its toxicity. Since phagocytosis and chemotaxis require actin reorganization, it is likely that CX3CR1 is implicated in the regulation of the cytoskeleton. Indeed, CX3CR1 deficiency modulates microglial morphology in response to stroke (van der Maten et al., 2017). Therefore, CX3CR1 may control synapse pruning, chemotaxis, and morphology via regulating the cytoskeleton. P2Y12, which is a purinergic receptor, also regulates synaptic pruning in the developing brain. Usually, P2Y12 is downregulated after inflammatory stimulation and rapidly increases microglial process dynamics. Upon blocking neural activity by monocular deprivation, microglial hyper-ramification is observed in the visual cortex (Sipe et al., 2016). Moreover, P2Y12 disruption decreases the number of ramified microglia induced by monocular deprivation, suggesting that microglial morphology in the visual cortex is dependent on light stimulation. In addition, recent studies have reported that the complement pathway is involved in synapse pruning when neural circuits are connected in the developing brain. Complement component 1q (C1q), which is an initial protein of the complement cascade, is synthesized from retinal ganglion cells, and its metabolic product, complement 3 (C3), stimulates microglia through complement receptor 3 (CR3) on the microglial membrane, leading to enhancement of synapse pruning and phagocytosis (Stevens et al., 2007). Interestingly, transforming growth factor-β (TGF-β) secreted from astrocytes upregulates C1q expression in neurons. These findings demonstrate that the complement pathway controls microglia in coordination with neurons and astrocytes. Another interesting target involved in microglia dynamics is triggering receptor expressed on myeloid cells 2 (TREM2). Usually, TREM2 is expressed in microglia and acts as a sensor of various lipids associated with damaged neurons. In addition, TREM2 is known to be a risk factor for Alzheimer's disease; mutation in TREM2 increases the accumulation of amyloid beta (Aβ) in the brain. The TREM2 expression pattern during the postnatal period from day 1 to 14 changes gradually in a region-specific manner (Chertoff et al., 2013). The TREM2 adaptor protein, DAP12, is also associated with CSF1R.
Although CSF1R activates survival signaling in microglia, the TREM2-DAP12 complex enhances phagocytosis and the inflammatory response. Therefore, it is likely that there is some crosstalk between TREM2 and the CSF1R pathway that is associated with regulating microglial functions.
Conclusions:
Here, we examine the broad aspects of microglial implications in brain development (Figure 1). Several decades ago, we did not have a good approach for observing living microglia in vivo. Technological advancements, however, now allow us to observe them in the living state. For instance, the two-photon microscope makes it possible to observe living microglia on the surface of the mouse brain. In addition, simple imaging techniques are a good approach for the quantification of morphological changes and chemotaxis (Tsuruta et al., 2017). We are also able to manipulate neuronal and glial cell activity using optogenetics. These novel techniques are useful for unveiling the unknown functions of microglia and shedding light on new ways to understand the mechanisms that link microglia to the architecture of the neural network.
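As one example of the kind of simple quantification mentioned above, the sketch below computes a basic ramification (circularity-type) index from a segmented binary cell mask; this is an illustrative metric and workflow, not the specific method used in the cited studies.

```python
# Minimal sketch: quantify microglial morphology from a segmented (binary)
# cell mask with a simple ramification index based on perimeter and area.
# Amoeboid (round) cells approach 1; ramified cells score higher.
# This is an illustrative metric, not the method of the cited studies.
import numpy as np
from skimage import measure

def ramification_index(binary_mask):
    """Perimeter**2 / (4*pi*area): ~1 for a circle, larger for branched shapes."""
    props = measure.regionprops(measure.label(binary_mask.astype(int)))
    cell = max(props, key=lambda p: p.area)      # largest object = the cell
    return cell.perimeter ** 2 / (4 * np.pi * cell.area)

# Example: a synthetic round ("amoeboid-like") mask
yy, xx = np.mgrid[:64, :64]
round_cell = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2
print(ramification_index(round_cell))   # close to 1
```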
We apologize to the many authors whose papers could not be cited due to space limitations. I would like to thank the members of our laboratory for their helpful discussion, and Drs. Ban Sato and Darina Obukhova from University of Tsukuba for critical reading of this manuscript. | 2018-04-03T01:39:02.645Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "a3dc6b39b2aa5893a85127bb00a06afff208e878",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1673-5374.226386",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a3dc6b39b2aa5893a85127bb00a06afff208e878",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |